Okay, so no one will buy into a brand new ICT technology without doing some major research and testing, but there's little doubt that organisations are under increasing pressure to deploy new solutions in a much shorter timeframe than previously. And I'm not talking about the actual time taken between acquiring and commissioning some new servers or a new application. No, I mean that where once a technology breakthrough would take years to gain wide-scale acceptance (think about how long Solid State Disk was around before it 'hit the big time'), because end users were 'suspicious' of the new thinking (and yes, the high price tag played a part as well), now, no sooner has something been launched than there's real pressure on organisations to start deploying it, if only to stay competitive in the market.
Now, the ‘old-fashioned’ way, as typified by the delightful term ‘server-hugging’, seems remarkably sluggish and uncommercial to our digital eyes. Why hold on to slow, unreliable and very costly bits of metal, when they are being replaced by much faster, much more reliable and less expensive alternatives? Well, in truth, the fear culture held sway. Buying tried and tested technology was the safe option. It might not be the best value for money, but it had served the organisation well for so many years, why ‘risk’ change?
And to the IT traditionalists, the modern way of investing in any and every latest technology development smacks of a high risk strategy that could fall apart at the seams if the solutions prove to be rather underwhelming once the hype surrounding them starts to fade.
Everyone talks about the tech start-ups who turned the received wisdom on its head and succeeded on a spectacular scale. And almost every industry sector now has one or more major player that is a pure play digital company, as opposed to a legacy organisation that is trying to get to grips with the online, always connected world. So, embracing new ideas and technologies clearly can work, but it's worth remembering that many of these digital-only organisations were a long time in the planning, and required significant investment funding, plenty of which is still to be paid back.
In summary, there are no short cuts to business success in the digital world, and the recent example of Uber in London makes it clear that people are just as crucial as technology when it comes to business transformation. So, don't be panicked into making unwise technology investments. Take the time to understand your market and what your customers want from you now, and only then begin to acquire the technologies and personnel to make sure that your business stays relevant, or, better still, leads your industry.
One in four IT organizations in Europe is already operating tiered applications spanning on- and off-premise environments. However, only a few pathfinders are making the necessary technology and process adjustments to make this viable in the long run, according to a recent IDC survey of more than 800 IT and line-of-business decision makers in 11 European countries.
IDC confirmed that the most typical environment spanning across clouds involves front-end applications hosted on a public cloud, connecting to back-end systems located on-premise — 31% of the sample reported this approach. The other approach was "bursting" capacity into off-premise (8% of the sample, but fast-growing compared with past research). 40% of the respondents segregated on- versus off-premise environments and 20% said they only run applications on-premise.
"Connecting cloud environments with ad hoc bridges in a hybrid fashion won't be enough in 2018. Nor will standardizing on one external provider, at least for large or innovative companies. Developers and line of business require 'best of breed,' and the purchasing department wants to avoid being locked in," said Giorgio Nebuloni, research director, IDC European Infrastructure Group.
According to the survey, carried out in May 2017, only 20% of the line-of-business representatives interviewed agreed that standardizing on one or two large IaaS/PaaS or SaaS providers would work, versus more than 30% among IT respondents. Adding a software layer to protect against cloud lock-in was the most commonly adopted strategy, according to both IT and line of business.
"Digital innovators such as ING Bank and Siemens want to consume cloud content from several cloud locations while maintaining maximum flexibility. This gives incredible freedom to users, but it creates challenges for CIOs. We believe a multicloud strategy based on hiring staff with negotiation skills, expanding investments in automation software, and revising cross-country connectivity options is a must for IT departments supporting innovative organizations," said Nebuloni.
"We expect several new technologies and strategies to further digitalize organizations by transforming the supply chain," said Noha Tohamy, research vice president and distinguished analyst at Gartner. "In this year’s Hype Cycle (see Figure 1) we’ve picked out several technologies and strategies that should be on the supply chain leader’s radar, as they will reach maturity in the next five years."
Figure 1. Hype Cycle for Supply Chain Strategy, 2017
Source: Gartner (September 2017)
At the Peak
Gartner expects two entries to enter mainstream use in the next two to five years:
Social learning platforms address the large number of long-term employees retiring, capturing their knowledge to share with younger workers in a way that can be scaled across multiple business units and geographies. A more fluid and continuous learning experience that appeals to younger employees has obvious benefits, but chief supply chain officers (CSCOs) should avoid a siloed 'shadow IT' approach. Instead, they should integrate social learning within the context of an organization-wide IT program to maximize the benefits, though this will take between two and five years to become feasible for most organizations.
Solution-centric supply chains (SCSCs) offer customers a personalized collection of products, data and services from a digitally-enabled ecosystem of partners. It’s an approach seen mainly in high-tech, medical, consumer and industrial sectors at the current time.
In the Trough
In this phase Gartner expects big data to achieve mainstream maturity in the next two to five years.
Some supply chain organizations have piloted big data technologies to use larger datasets, and others are incorporating structured data from external sources or trading partners in areas like collaborative demand fulfillment and supplier performance management.
There is now a post-hype realization that more data does not necessarily equate to better insights. Today, big data is seen as an enabler, but organizations are focusing on improving analytics and integration to drive big data strategies to productive mainstream use.
On the Slope
There are five technologies on the slope that Gartner expects to mature fully within the next five years.
Supply chain visibility (SCV) and centers of excellence (COEs) are competencies that will soon be standard business practice. SCV is about generating timely, accurate and complete views of plans, events and data across the entire supply chain including external partners. Many organizations currently lack an end-to-end approach to SCV, but as more mature and capable Internet of Things (IoT), data and analytics solutions become available, SCV will head toward mass adoption within two to five years.
A COE develops ways to find, design, develop and implement best practices across the business. While Gartner research indicates that 78 percent of supply chain organizations have one or more COE, there are reasons why the COE still hasn’t reached the plateau. One is that where COEs have been adopted, they often lack structure due to weak mandates, uncertain missions and lack of clear governance and performance metrics. As more organizations advance their expertise, Gartner expects the COE to move to productive mainstream use within two years.
Diagnostic analytics in the supply chain seeks to explain why something — an event or a trend — happened. Diagnostic analytics lags the adoption of descriptive analytics, which has reached the plateau phase of the cycle. This is because diagnostic analytics requires a clear understanding of the intertwined relationships in a supply chain, which must first be provided by descriptive analytics. Improvements in the maturity of analytics solutions are now contributing to wider adoption, as well as the increased availability and integration of real-time data in IoT-enabled supply chains.
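The distinction between the two layers can be made concrete with a toy example (the shipment data and the "carrier" factor here are hypothetical, purely for illustration): descriptive analytics reports what happened overall, while diagnostic analytics breaks the same metric down by a candidate cause to explain why.

```python
# Hypothetical shipment records for illustration only
shipments = [
    {"carrier": "A", "delay_days": 0}, {"carrier": "A", "delay_days": 1},
    {"carrier": "B", "delay_days": 4}, {"carrier": "B", "delay_days": 5},
]

# Descriptive: what happened — the average delay across all shipments
overall = sum(s["delay_days"] for s in shipments) / len(shipments)

# Diagnostic: why it happened — the same metric split by a candidate cause
by_carrier = {}
for s in shipments:
    by_carrier.setdefault(s["carrier"], []).append(s["delay_days"])
cause = {c: sum(d) / len(d) for c, d in by_carrier.items()}
```

The descriptive view (an overall average of 2.5 days) masks what the diagnostic breakdown reveals: one carrier accounts for nearly all of the delay, which is exactly the kind of intertwined relationship the paragraph above describes.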
Supply chain management (SCM) business process as a service (BPaaS) is an external service that delivers standardized processes through a cloud-sourced technology platform. Examples include compliance and regulatory reporting, freight forwarding, customs processing and aftermarket services.
The time is right for supply chain leaders to monitor the SCM BPaaS market for opportunities to gain incremental capabilities and efficiencies in their organization without needing to license new software or hire new employees.
Targeted supply chain segmentation is a technique that has been used in business for decades. Examples of segmentation include categorizing customers or suppliers as high priority or treating parts or inventory differently based on volume. Gartner expects that within two to five years there will exist a documented consensus approach to targeted supply chain segmentation that will drive mainstream adoption.
Reached the Plateau
Descriptive analytics is the application of analytics to describe what is happening, or has happened. It is the only technology that has reached the plateau of productivity in this Hype Cycle.
While some organizations still use enterprise-wide business intelligence tools from other business units like sales or finance, generating reports this way is too time intensive and does not provide the right level of insight in a timely manner.
Descriptive analytics capabilities spanning reporting, dashboards, supply chain visibility, data visualization and alerts are already improving the level of insight for many organizations, meaning mainstream adoption is less than two years away.
"Looking further out than five years, we can see even more exciting technologies coming over the horizon," said Ms. Tohamy. "We expect that artificial intelligence, machine learning, corporate social responsibility and cost-to-serve analytics will all drive significant shifts in supply chain strategies within the next decade."
IoT boosting spending on storage, edge, infrastructure and cloud.
451 Research unveils the latest trends in IoT use and finds that the massive amounts of data generated by IoT are already having a significant impact on enterprise IT. In the latest Voice of the Enterprise: IoT – Workloads and Key Projects, analysts find that organizations deploying IoT are planning increases in storage capacity (32.4%), network edge equipment (30.2%), server infrastructure (29.4%) and off-premises cloud infrastructure (27.2%) in the next 12 months, to help manage the IoT data storm.
Analysts find that spending on IoT projects remains solid, with 65.6% of respondents planning to increase their spending in the next 12 months and only 2.7% planning a reduction.
Today, IT-centric projects are the dominant IoT use cases, particularly datacenter management and surveillance and security monitoring. Two years out, however, facilities automation will likely be the most popular use case, and line-of-business-centric supply chain management is expected to jump from sixth to third place.
Finding IoT-skilled workers remains a challenge since the last IoT survey in 2016, with almost half of respondents saying they face a skills shortage for IoT-related tasks. Data analytics, security and virtualization capabilities are the skills most in demand.
451 Research finds that the collection, storage, transport and analysis of IoT data is impacting all aspects of IT infrastructure. Most companies say they initially store (53.1%) and analyze (59.1%) IoT data at a company-owned datacenter. IoT data remains stored there for two-thirds of organizations, while nearly one-third of the respondents move the data to a public cloud.
Researchers find that, once IoT data moves beyond operational and real-time uses and the focus is on historical use cases such as regulatory reporting and trend analysis, cloud storage gives organizations greater flexibility and often significant cost savings for the long term.
Despite this centralization of IoT data, the survey also finds action at the edge. Just under half of respondents say they do IoT data processing – including data analysis, data aggregation or data filtering – at the edge, either on the IoT device (22.2%) or in nearby IT infrastructure (23.3%).
“Companies are processing IoT workloads at the edge today to improve security, process real-time operational action triggers, and reduce IoT data storage and transport requirements,” said Rich Karpinski, Research Director for Voice of the Enterprise: Internet of Things. “While some enterprises say that in the future they will do more analytics – including heavy data processing and analysis driven by big data or AI – at the network edge, for now that deeper analysis is happening in company-owned datacenters or in the public cloud.”
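The kind of edge processing Karpinski describes, filtering and aggregating raw readings before they travel to the datacenter or cloud, can be sketched in a few lines. This is an illustrative example, not any vendor's implementation; the function name and threshold are assumptions.

```python
def filter_at_edge(readings, threshold):
    """Split raw sensor readings at the edge: forward only anomalies
    in full, and compress the rest into a compact summary."""
    anomalies = [r for r in readings if abs(r) > threshold]
    summary = {"count": len(readings),
               "mean": sum(readings) / len(readings)}
    return anomalies, summary

# Six raw readings arrive at the edge device...
readings = [0.1, 0.2, 5.7, 0.3, -4.2, 0.1]
anomalies, summary = filter_at_edge(readings, threshold=1.0)
# ...but only the two anomalous values plus a small summary need to
# cross the network, cutting storage and transport requirements.
```

Even this trivial sketch shows why the survey's respondents process at the edge: two values and a summary replace six raw readings, and the saving grows with sensor count and sample rate.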
The European Round Table of Industrialists (ERT) has published its report on building and transforming skills for a digital world.
Gigamon VP of Corporate Marketing, Julie Gibbs, discusses how Gigamon's Visibility Platform is the key to securing, managing & understanding the traffic on your network.
Rapid growth in cloud adoption is driving increased interest in securing data, applications and workloads that now exist in a cloud computing environment. The Gartner, Inc. Hype Cycle for Cloud Security helps security professionals understand which technologies are ready for mainstream use, and which are still years away from productive deployments for most organizations (see Figure 1).
"Security continues to be the most commonly cited reason for avoiding the use of public cloud," said Jay Heiser, research vice president at Gartner. "Yet paradoxically, the organizations already using the public cloud consider security to be one of the primary benefits."
The attack resistance of the majority of cloud service providers has not proven to be a major weakness so far, but customers of these services may not know how to use them securely. "The Hype Cycle can help cybersecurity professionals identify the most important new mechanisms to help their organizations make controlled, compliant and economical use of the public cloud," added Mr. Heiser.
Figure 1. Hype Cycle for Cloud Security, 2017
At the Peak
The peak of inflated expectations is a phase of overenthusiasm and unrealistic projections, where the hype is not matched by successful deployments in mainstream use. This year the technologies at the peak include data loss protection for mobile devices, key management as-a-service and software-defined perimeter. Gartner expects all of these technologies will take at least five years to reach productive mainstream adoption.
In the Trough
When a technology does not live up to the hype of the peak of inflated expectations, it becomes unfashionable and moves along the cycle to the trough of disillusionment. There are two technologies in this section that Gartner expects to achieve mainstream adoption in the next two years:
Disaster recovery as a service (DRaaS) is in the early stages of maturity, with around 20-50 percent market penetration. Early adopters are typically smaller organizations with fewer than 100 employees, which lacked a recovery data center, experienced IT staff and specialized skills needed to manage a DR program on their own.
Private cloud computing is used when organizations want the benefits of public cloud — such as IT agility to drive business value and growth — but aren't able to find cloud services that meet their needs in terms of regulatory requirements, functionality or intellectual property protection. The use of third-party specialists for building private clouds is growing rapidly because the cost and complexity of building a true private cloud can be high.
On the Slope
The slope of enlightenment is where experimentation and hard work with new technologies are beginning to pay off in an increasingly diverse range of organizations. There are currently two technologies on the slope that Gartner expects to fully mature within the next two years:
Data loss protection (DLP) is perceived as an effective way to prevent accidental disclosure of regulated information and intellectual property. In practice, it has proved more useful in helping identify undocumented or broken business processes that lead to accidental data disclosures, and providing education on policies and procedures. Organizations with realistic expectations find this technology significantly reduces unintentional leakage of sensitive data. It is relatively easy, however, for a determined insider or motivated outsider to circumvent.
Infrastructure as a service (IaaS) container encryption is a way for organizations to protect their data held with cloud providers. It's a similar approach to encrypting a hard drive on a laptop, but it is applied to the data from an entire process or application held in the cloud. This is likely to become an expected feature offered by a cloud provider, and indeed Amazon already provides its own free offering, while Microsoft supports free tools such as BitLocker, and dm-crypt for Linux.
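As an illustration of the general idea — encrypting data on the client side so the provider only ever holds ciphertext — here is a minimal sketch using the widely used Python `cryptography` library. This is not any provider's own tooling; in practice the key would live in the customer's key management system, never alongside the stored data.

```python
from cryptography.fernet import Fernet

# The customer generates and keeps the key.
key = Fernet.generate_key()
f = Fernet(key)

# What gets uploaded to the cloud provider is ciphertext only.
token = f.encrypt(b"customer-record")

# Only the key holder can recover the plaintext.
assert f.decrypt(token) == b"customer-record"
```

The same principle scales from a single record to a whole volume: tools like BitLocker and dm-crypt simply apply it at the block-device layer rather than per message.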
Reached the Plateau
Four technologies have reached the plateau of productivity, meaning the real-world benefits of the technology have been demonstrated and accepted. Tokenization, high-assurance hypervisors and application security as a service have all moved up to the plateau, joining identity-proofing services which was the only entrant remaining from last year’s plateau.
"Understanding the relative maturity and effectiveness of new cloud security technologies and services will help security professionals reorient their role toward business enablement," said Mr. Heiser. "This means helping an organization’s IT users to procure, access and manage cloud services for their own needs in a secure and efficient way."
Total worldwide enterprise storage systems factory revenue was up 2.9% year over year and reached $10.8 billion in the second quarter of 2017 (2Q17), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker.
Total capacity shipments were up 16.5% year over year to 65.3 exabytes during the quarter. Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 73.5% year over year to $2.5 billion. Sales of server-based storage declined 13.4% during the quarter and accounted for $2.9 billion in revenue. External storage systems remained the largest market segment, but the $5.3 billion in sales represented a decline of 5.4% year over year.
"The enterprise storage market finished the second quarter of 2017 on a positive note, posting modest year-over-year growth and the first overall growth in several quarters," said Liz Conner, research manager, Storage Systems. "Traditional storage vendors continue to expand their product portfolios to take advantage of the market swing towards All Flash and converged/hyperconverged systems. Meanwhile, hyperscalers saw new storage initiatives and event-driven storage requirements lead to strong growth in this segment during the second quarter."
2Q17 Total Enterprise Storage Systems Market Results, by Company
HPE/New H3C Group held the number 1 position within the total worldwide enterprise storage systems market, accounting for 20.1% of spending. Dell Inc held the next position with an 18.4% share of revenue during the quarter. NetApp finished third with 6.4% market share. IBM finished in fourth position, capturing 5.2% of global spending, and Hitachi rounded out the top 5 with 3.8% market share. As a single group, storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale datacenter customers accounted for 23.3% of global spending during the quarter.
Top 5 Vendor Groups, Worldwide Total Enterprise Storage Systems Market, Second Quarter of 2017 (Revenues are in US$ millions)
Company | 2Q17 Revenue | 2Q17 Market Share | 2Q16 Revenue | 2Q16 Market Share | 2Q17/2Q16 Revenue Growth
1. HPE/New H3C Group | $2,170.3 | 20.1% | $2,500.2 | 23.8% | -13.2%
2. Dell Inc | $1,993.0 | 18.4% | $2,717.1 | 25.9% | -26.7%
3. NetApp | $694.6 | 6.4% | $595.4 | 5.7% | 16.7%
4. IBM | $566.3 | 5.2% | $568.5 | 5.4% | -0.4%
5. Hitachi | $412.6 | 3.8% | $429.0 | 4.1% | -3.8%
ODM Direct | $2,520.2 | 23.3% | $1,452.6 | 13.8% | 73.5%
Others | $2,451.3 | 22.7% | $2,238.1 | 21.3% | 9.5%
All Vendors | $10,808.2 | 100.0% | $10,500.9 | 100.0% | 2.9%
Source: IDC Worldwide Quarterly Enterprise Storage Systems Tracker, September 14, 2017
Inaugural quarterly report from DigitalOcean called DigitalOcean Currents reveals developers’ key priorities.
A new quarterly report from DigitalOcean, the cloud for developers, highlights that despite mainstream attention around multi-cloud solutions, 70 percent of respondents have no plans to implement one in the next year. Despite Gartner predicting that 90 percent of organisations will adopt hybrid infrastructure management by 2020, the survey of 1,000 respondents from across the world reveals the reality for developers and system admins working in a wide range of industries.
“There is a lot of available data on programming trends, but what is unique about DigitalOcean is that we are the only cloud platform provider that is truly developer-first,” said Shiven Ramji, VP of Product, DigitalOcean. “This gives us a unique perspective on how specific segments of the developer community are thinking and feeling with regard to the tools they like to use and where they want to spend their time.”
The survey also includes a special focus on developers' use of and requirements for object storage, in conjunction with the company's launch of its long-awaited object storage solution: Spaces. While the survey found that less than half (45 percent) of respondents are currently using object storage as a way to handle the explosive growth of different types of data, the majority (53 percent) have researched a solution in the past five years — indicating a growing general interest in this type of storage. Respondents said the three main benefits over other storage solutions are cost-effectiveness (30 percent), scalability (29 percent) and ease of data retrieval.
Additional findings from DigitalOcean Currents:
MySQL reigns supreme: For database software, MySQL is the most popular solution, with 35 percent preferring it, followed by PostgreSQL (26 percent) and MariaDB (19 percent).
Linux popularity: Developers and sysadmins spend more time using Linux (38.6 percent) than MacOS (36.4 percent) or Windows (23.1 percent), reflecting the time they spend in server environments and the differences between the developer segment and the overall market share.
Developers prefer to learn online: Books used to be how many developers learned to code, but that's no longer the case. Online tutorials and official documentation far outpace books as the preferred learning method for developers, with 80 percent preferring these resources.
Valec is an organisation which builds, maintains and operates Brazilian railways. The public company manages three railways: the North-South Railway, the West-East Integration Railway and the Centre-West Integration Railway. The organisation plays a major role in developing infrastructure across a very large territory in order to support the Brazilian economy.
Valec is currently in the process of building the North-South Railway, which when completed will run from Belém in the North to the southernmost city in Brazil, Rio Grande, enabling the movement of valuable commodities, such as ethanol, soya and metals. Some 1,575 km of this railway is already in operation, but Valec is still building around 700 km of the track. A control centre facility, based in Palmas, coordinates work from Porto Nacional to Estrela d'Oeste, controlling the movement of maintenance vehicles and ensuring that all building and engineering tasks run according to plan. Trains are using sections of the track to transport goods, while mechanics, engineers, builders and other workers are posted at different points down the length of the track to undertake construction and maintenance on specific sections of the railway. Numerous maintenance vehicles drive up and down the line, managing the railway, delivering vital raw materials and moving workers.
Considering the sheer distances involved and the scale of the work at hand, communication between their vehicles, trains and control centre is key. Despite this, Valec’s communications were previously hampered by intermittent terrestrial connectivity, older radio technology and a paper system, whereby drivers would be given a ‘license’ from the control centre, which would specify a beginning and end point for their journey and cargo.
This system was problematic on several levels. For one thing, it did not give the control centre facility any real feedback on what its vehicles were doing. It also restricted Valec's agility: rather than being able to react flexibly to changing events along the line and adjust resource allocation accordingly by communicating with drivers, the team was left with only the paper-based system as a guarantee of their whereabouts. Moreover, not being able to see where drivers were in real time represented a health and safety issue, as these drivers were travelling very long distances to remote locations on a daily basis. For the operators of trains using these stretches of line, there was also an economic cost, as trains, which can be up to 1 km long, were using a lot of diesel stopping and starting again.
Suppliers of managed services are taking on more responsibilities as they become the driving force for IT industry growth, delegates at the Managed Services & Hosting Summit were told in London this week. More than two hundred managed services providers (MSPs) and aspiring providers of managed services were advised that they need to become rounded providers of business productivity, by adopting the right mindset, getting more training on key issues and working together to offer a more comprehensive range of services.
Mark Paine, Gartner Research Director, told the conference that service providers had to take account of the changing attitudes of buyers by focusing on business outcomes, and of the raised expectations among buyers now that IT has had to become a productivity asset for the business. And they need not try to do it all themselves: partners can co-operate to address the wider market requirements, he said. MSPs had to carefully choose go-to-market partners that can talk both technology and business; and they had to start with a vision, look at use cases, and consider processes, challenges and outcomes for their customers.
Customer acquisition costs (CAC) versus lifetime value (LTV) also had to be brought into the equation, considering margins, partnering agreements and questions as to who owns the invoice, plus the ongoing up-selling/cross-selling opportunities. The rewards were there through continuing revenues and repeatable business, as successful MSPs have shown. The rewards for getting it right are substantial, Paine said, including valuable access to customers' ecosystems, becoming a pivotal part of customers' success stories, and benefiting from lead sharing with technology partners.
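The CAC-versus-LTV equation Paine refers to reduces to simple arithmetic. A minimal sketch, with purely illustrative numbers: lifetime value is the margin a customer generates over their expected tenure, and its ratio to acquisition cost shows whether recurring revenue justifies the up-front sales effort.

```python
def ltv_to_cac_ratio(avg_monthly_margin, avg_lifetime_months, acquisition_cost):
    """LTV/CAC: lifetime gross margin per customer divided by
    the cost of winning that customer."""
    ltv = avg_monthly_margin * avg_lifetime_months
    return ltv / acquisition_cost

# Illustrative figures only: £150/month margin, 36-month retention,
# £1,800 to acquire the customer -> LTV of £5,400 and a ratio of 3.0
ratio = ltv_to_cac_ratio(150, 36, 1800)
```

A ratio comfortably above 1 is what makes the recurring-revenue model attractive; the up-selling and cross-selling opportunities mentioned above raise the margin side of the calculation without adding to acquisition cost.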
The added benefit of recurring revenue from cloud services is substantial and this was echoed at the Summit by Michael Frisby, Managing Director of Cobweb Solutions, which sells cloud solutions from Microsoft, Mimecast, Acronis and others. Frisby said: “We became a cloud managed service provider from originally focusing on traditional solutions like Exchange, and 95% of our revenues are now recurring. But you have to make sure you get the marketing right to do it.” Continuing training and education of the MSP were essential, he added, pointing out that Cobweb itself had boosted its investment in this four-fold in the last year.
Other issues covered included the growing importance of compliance, particularly with regard to GDPR: legal firm Fieldfisher's partner Renzo Marchini warned that the rules were changing and the impact on MSPs would be profound. Service providers are to be regarded as 'data controllers' under GDPR, and with that comes the potential for huge fines in the event of regulation breaches.
The theme of partnering was confirmed later in a round-table discussion when the Global Technology Distribution Council’s European head, Peter van den Berg, outlined how distributors were investing heavily in services and education. Security was also a key issue, with the service provider very much in the firing line in the event of any incident. Datto revealed some of the latest research from its global survey: Business Development Director Chris Tate revealed that “87% of our partners have had to deal with a ransomware attack on behalf of their clients.” Earlier, SolarWinds MSP had shown how all-pervading the issue was becoming in discussions with customers, and how MSPs needed to increase their understanding, particularly in relation to the challenges faced by smaller businesses.
The Managed Services & Hosting Summit, now in its seventh year, was also a platform for other news: Kaspersky's Global Product Manager Oleg Gorobets made the case for its growing programme for better security among MSPs. And there was a general air of optimism, with the prospect of vast fields of data and an increasingly information-rich environment producing new sales and growth opportunities for the MSP, according to David Groves, Director of Product Management at Maintel, speaking on behalf of sponsor Highlight.
M&A specialist Hampleton’s Director, David Riemenschneider, demonstrated that these opportunities and the surge in demand in such areas as AI and security could yield rich rewards for MSPs building value into their own business while addressing the need to create value for their clients. He pointed to heightened interest in services providers, especially those with security, financial services or automotive expertise.
Managed Services & Hosting Summit 2017 (www.mshsummit.com ) sponsors included: Datto, Highlight, Kaspersky, Mimecast, SolarWinds MSP, Autotask, Cisco Umbrella, ConnectWise, DataCore Software, ESET, ForcePoint, ITGlue, Kingston Technology, Nakivo, WatchGuard, 5NineSoftware, Altaro, APC by Schneider Electric, Beta Distribution, Continuum, Deltek, Egenera, F-Secure, Identity Maestro, iland, Barracuda MSP, Kaseya, RapidFire Tools, SpamExperts, Webroot and Wasabi.
Many of the themes and issues addressed in this week’s London event will be developed further in the second annual European Managed Services & Hosting Summit which, it has just been announced, will be staged in Amsterdam on 29 May 2018. (www.mshsummit.com/amsterdam )
Following assessment and validation by the panel at Angel Business Communications, the shortlist for the 24 categories in this year's SVC Awards has been put forward for online voting by our readership.
Voting is free of charge and must be made online at www.svcawards.com
The SVC Awards celebrate achievements in Storage, Cloud and Digitalisation, rewarding the products, projects and services as well as honouring companies and teams. The SVC Awards recognise the achievements of end-users, channel partners and vendors alike and in the case of the end-user category there will also be an award made to the supplier who nominated the winning organisation.
Voting remains open until 3 November so there is time to make your vote count and express your opinion on the companies that you believe deserve recognition in the SVC arena.
The winners will be announced at a gala ceremony on 23 November at the Hilton London Paddington Hotel.
Welcoming both the quantity and quality of the 2017 SVC Awards shortlist entries, Jason Holloway, Director of IT Publishing & Events at Angel, said: “I’m delighted that we have this annual opportunity to recognise the innovation and success of a significant part of the IT community. The number of entries, and the quality of the projects, products and people they represent, demonstrate that the SVC Awards continue to go from strength to strength and fulfil an important role in highlighting and recognising much of the great work that goes on in the industry.”
All voting takes place online and voting rules apply. Make sure you place your votes by 3 November, when voting closes. Visit: www.svcawards.com
Storage Project of the Year
Cohesity supporting Colliers International
DataCore Software supporting Grundon Waste Management
Mavin Global supporting The Weetabix Food Company
Cloud / Infrastructure Project of the Year
Axess Systems supporting Nottingham Community Housing Association
Correlata Solutions supporting insurance company client
Navisite supporting Safeline
Hyper-convergence Project of the Year
HyperGrid supporting Tearfund
Pivot3 supporting Bone Consult
UK Managed Services Provider of the Year
EACS
EBC Group
Mirus IT Solutions
netConsult
Six Degrees Group
Storm Internet
Vendor Channel Program of the Year
NetApp
Pivot3
Veeam Software
International Managed Services Provider of the Year
Alert Logic
Claranet
Datapipe
Backup and Recovery / Archive Product of the Year
Acronis – Backup 12.5
Altaro Software – VM Backup
Arcserve - UDP
Databarracks – DraaS, BaaS, BCaaS solutions
Drobo – 5N2
NetApp – BaaS solution
Quest – Rapid Recovery
StorageCraft – Disaster Recovery Solution
Tarmin – GridBank
Cloud-specific Backup and Recovery / Archive Product of the Year
Acronis – Backup 12.5
CloudRanger – SaaS platform
Datto – Total Data Protection platform
StorageCraft – Cloud Services
Veeam Software - Backup & Replication v9.5
Storage Management Product of the Year
Open-E – JovianDSS
SUSE – Enterprise Storage 4
Tarmin – GridBank Data Management platform
Virtual Instruments – VirtualWisdom
Software Defined / Object Storage Product of the Year
Cloudian – HyperStore
DDN Storage – Web Object Scaler (WOS)
SUSE – Enterprise Storage 4
Software Defined Infrastructure Product of the Year
Anuta Networks – NCX 6.0
Cohesive Networks – VNS3
Runecast Solutions – Analyzer
Silver Peak – Unity EdgeConnect
SUSE – OpenStack Cloud 7
Hyper-convergence Solution of the Year
Pivot3 - Acuity Hyperconverged Software Platform
Scale Computing - HC3
Syneto - HYPERSeries 3000
Hyper-converged Backup and Recovery Product of the Year
Cohesity – DataProtect
ExaGrid - HCSS for Backup
Syneto - HYPERSeries 3000
PaaS Solution of the Year
CAST Highlight - CloudReady Index
Navicat – Premium
SnapLogic - Enterprise Integration Cloud
SaaS Solution of the Year
Adaptive Insights – Adaptive Suite
Impartner – PRM
IPC Systems - Unigy 360
Ixia - CloudLens Public
SaltDNA - Secure Enterprise Communications
x.news information technology gmbh – x.news
IT Security as a Service Solution of the Year
Alert Logic – Cloud Defender
Barracuda Networks - Essentials for Office 365
SaltDNA - Secure Enterprise Communications
Votiro - Content Disarm and Reconstruction technology
Cloud Management Product of the Year
CenturyLink - Cloud Application Manager
Geminaire - Resiliency Management Platform
Highlight - See Clearly - Business Performance Acceleration
HyperGrid – HyperCloud
Rubrik – CDM platform
SUSE - OpenStack Cloud 7
Zerto - Virtual Replication
Storage Company of the Year
Acronis
Altaro Software
DDN Storage
NetApp
Virtual Instruments
Cloud Company of the Year
Databarracks
Navisite
Six Degrees Group
Storm Internet
Hyper-convergence Company of the Year
Cohesity
Pivot3
Syneto
Storage Innovation of the Year
Acronis - Backup 12.5
Altaro Software - VM Backup for MSPs
DDN Storage - Infinite Memory Engine
Excelero – NVMesh
Nexsan – Unity
Cloud Innovation of the Year
CloudRanger – Server Management platform
IPC Systems - Unigy 360
SaltDNA - Secure Enterprise Communications
StaffConnect - Mobile App Platform
Zerto - ZVR 5.5
Hyper-convergence Innovation of the Year
Pivot3 - Acuity HCI Platform
Schneider Electric - Micro Data Centre Solutions
Syneto - HYPERSeries 3000
Digitalisation Innovation of the Year
Asperitas – Immersed Computing
IGEL - UD Pocket
Loom Systems - AI-powered log analysis platform
MapR – XD
For more information and to vote visit: www.svcawards.com
There was a time Handle Financial thought all that could be done in the cloud was development and QA work. The idea of moving their electronic cash transaction network out of traditional on-premise systems was sort of a pipe dream (pun intended) – the cloud market just wasn’t mature enough.
But as AWS gained traction and performance and security improved, moving to the cloud became a real possibility, and a potential competitive differentiator. Fast-forward to today, and Handle Financial is 99.5 percent in the cloud.
Handle Financial has built a payment network that empowers users to pay rent, utility bills, repay loans, buy tickets and much more with cash, and consumers can make payments on their own schedule at more than 17,000 trusted locations including 7-Eleven and Family Dollar stores across the U.S.
A move to the cloud would make them nimbler and more agile.
Their cloud journey started on AWS using Elastic Load Balancers (ELBs). But scaling to such a large volume – they’re running 28 ELBs – became costly, and they lost some technical capabilities that they depended on, such as static IPs, which meant partners couldn’t whitelist them. Using ELBs also forced them to sacrifice SSL client-server authentication. And they couldn’t manage ELBs individually.
Those limitations brought Handle Financial to A10 Networks. They had been an A10 Networks customer previously, and had experienced business advantages. This time round, they wanted to move to the cloud and eliminate the costs and resource constraints that come with managing and maintaining hardware.
And leveraging Harmony Controller and Lightning ADC will help them in the future as well. They recently acquired a bill payment app, which runs in Microsoft Azure. Harmony’s ability to bridge multiple clouds means it can support their AWS environment and this new Azure environment simultaneously. And they can add other clouds, along with traditional on-premise environments, as needed. That’s powerful.
Pioneer in Cloud Data Management to work with team to accelerate the protection of race data.
Mercedes-AMG Petronas Motorsport confirms a new team partnership with Rubrik, specialist in Cloud Data Management.
With data volumes, backup and recovery requirements becoming ever more demanding in Formula One, the team is investing in class-leading technology in order to stay ahead. Specifically, the team will be using a multi-node Rubrik cluster at their Brackley headquarters to protect the team’s critical race data.
The team will also use Rubrik’s REST API (Application Programming Interface) to integrate with their current tools to analyse their data utilisation. With this information, the team expects to become even more efficient in how it manages and utilises the vast volumes of race data.
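As a hedged illustration of that integration pattern, the sketch below summarises the kind of utilisation report such a REST API could feed into existing tooling; the payload shape, object names and field names are invented for the example and are not Rubrik’s actual schema:

```python
import json

# Sample payload shaped like a generic REST reporting response; the object
# names and field names are invented for illustration, not Rubrik's schema.
sample_response = json.dumps({
    "data": [
        {"object": "race-telemetry-db", "usedBytes": 512 * 1024**3},
        {"object": "cfd-simulations", "usedBytes": 2048 * 1024**3},
        {"object": "pit-wall-video", "usedBytes": 1024 * 1024**3},
    ]
})

def utilisation_report(raw_json):
    """Per-object usage in GiB, largest first, plus the overall total."""
    objects = json.loads(raw_json)["data"]
    rows = sorted(
        ((o["object"], o["usedBytes"] / 1024**3) for o in objects),
        key=lambda r: r[1],
        reverse=True,
    )
    total = sum(gib for _, gib in rows)
    return rows, total

rows, total_gib = utilisation_report(sample_response)
for name, gib in rows:
    print(f"{name:20s} {gib:8.1f} GiB")
```

In practice the same summary function would sit behind whatever monitoring dashboard the team already uses, with the live API response swapped in for the sample payload.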
“We are delighted to welcome Rubrik to Mercedes-AMG Petronas Motorsport,” commented Toto Wolff, Head of Mercedes-Benz Motorsport. “In the fast-moving world of information technology, it’s essential to be right at the forefront, particularly for us in the area of data management, and we look forward to working with Rubrik to maximise our potential in this area.”
“Mercedes-AMG Petronas Motorsport is at the forefront of adopting new technologies within the racing world,” said Bipul Sinha, co-founder and CEO, Rubrik. “Rubrik’s cloud data management platform will enable the team to access and manage critical race information, providing them with a new competitive edge. We are excited to partner with Mercedes-AMG Petronas Motorsport to turbocharge their innovative approach to data management and contribute to their continued success.”
Global publisher uses Datapipe’s global expertise to expand its infrastructure to China and move infrastructure onto public cloud.
BMJ, a global healthcare knowledge provider, has used Datapipe’s expertise to launch its hybrid multi-cloud environment and enter the Chinese market using Alibaba Cloud. The announcement follows last year’s partnership in which BMJ implemented a DevOps culture and virtualised its IT infrastructure with Datapipe’s private cloud.
In 2016, Datapipe announced it was named a global managed service provider (MSP) partner of Alibaba Cloud. Later that year, it was named Asia Pacific Managed Cloud Company of the Year by Frost & Sullivan. BMJ opened a local Beijing office in 2015, and its attraction to engaging Datapipe’s services at the outset was due to Datapipe’s on-the-ground support in China and knowledge of the local Chinese market.
Sharon Cooper, BMJ’s Chief Digital Officer says, “We see the People’s Republic of China as a key part of our growing international network. Therefore, we needed the technical expertise to be able to expand our services into China, and a partner to help us navigate the complex frameworks required to build services there. Datapipe, with its on-the-ground support in China and knowledge of the market has delivered local public-cloud infrastructure utilising Alibaba Cloud.”
Last year, BMJ used Datapipe’s expertise to move to a new, agile way of working. BMJ fully virtualised its infrastructure and automated its release cycle using Datapipe’s private cloud environment. Now, it has implemented a hybrid multi-cloud solution using both AWS and Alibaba Cloud, fully realising the strategy it started working towards two years ago.
“It is exciting to be working with a company that has both a long, distinguished history and is also forward-thinking in embracing the cloud,” said Tony Connor, Head of EMEA Marketing for Datapipe. “Datapipe partnered with Alibaba Cloud last year in order to better support our clients’ global growth both in and out of China. We are delighted to continue to deliver for BMJ, building upon our private cloud foundations, and taking them to China with public cloud infrastructure through our relationship with AliCloud.”
Alex Hooper, Head of Operations, BMJ says: “We have now fully realised the strategy that we first mapped out two years ago, when we started our cloud journey. In the first year, we were able to fully virtualise our infrastructure using Datapipe’s private cloud, and in the process, move to a new, agile way of working. In this second year, we have embraced public cloud and taken our services over to China.”
Alex Hooper, explains further: “Previously, we could only offer stand-alone software products in China, which are delivered on physical media and require quarterly updates to be installed by the end-user. With Datapipe’s help, we now have the capability to offer BMJ’s cloud-based services to Chinese businesses.”
This has been made possible by utilising data centres located in China, using Alibaba Cloud, which links to BMJ’s core infrastructure and gives BMJ all the benefits of public cloud infrastructure, but located within China to satisfy the requirements of the Chinese authorities.
Alex Hooper continues, “With Datapipe’s help, it was surprisingly easy to run our services in China and link them to our core infrastructure here in the UK, Datapipe has done an exemplary job.”
BMJ has seen extraordinary change in its time. Within its recent history, it has transitioned from traditional print media to becoming a digital content provider. With Datapipe’s help it now has the infrastructure and culture in place to cement its position as a premier global digital publisher and educator.
ARM has been selected for the Westconnex road infrastructure project.
Sword Active Risk, a supplier of specialist risk management software and services, has been selected by Sydney Motorway Corporation (SMC) to manage Risk, Audit and Compliance for its Sydney road infrastructure project, WestConnex. The Active Risk Manager (ARM) Software-as-a-Service solution will be delivered via Amazon Web Services and will be rolled out across the three different disciplines in a phased implementation. ARM was selected after a formal competitive tender process and will replace the current risk management tools within the project. Darren Gustard, Chief Audit and Risk Executive of Sydney Motorway Corporation, said: “After a rigorous competitive tender process Active Risk Manager from Sword Active Risk was selected as the best match for our requirements. Additionally, we were impressed with ARM’s strong track record for managing risk within globally renowned infrastructure mega-projects.”
Keith Ricketts, Vice President of Marketing at Sword Active Risk, commented: “Increasingly we are seeing forward-thinking organizations like Sydney Motorway Corporation moving towards a more integrated and holistic approach to risk management. The ARM SaaS delivery model is enabling organizations to implement in a fraction of the time of on-premise solutions.”
“With fewer operational overheads, reduced IT dependence and a move from capital expenditure to operational expenditure, more and more risk-aware businesses are looking to underpin successful project management, audit and compliance with a SaaS-based risk solution that gives a much faster ‘Time to Value’,” continued Ricketts.
Active Risk Manager is used in mega-projects around the world including Crossrail, Downer Rail, Northern Gateway Alliance and Thames Tunnel.
Kela is the Finnish government agency responsible for the distribution of social security benefits, including pensions, sickness and housing benefits, and health insurance. Kela manages around 40 different benefits payments and distributes approximately €14 billion to Finnish citizens annually.
Kela’s mainframe, which comprises the core of its IT infrastructure, is one of the oldest and largest in the Nordic region and has been in use for decades. But while its mainframe has served the organisation well over the years, Kela started to recognise that it could not support its long-term ambitions for two main reasons. The first related to the maintenance burden of the mainframe itself, which was becoming increasingly difficult and costly to keep running. The second related to Kela’s digital ambitions, which were being hamstrung by its mainframe.
Markku Suominen, ICT Director at Kela, discusses the challenges of using this technology: “Mainframes are very big and expensive to run, costing us about €8 million per year, and we recognised that those costs would only increase. There is now a severe shortage of personnel in Finland who are trained to maintain mainframes, because the technology is now so outdated that students no longer learn how to work with them. At the same time, we are trying to develop our online and digital offering to be able to provide best-in-class services to the Finnish public, but found that we were unable to do so due to the inflexible nature of our infrastructure.”
However, while the organisation could have rewritten the apps on its mainframe and developed a completely new environment, this was not a viable option. Markku explains: “The programmes and databases on our mainframes use over 10 million lines of PL/1 code. We do not have the resources to edit this quantity of programming, and we also knew that asking our talented coders to rewrite millions of lines of code would be very demotivating for them. We needed a new environment that would enable us to leverage the skills of our staff and maintain our existing applications, which is why we decided to simply re-host the mainframe with TmaxSoft.”
Big Data, IoT and AI - the holy tech trinity - is powering the exponential boom of technology in today's society. Yet businesses are only just scratching the surface when it comes to getting the most out of this ecosystem. One of the biggest obstacles to wider adoption of this ecosystem is a lack of understanding of what each technology does and how they connect to make a bigger picture.
By Chris Proctor, CEO of Oneserve.
Take Big Data and IoT. Thanks to the rise of the Internet of Things (IoT), an abundance of data from every industry imaginable is available to be harvested and analysed. According to data software giant SAS, in 2012 the amount of big data stored across the world exceeded 2.8 zettabytes; this is predicted to be 50 times larger by 2020. However, globally, we only analyse approximately 0.5% of this data. Getting the most out of data remains the biggest challenge facing business leaders today.
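The scale of that gap is easy to check from the figures quoted; the derived numbers below follow directly from the stats attributed to SAS, not from a separate source:

```python
# Figures as quoted in the article: 2.8 zettabytes stored in 2012,
# predicted to be 50x larger by 2020, with only ~0.5% of it analysed.
ZB_2012 = 2.8
growth = 50
analysed_fraction = 0.005

zb_2020 = ZB_2012 * growth                  # predicted 2020 volume
analysed_zb = zb_2020 * analysed_fraction   # the fraction actually analysed

print(f"Predicted 2020 volume: {zb_2020:.0f} ZB")
print(f"Of which analysed:     {analysed_zb:.1f} ZB (~700 exabytes)")
```

Even the analysed sliver is enormous in absolute terms; the point is how much larger the unanalysed remainder is.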
We’ve toyed with the potential of AI for decades but only recently have the possibilities of its real-world application been realised. Deep Machine Learning, the most advanced form of AI currently, has been the key to analysing the huge amounts of data provided by the IoT and applying it to practical tasks. From filtering our email inbox, to route navigation software, and even mastering hugely complex board games, AI is the final piece of the jigsaw, uniting these three exciting technology innovations.
Another reason why businesses are being timid about integrating IoT, AI and Big Data technology into their companies is the initial outlay. Investing in items such as RFID-enabled sensors, mobile hardware and a full IoT data analytics suite can be costly. However, they have the potential to bring significant cost savings in the long term, offering a highly effective return on investment.
For example, an entire business can be connected from the production line to the field and with AI enabled analysis of its data, new insight becomes available which can be used to transform processes. From here the possibilities are endless.
With Big Data, IoT and AI working seamlessly together, solutions to asset management, remote monitoring, predictive maintenance, customer insight, workflow and a wealth of other processes key to any business can be gained at an unprecedented level for the first time.
Within manufacturing, this could translate to having sensors on machinery that send data to a central AI system, which in turn could predict when the machine is about to break. This allows maintenance to be predictive rather than reactive. With reactive maintenance, emergency repairs can cost 3 to 9 times more than planned repairs, and the cost of shipping spare parts and machine downtime during production can reach £18,000 per machine in some work environments. Further, full data analytics can inform actionable KPIs and event data capture, reducing problem-solving time and even opening the possibility of automating processes.
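A minimal sketch of that predictive-rather-than-reactive logic, with invented sensor readings and thresholds (real systems use far richer models, but the principle is the same):

```python
# Flag a machine for planned maintenance when its latest sensor reading
# drifts beyond a tolerance band around the rolling baseline. The window
# size, tolerance and readings below are invented for illustration.
def needs_maintenance(readings, window=5, tolerance=0.15):
    """True when the latest reading deviates more than `tolerance`
    (as a fraction) from the mean of the preceding `window` readings."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(readings[-window - 1:-1]) / window
    deviation = abs(readings[-1] - baseline) / baseline
    return deviation > tolerance

# Vibration readings from a healthy machine, then a developing fault.
healthy = [1.00, 1.02, 0.99, 1.01, 1.00, 1.01]
faulty = healthy + [1.25]  # a sudden ~24% jump over the baseline

print(needs_maintenance(healthy))  # within tolerance: keep running
print(needs_maintenance(faulty))   # out of tolerance: schedule a repair
```

The payoff is exactly the article’s point: the fault is caught as a planned repair rather than an emergency one.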
This is a reality right now. Rolls Royce, utilising Microsoft’s Azure IoT Suite, is now using smart sensors installed in their jet engines to enable engineers to get immediate operating data. Maintenance expenditure in aviation is a major bugbear, with engines being the highest cost item. The fuel cost, per hour, for one jet engine on a Boeing 747-400 during a transatlantic flight in 2015, was as much as $3,500 per hour. Through intelligent use of AI enabled software that uses big data to control engine management and maintenance hardware, they are able to reduce the cost associated with ground maintenance and disruption significantly.
Detailed and personalised insight into customers is another major advantage of harnessing these three technologies. With the democratisation of technology and increased competition, customer experience is vital today more than ever - bad customer experience is no longer excusable. Now, businesses no longer need to guess when targeting consumers or when marketing.
Fitness First are using wearable technology combined with Apple’s iBeacon protocol to enable targeted fitness related content to be sent to customers when they enter the gym. This could be fitness regimes tailored to the individual based on data gathered from their wearable, or any content to help them with their personal fitness. This level of personalisation and detailed insight, which improves customer satisfaction, is possible only through the combination of IoT, AI and big data.
Here, indeed, is the holy trinity: large amounts of data that can now be analysed in real time and applied practically, through devices permanently tuned into the data. Oh, did I also mention that scalability is built-in?
Smart Cities sound like a great idea -- on paper. But the problem is that most of the people working on creating them are focusing on individual elements of city problems and developing technology-based solutions to address them.
By Megan Goodwin, Joint Managing Director, IRM.
But cities are human hives, not ‘machines for living’ (to paraphrase Le Corbusier). People have to live in them, and the careful balance between increased urbanisation (the United Nations has warned that 65% of humans will live in a city by 2050) and the social isolation that technology can create means that any design for a Smart City that fails to take human behaviour into account is doomed to fail.
Every conference I’ve attended over the last few months has centred on the technological infrastructure of Smart Cities: sustainability, CO2 emissions, congestion, parking, driverless cars, city lights, energy, data integration, and so on and so forth.
While, of course, they were all very interesting and valid discussions delivered by people who are experts in their field, I can’t help but feel that these meetings are missing a key factor -- the human/psychological element of Smart Cities. After all, the whole point behind Smart Cities must be about creating an efficient, sustainable, healthy space where people are happy and involved in their city because their environment is only going to become more and more densely populated. The density is not what causes the problem, according to the UN, but the planning of infrastructure.
Cities are getting bigger. The United Nations predicts that by 2030, 662 cities around the world will have at least one million residents, up from 512 in 2016. But as cities grow, so does the tendency for people to become more socially isolated.
Technology, which should be a facilitator of communications and community, is compounding the problem for far too many -- social media, for example, can increase and reinforce feelings of worthlessness, cyberbullying is on the rise and ‘fake news’ being transmitted on a global scale is undermining people’s trust in big brands, organisations and government bodies.
Are we surprised? We don’t pop down to the high street to do our shopping anymore -- we shop online, pay digitally and have it delivered… It was bad enough when the car became ubiquitous and we all drove everywhere -- but at least we left the house. Smart Cities mustn’t be allowed to create a nation of ‘shut-ins’.
Instead, technology should be incorporated into city design in a way that makes it easier for us to tackle issues such as loneliness, mental health problems, personal security, eating disorders and drug addiction.
It might be less sexy as a political headline, but how we can use Big Data, technology and start-ups to address these problems should be a key question in any plans for a Smart City.
Let’s use technology to make people happier, encourage them to become healthier and create social structures that support those who need support. Technology should be about transforming people’s lives for the better, not reducing them to bytes of data in the interests of ‘efficiency’.
Smart city advisory committees are made up of everyone from anthropologists to developers, accountants to designers, but where are the psychologists? Where are the poets? Where are the game makers and the game changers? Where will the human angle and understanding come from?
Technology is simply an enabler; it can do whatever you want it to do. So why aren’t cities using it as a way to get people to connect and to take positive steps to improve their lives?
There is some hope for the future of Smart Cities, though: Millennials have a totally different attitude to life than the generations that came before them. These citizens value sharing, spontaneity, meaningful experiences and collaboration. And that’s exactly why ‘sharing economy’ pioneers like Airbnb and Uber are leading the charge.
Smart technology needs to be about connecting people - bringing people together and giving them a feeling of community. A giant leap towards this has been taken by AccorHotels with the launch of its Jo&Joe brand - disrupting the traditional hotel format with more flexible space and designing community hubs for both local residents (dubbed “Townsters”) and hotel guests/travellers (“Tripsters”), encouraging an integration that appeals to the Millennial generation.
Getting people involved in the vision of the smart city is key. UN-Habitat is already making waves in this area through its Block by Block project with Minecraft, which uses the world-building computer game as a participation tool for local communities to design their own public spaces. What’s great about this is that after building projects in Minecraft, presentations are put forward to stakeholders from local government, the mayor’s office, planners and architects for future urban design.
Having worked in the games industry myself for more than 15 years, I’ve seen first hand how technology can be used as an enabler to bring people together, as well as isolate them. For Smart Cities to be truly “smart”, the integration of human needs is a fundamental requirement for any Smart City planning.
The digitisation of business has put IT departments under pressure - and it’s continuing to rise. In the past few years, the Internet of Things (IoT), big data, and mobile applications have all gone through a huge growth spurt. Coupled with dynamic business needs and user requirements for fast, anytime, anywhere access, the demands on legacy data centres are at an all-time high.
The pace of technology is speeding up. The ability to deliver applications faster while building a solid cloud strategy is no longer a nice-to-have, but a must-have for modern businesses looking to stay competitive while meeting customer needs.
The fast pace of digital business spurred organisations around the world to embrace compute virtualisation, which has transformed the data centre over the past decade.
But that transformation isn’t complete. Many IT teams still rely on hardware-centric approaches to storage and networking, which are expensive and time-consuming to manage and maintain, and don’t provide the flexibility and agility that today’s users demand.
In an era where speed and performance are critical, organizations need a more agile and flexible approach to IT infrastructure if they want to keep pace.
According to Gartner, the hyper-converged integrated system (HCIS) market is projected to surpass $10 billion in revenue by 2021, with a compound annual growth rate of 48% (from 2016 through 2021).
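A quick sanity check of that projection; the 2016 base below is derived from the quoted figures and is not itself a Gartner number:

```python
# If HCIS revenue surpasses $10bn in 2021 after growing at a 48% CAGR
# from 2016, the implied 2016 starting point can be backed out directly.
cagr = 0.48
years = 5            # 2016 -> 2021
revenue_2021 = 10.0  # $bn, as quoted

implied_2016_base = revenue_2021 / (1 + cagr) ** years
print(f"Implied 2016 market size: ${implied_2016_base:.2f}bn")
```

In other words, the projection assumes a market of roughly $1.4bn in 2016 growing seven-fold in five years, which is what makes the segment attractive to vendors.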
Hyper-converged infrastructure (HCI) collapses compute, management, and storage onto industry-standard x86 servers, enabling a building-block approach to the Software-Defined Data Centre (SDDC). With HCI, all key data centre functions run as software on the hypervisor in a tightly integrated software layer.
HCI makes it easy to transition from physical storage solutions into virtualised storage and realise big improvements fast.
As Duncan Epping, Chief Technologist, VMware – Storage & Availability, explains: “Virtualisation of compute has brought customers agility and flexibility in terms of deploying new workloads. However, when a change is required from a networking or storage perspective, a multiple week change request is not uncommon. This is where network and storage virtualization comes in to play. The idea is to provide similar agility and flexibility to the network and storage layer as was introduced for compute over a decade ago.”
As businesses address the need to create a more agile, dynamic data centre environment, they need to consider a number of important changes, including a spike in all-Flash adoption, a continuing shift towards DevOps, the maturing of the ‘Software-Defined Everything’ era, cross-Cloud expansion and the development of increasingly advanced automation solutions.
Leading hyper-converged infrastructure solutions are designed on a tightly integrated software stack, which benefits IT organizations with common workflows and increased automation that helps reduce operational tasks and improves responsiveness to these demands.
Duncan explains: “At VMware we took a different approach to HCI than most other vendors - for us storage became an integral part of the compute layer. Embedded in the hypervisor, a natural extension of what virtualisation administrators were already familiar with: vSphere. This is also where in my opinion the solution stands or falls. Simply adding a storage appliance on top of a hypervisor does not necessarily reduce complexity or provide flexibility.”
Duncan continues: “The implementation of the specific HCI solution can be a pitfall. How is the solution managed? Do you have a separate management interface for the storage component and the hypervisor? Are you still creating NFS mounts or iSCSI targets? HCI is about collapsing layers, converging different components into one. This also imposes some challenges however from an organizational perspective. Who owns which part of the HCI stack? An important question, as the traditional IT team org chart (compute, storage, networking) most likely will not make sense in a hyper-converged world.”
Duncan adds: “Hyper-Converged Infrastructure is the building block for the Software Defined Data Centre. It provides flexibility, agility and speed at a lower cost both from a capital expenditure and operational expenditure point of view. Especially in a world where “time to market” is crucial for new products and solutions to stay ahead of the competition, having a flexible architecture which allows you to scale up and out at any time is important.”
Ready or not, changes are coming to your data centre. If you want to stay ahead, it’s important to start preparing right now. The right HCI solution will not only help you modernize your data centre and become more agile - it will also help you respond to new DevOps needs and lay a path to the cloud.
For example, when Discovery, a shared value insurance company and authorised financial services provider, based in South Africa, discovered that its infrastructure dependencies were affecting the stability of its VMware environment, it turned to VMware’s HCI solution powered by VMware vSAN to alleviate these dependencies and better architect its virtualised environment.
“After encountering a series of infrastructure outages that we identified were due to hardware instability, we embarked on a process to identify and find better technologies to help improve our physical and virtual infrastructure stack,” states Johan Marais, Virtualisation Manager at Discovery.
“What we uncovered was that, when there was instability across the traditional server, storage or SAN environment our VMware ecosystem would simply not be available. When this happened we firstly had no control, and secondly it was difficult to determine the root cause of problems due to the complexity of the integrated environments managed within the respective silos,” he adds.
Today 50% of the head office environment has been migrated, and Discovery’s US and UK operations are now 100% operational on vSAN. The South African environment will be migrated over a three-year period due to the size of the SAN footprint and the financial impact of changing equipment before the end of its lease period.
We’ve discovered how HCI solutions can make your IT organization more agile and responsive, helping you succeed in the digital era - but not all HCI solutions are created equal.
Look for these four characteristics:
A proven hypervisor. The hypervisor is the “hyper” in hyper-converged, and it runs all key data centre functions - compute, storage, storage networking, and management - as software. The result is more efficient operations, streamlined and speedy provisioning, and cost-effective growth.
Software-defined storage. In a good hyper-converged solution, storage and storage networking are collapsed into the server and virtualised. This streamlines operations, costs, and overall physical footprint.
A unified management platform. A unified platform that allows you to manage the entire software stack from one interface and seamlessly integrates all your workflows is a key element of a good HCI solution.
Flexible deployment choices. The ability to use low-cost industry-standard hardware is a huge advantage of HCI. With it, you no longer have to deploy expensive servers, storage networking, and external storage solutions. With an HCI platform that gives you a broad choice of hardware options, you can build an environment that matches your needs and preferences.
One of the advantages of deploying a vSAN solution is that it speeds up the time required to implement an HCI solution. As Duncan explains: “vSAN enables customers to design and deploy their infrastructure in a building block fashion. This means that storage and compute capacity can simply be added when required by adding new servers to the existing cluster.”
Duncan continues: “These servers can be, what we refer to as, vSAN Ready Node servers, or even any server on the vSphere compatibility guide with vSAN certified components. This allows customers to avoid unnecessary operational changes and challenges when it comes to deployment of hardware and firmware management, as existing management and automation tools can be leveraged.”
A further advantage of the vSAN solution is the speed with which applications can be deployed on the hyper-converged infrastructure. “With vSAN, customers have the ability to select components from a broad (vSAN) compatibility guide,” says Duncan. “VMware is the first in the market to support new technologies like NVMe and Intel Optane - allowing for architectures that provide high uptime and a great user experience (low latency, high IOPS). This results in fast deployment times for new applications and great, and more importantly consistent, performance for existing applications running on vSAN.”
And, of course, for end users already familiar with VMware’s vSphere technology, it’s relatively easy to get to grips with the vSAN product.
Finally, VMware HCI powered by vSAN offers the potential for significant capital investment savings. By leveraging standard x86 hardware instead of traditional dedicated proprietary storage hardware, acquisition cost savings of up to 60% are not uncommon. vSAN shares the same platform as vSphere, ultimately driving efficiency of resources.
For Discovery, implementation of VMware’s vSAN solution has provided plenty of benefits.
“The vSAN platform has given us greater control of our infrastructure stack as well as improved management of the VMware environment. In addition, it has also provided greater performance and simplification of the SAN infrastructure, afforded us freedom when scaling the infrastructure as well as flexibility in our choice of hardware,” states Marais.
Marais continues: “vSAN has given us fantastic visibility into our storage IO patterns for every application and this management function now sits in the virtualisation team. What also makes a huge difference is that we are experiencing space savings because vSAN is thin provisioned; this assists us greatly in keeping the storage footprint smaller as well as reducing the unit cost of supplying a virtual server.”
Right now, HCI is a priority for nearly every IT organization because of the imperative to deliver “digital first” strategies that drive competitive advantage. Companies that want to win the fight for customers know they need tools that make them more agile, cost effective, and efficient. HCI delivers on each of these attributes and provides a valuable new tool for the IT arsenal.
Learn how VMware vSAN and Intel partner for a complete HCI solution.
DCS talks to FibreFab Marketing Manager, Gary Mitchell – covering the company’s extensive product portfolio, its successful development to date, and plans for the future.
1. Please can you provide a brief introduction to FibreFab – how long the company’s been around, its technology focus and the like?
FibreFab was founded in Milton Keynes, UK, way back in 1992. We remained the industry’s best-kept secret, OEM manufacturing for some of the biggest brands in the marketplace and providing many distributors with plain-label or own-brand fibre and copper connectivity products. Then in 2013, AFL, a division of Fujikura, acquired FibreFab and quickly realised that our engineering capabilities, solution set, supply chain, customer service and commercial relations were destined for far greater things.
Now, our vision is to be an innovative connectivity solutions company selected by partners worldwide. We are embarking on a global mission to bring performance, innovation, fast turnaround and real value (economic and operational) to customers across the globe, through dependable distribution and installation partners.
2. And who are the key personnel involved?
Collaboration is one of our core values and we are firm believers that exceptional ideas stem from shared social capital between us, our partners and our customers.
FibreFab is a big family, and our distribution and installation partners are an extension of us. It is clichéd to say, but truly our people and our partners are fundamental to the success of our business.
3. And what have been the major milestones to date?
2017 – Release of FibreFab MPO connector
2013 – AFL acquires FibreFab
2009 – Set up FibreFab sales office in China to service APAC regions
2007 – Set up FibreFab sales office in Dubai to service the Middle East
2006 – Acquires UK fibre termination facility to enable fast-turnaround multi-fibre assemblies into UK and Europe
1992 – FibreFab established
4. Please can you give us a brief product portfolio overview?
We offer end to end fibre and copper network solutions. Our product portfolio consists of assemblies with single or multiple connectors, bulk cable, protection and management, as well as tools, test and termination equipment across fibre and copper.
5. In more detail, can you talk us through the fibre product line?
Working alongside our family of companies, we are one of the only companies that can really offer an end-to-end fibre network solution. Starting with Fujikura (our grandparent company), we have access to their world-leading fusion splicers – one of their latest innovations is the Fujikura 70R, a fusion splicer that can splice 12 fibres at a time. There is also the Fujikura Wrapping Tube Cable with SpiderWeb Ribbon – that is a real game changer. Some data centres are redesigning their fibre backbone and OSP cable specs on the back of this cable. It is available from 144 fibres all the way to 3,456 fibres, and is generally 30-40% smaller in cable diameter and about 30% lighter than similar fibre counts in other cable constructions. The fibres are intermittently bonded in 12-fibre ribbons, so you can multi-fibre fusion splice, or separate one of the fibres and splice it individually. One of our customers saw this cable and asked us to design a splice cabinet that would fit on a 600mm x 600mm floor tile and hold 10,368 fibres. So we did it – it’s easy to use, easy to install, easy to manage, and the customer reported back that they saw about an 80% reduction in time compared with single-fibre splicing.
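The ~80% time saving quoted above follows largely from the ribbon arithmetic. A minimal sketch (the fibre count and 12-fibre ribbon size come from the interview; treating every splice operation as roughly equal cost is a simplifying assumption):

```python
# Splice-operation arithmetic for the 10,368-fibre cabinet described above.
# The fibre count and 12-fibre ribbon size are from the text; assuming each
# splice operation takes roughly the same time is a simplification.

FIBRES_PER_CABINET = 10_368
FIBRES_PER_RIBBON = 12  # intermittently bonded ribbons, spliced in one pass

single_splices = FIBRES_PER_CABINET                       # one operation per fibre
ribbon_splices = FIBRES_PER_CABINET // FIBRES_PER_RIBBON  # one operation per ribbon

print(single_splices)  # 10368
print(ribbon_splices)  # 864
```

On this simplified count the number of operations drops by a factor of 12; the customer’s reported 80% overall time saving is consistent with that once the longer setup of each multi-fibre splice is taken into account.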
AFL, our parent company, is well known for its test and measurement equipment, fibre cable, and network cleaning products. It is great for us to leverage their solutions and innovations and include them as part of what we offer.
Within FibreFab, we design and manufacture every manner of fibre assemblies including MPO trunks, QSFP assemblies, multi-fibre pre-terms, ultra-high density modules, patch leads and pigtails.
Fibre management products include 144-fibre Ultra High Density (UHD) panels, chassis for UHD modules (capable of 288 fibres in 2U), BASE8 transition modules, ODFs, racks, cabinets, sliding panels, pivot panels, OSP enclosures, and a whole array of cable management products.
Then we have bulk cable, so we have CPR Rated loose tube, tight buffered and steel tape armour cable, as well as access to Fujikura SpiderWeb® Ribbon Cable.
It is fair to say we have a very broad fibre solution offering, ensuring we can put together custom network solutions that meet our customers’ requirements! Also splice protectors – can’t forget about the splice protectors.
6. And you also manufacture a comparable copper product line?
Our copper range covers high-performance CAT5e, CAT6 and CAT6a, and we have everything you would expect, including bulk cable, keystone jacks, patch cords, pre-terminated assemblies, and every manner of copper panel – but we also offer CleanPatch, which places 12 copper patch cords into one module that you can connect easily into the front of any patch panel or blade server. It’s a really easy, useful system and makes large copper installations even quicker and easier.
7. Do you have any thoughts on how fibre and copper solutions will develop in the coming years, bearing in mind the demand for ever-increasing ‘feeds and speeds’?
In terms of technology and bandwidth, we believe copper is more or less at its final destination – that said, there is an ongoing extension of applications for twisted-pair copper in the enterprise space, with PoE, PoAP, VoIP and connected security devices.
In the data centre space, all main working connections are converting to higher bandwidth, with all equipment now shipping with pluggable interfaces. Short-range connections inside the rack continue to be made with Direct Attach Copper (DAC) – a high-performance variant that does not use keystone jacks, instead opting for transceiver plugs. Active Optical Cables (AOCs) are being used widely for mid-range connections, and pluggable transceivers for long range. As bandwidth demand increases over the next five years, DAC will become far more restricted in its applications, with AOCs taking over in its space.
In mega/cloud data centres most connectivity is already on single-mode fibre, whilst enterprise, edge and some mid-scale data centre networks use multimode. We believe multimode will have one more generation before it slowly becomes redundant in future networks. OM5 has a very limited shelf life and is already almost irrelevant. Our advice: install single mode, and your network infrastructure will have the capability to reach 400Gb/s and beyond – in fact, single-mode cabling will be able to support 1.2Tb/s at short range.
8. FibreFab offers a variety of optical networking solutions, starting with the Optronics line?
OPTRONICS® is the name of the network system that we manufacture. Application wise we have network systems available for data centres, telecom networks and enterprise.
Through OPTRONICS®, we design and manufacture pioneering network connectivity solutions that maximize space, expand capacity, deploy quickly, migrate easily and offer fantastic economy. OPTRONICS® is a comprehensive platform of fibre and copper connectivity solutions aimed at enabling network evolution better than anybody else.
9. And the company also offers UHD solutions?
UHD stands for Ultra High Density, and it is really the jewel in our crown. At the heart of the OPTRONICS® UHD solution are our 1U, 2U and ZeroU chassis, each designed to accept the UHD modules or UHD adaptor plates. The modules are easy to install and allow your network to grow on a pay-as-you-grow basis. UHD modules can be connected by MPO, direct termination (LC or SC) or by splicing (LC or SC). A 2U chassis fully populated with UHD modules provides access to 288 LC ports.
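The port-density figures quoted in the interview can be sanity-checked with a line of arithmetic. A minimal sketch (the 144-LC-ports-per-1U density comes from the product overview earlier in the piece; the helper function itself is illustrative):

```python
# Port-density check for the OPTRONICS UHD chassis figures quoted above.
# 144 LC ports per 1U is the density stated in the interview; the helper
# function is an illustrative sketch, not a vendor tool.

LC_PORTS_PER_1U = 144

def lc_ports(rack_units: int) -> int:
    """LC ports in a fully populated UHD chassis of the given height."""
    return LC_PORTS_PER_1U * rack_units

print(lc_ports(2))  # 288 - matches the fully populated 2U figure
```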
The OPTRONICS® UHD solution can also interconnect with our host of customisable, high-performance MPO and multi-fibre pre-terms, and our range of high-performance patch cords.
10. And hyperscale solutions?
FibreFab is a member of the AFL IG Group. The Hyperscale (AKA Cloud) computing providers are a major focus of the Group. We provide engineered and customised optical fibre cabling and bespoke fibre management solutions in high volume to this segment.
11. And what can you tell us about the pre-terminated solutions?
Pre-terminated solutions, for many, are why FibreFab is known and loved. You would be hard pushed to find a better pre-term on the market. From 4-fibre to 144-fibre, we have large-volume, focussed manufacturing capabilities around the globe. Our customers can determine tail length, tail labelling, connectors and end-to-end length, and we make the process easy and convenient. We have a patented break-out module used on some assemblies, meaning that ruggedised tail lengths can withstand 1,000N of pulling force – and we have just created our own MPO connector using Fujikura’s latest MT ferrule, making the most optimised MPO connector on the market. Our pre-terms assure quality, flexibility and performance above all else.
12. Not forgetting FibreFab’s ability to provide custom solutions?
Network infrastructure, when done right, is so much more than just off-the-shelf solutions. Our engineering expertise is a key differentiator in our market approach and a significant factor in our commercial success. We work with our customers to identify network architecture issues they are unsure how to alleviate and, through a process of brainstorming and rapid prototyping, create innovative solutions to sometimes very complex scenarios…and then sometimes we just bend some metal and drill some screw holes…but in a way that our customer needs us to.
13. And then there are the future solutions?
Future solutions is more of an outlet for us to explore what next-generation architecture might look like, keeping a close eye on key market drivers and technology requirements. It is a research and development domain that allows us to explore how the demands of industry, technology and society are going to impact communications – a space for our engineers and creatives to let their minds run riot and get quite philosophical about optical fibre.
14. You’ve already shared some thoughts on how the market might develop in terms of the general demand for faster and faster networks. More specifically, how do you see Cloud and edge impacting your market sector?
Cloud computing will continue to grow dramatically and require more and higher bandwidth connectivity between and inside mega data centres. Cloud data centres have already converted to single mode fabric cabling which will support several future technology generations. Cloud connectivity inside colocation data centres and hybrid Cloud implementations will facilitate strong growth in this sector.
The primary use cases driving Edge compute forecasts are IoT and autonomous vehicles with 5G networks supporting much higher data transport requirements. Some form of Edge compute evolution will occur with Cloud and Telco players competing strongly for the emerging business. It is too early to predict the detailed shape and scale of Edge computing but we do know that some form of Edge computing will evolve and that there will be lots of fibre involved.
15. And do you see IoT making a big impact on FibreFab’s customers over time?
I’m sure our Hyperscale, Telco and Colocation customers will all compete for the emerging IoT data transport and compute business.
16. In terms of FibreFab’s market presence, where is the company right now?
We had commercial activity in over 90 countries last year and our market presence is always growing. We know where we want to be and we recognise we have a way to go.
17. And where would you like to be in, say, two years’ time in terms of geographical coverage and, say, specific industry sector success?
We already have global coverage, for now we are happy to spend our efforts maximising the opportunities in the countries we are already operating in.
In relation to industry sector success, we have a vision and we have a plan of action to achieve it. When you think fibre optic networks we want the market to think FibreFab. We have made a profound impact in the hyperscale and dc space already so those that are close to us, are already aware of our capabilities.
18. How do you believe that FibreFab differentiates itself in what is quite a crowded optical networking marketplace?
A responsive, customer centred service, our ability to act fast, and our willingness to engage in custom design to solve problems as opposed to just off the shelf solutions.
19. And, without giving away any secrets(!), how do you see FibreFab developing to stay relevant/keep ahead of the market in the next year or so?
We will continue to be the disruptor in the market and challenge the status quo created by legacy brands. We will then enter new markets, release new product offerings and continue to shake up and challenge the market.
20. What are the one or two pieces of advice you would give to end users who are seeking to understand their optical networking options as they try to keep on top of the demands of the digital world?
Firstly, pay attention to and think about how best to use the lessons learned by the Cloud computing players – software defined everything, disaggregation of hardware and software, resilience not redundancy and provisioning more than sufficient bandwidth.
Secondly, focus on fundamentals and don’t let brand power manage your decisions. Be open to information and recommendations from diverse sources. Then call FibreFab.
21. Any other comments?
Thank you for your questions!
A recent PwC report found that the buzz around Industry 4.0 has moved on from what some saw as marketing hype in 2013 to investment and real results today. But preparing for a digital future is no easy task. It means developing digital capabilities in which a company’s processes, people, culture, and structure are all aligned toward a set of organizational goals. And for most companies the ultimate aim is growth.
By Terri Hiskey, Vice President, Product Marketing, Manufacturing at Epicor Software Corporation.
In manufacturing, the challenges for companies are multifarious because digital transformation impacts every aspect of operations and the supply chain, from equipment and product design, through to production processes, logistics and service. Industry 4.0 trends now require new, more sophisticated levels of collaboration across geographies on everything from product roadmaps and engineering specifications to production line management and information about parts.
The PwC study found that companies expect to significantly increase their portfolios of digital products by 2020. In addition, our own research has highlighted the importance businesses are placing on digital transformation today, with two in five industry professionals agreeing that digital transformation will offer them strong opportunities for growth in the future.
Whatever manufacturing model they are working to, such as engineering-to-order (ETO), make-to-order (MTO) or make-to-stock (MTS), manufacturers are increasingly working with an ever-broadening range of software systems. The challenge is to ensure that software integrates properly and data is shared effectively across all systems.
Another challenge is that migrating to new solutions can incur heavy costs. Many manufacturers have existing legacy systems and technologies that do not provide the functionality, integration and upgrade capabilities required to become a truly digital organization. These organizations should be prioritising what technology is needed at different points on their digital transformation journey.
The reality is that digital transformation is an ongoing process for most companies–they are required to continuously assess when and how fast to migrate their technology. For some this means struggling with digital debt that can restrict the potential of the business for growth.
Digital debt manifests itself in IT cost burdens, due to decisions taken on legacy technology years ago, but also in unsupported old releases or isolated systems that may hold a business back from its potential. In fact, in a recent report, Forrester defines digital debt as ‘the opportunity cost resulting from retaining technologies, systems, and processes that constrain a firm’s ability to become a digital business.’
To avoid being constrained by digital debt, manufacturing businesses must adopt technology that’s customer-led and insight-driven. Forrester recommends three steps for manufacturers to migrate from legacy systems in support of a customer-centric operating model.
1. Identify migration priorities
The first step is to understand the role of digital transformation and how it can help your manufacturing business grow. The best way to do this is to analyze the current state of all systems, from R&D, procurement, production, warehousing, logistics, and marketing to sales and service. Assess these systems for their ability to put the customer at the centre of business operations—for example, do they allow for customer engagement? Do they help a customer achieve what they want to achieve? Businesses should also assess systems for their ability to provide and act on insights, and work in real-time, whilst connecting with other areas of the business.
Crucially, however, make sure you assess the capability of technology against your business goals. Once this assessment has been done, manufacturers can identify migration priorities in their journey towards digital transformation.
2. Estimate costs one pain-point at a time
It’s important that organisations don’t try to radically overhaul all of their systems at once. A report by Accenture suggests that for organisations to shed systems and behaviours that are relics of the past, they need to establish spending priorities based on what will yield the best returns, and help them keep pace with growth objectives.
Invest in technologies that add strategic business value and develop organizational capabilities that are aligned to your business goals. For example, investing in cloud can expand collaboration along the value chain. For some systems a complete and immediate replacement might be the best option. But for others, such as applications that work well but simply look a bit outdated, a full replacement might not be necessary straight away. Here, an update might be a more cost-effective and immediate solution.
3. Map the remediation journey
As Forrester recommends, manufacturers need to ‘build your migration road map consistent with your migration urgency – your digital debt.’ Those that attempt to retain complex, disconnected, networks of legacy applications and systems will limit their ability to put customers first, constrain their digital transformation and restrict business growth. That’s why it is so important for organisations to set out clear goals for legacy migration and constantly re-evaluate their progress.
Turn insight into action and empower your business with the right people, processes and culture to foster change. Digital transformation has already had a profound impact on the manufacturing industry and it shows no sign of slowing down. Remaining competitive means being able to put customers first, and putting customers first requires manufacturers to grow and work in the digital landscape.
The digital model fuels key competitive differentiators, including the ability to extend transactions into experiences, to connect with customers and suppliers anytime and anywhere, and to translate data into strategic insights. Manufacturers that embrace digital transformation now are set to pave the way for business growth tomorrow.
Currently, around 23 billion ‘things’ are connected to the world’s numerous communication networks, and more are joining at a rapid pace. For consumers, these include everything from connected fridges to watches. The general hunger for both novelty and innovation is swiftly increasing as industry goliaths continue to release new products into the market.
By Richard Smith, regional manager at SOTI.
As machine-type communications usher in the fourth industrial revolution, it is predicted that at least 65 per cent of businesses will adopt a mass of connected devices by 2020 - more than twice the current rate. Manufacturers, logistics firms and retailers will be the first movers in this ‘internet of things’ (IoT) revolution, as they look to connect and automate process-driven functions.
Adopting connected devices can benefit almost every business: investment in prominent technologies such as mobile devices is seen as key to serving staff and customers well, and to gaining a greater understanding of consumers.
However great the perceived benefits, connected devices bring new business challenges around scale, interoperability, security and the management of devices and endpoints. Starting at the coal-face, for those of us who rely on mobile devices for work, the emotional fallout can be hard – recent research revealed that 59 per cent admit to being stressed by technology, and 29 per cent even voiced fears of losing their jobs due to technological failures. The stress of technical failures is 13 per cent higher for business owners, with 72 per cent concerned about the potential cost of data loss.
To ride the tech wave, enterprises must have a clear strategy for mobility management. It is essential to cover traditional devices and non-traditional ‘things’, such as connected cars, taking in technical issues like interoperability and security as well as more straightforward ones like filtering the vast new oceans of data and deciding what to save from the catch. Without a strategy in place, companies will find themselves throwing infinite resources into connecting everything to the internet, rather than just the things that are crucial. So, what do businesses need to streamline mobile and IoT device management and harness the potential opportunities?
STEP 1: INTEGRATION
It seems device management is the most challenging task facing the market, as 45 per cent of companies are failing to enforce restrictions such as blocking apps. At a basic level, connected devices must be properly coordinated if businesses are to easily access and manipulate the data available to them, regardless of its origin. Using an integrated suite of mobility solutions offers a clever, quick, and reliable way for businesses to build their apps faster and manage their mobile devices and IoT endpoints.
Additionally, a closely integrated device and IoT management system can bring added benefits to companies seeking to bring order to the rising confusion of IoT connectivity. Businesses must recognise what can be achieved through IoT, not just by creating “smart” devices, but by providing business intelligence and improving productivity, cutting costs and improving the customer experience. Refined mobility management solutions give real-time insights into remote device performance, which can be tapped into by help-desk teams to run device diagnostics, solve technical issues and maintain staff productivity.
Likewise, the most cutting-edge device and IoT management solutions cover rapid cross-platform app development, so businesses can deploy enterprise applications for their own specific devices in a fraction of the time. Ultimately, while network interoperability must be solved by the technology industry at large, the working integration of connected devices is the responsibility of leadership teams and IT departments within enterprises themselves.
STEP 2: SECURITY
The recent WannaCry ransomware attack, which impacted 200,000 computers globally, makes it all too clear that this connectedness makes us vulnerable. By 2020, it is estimated that the number of connected devices will reach 30 billion, and with each new device comes a new way for criminals to access the system.
Undoubtedly, mobile IoT devices must be secured and maintained properly, but while governments and industry bodies work out the detail to increase minimum security-levels, it is essential that enterprises consider their own network, device and data security. New devices should have the right security certifications but much more can be done to support devices and data.
Companies should expect device management solutions to enforce authentication, including biometric and two-factor authentication, in order to stop unauthorised access to valuable company data and documents. They should also expect full device storage encryption to ensure sensitive company information present on mobile devices in the field is as secure as data on an office-based workstation.
In case they are lost or stolen, IoT devices should also be remotely trackable and wipeable, while the wireless access and network connection must remain constantly private and secure.
STEP 3: SIMPLICITY
Approximately 90 per cent of all data has been created in the past two years. The sheer volume of data available to us is overwhelming, and intellectually crippling if it is not understood and processed swiftly.
Likewise, companies must efficiently filter and understand the data they capture. Businesses should take deliberate stock with specialist data analysts and mobility management providers, and evaluate the types of data they have – looking at the insights they can gain, and how these will distinguish them.
It also requires experimenting; the process to insight and differentiation is iterative. It is foolish to jump into this sea of data and try to swim; it is far better to build a vessel on dry land, test it in the shallows, and then to guide it towards new horizons. Once the boat has been constructed and set afloat, the main navigation can be automated with periodic check-ups to master the course.
Human input is crucial from the beginning and throughout, but the most recent data analytics and machine learning engines can lighten the load – especially as the sea widens with the flood of new data from new ‘things’.
For businesses entering uncharted waters, it is vital not only to ‘think big’ but also to pay extremely close attention to detail. Their approach needs to be right for their strategy and market. Trying to achieve too much at once can end up being counterproductive; the real value from IoT lies in doing the smaller things well and building on that. Companies which refuse to take these precautionary measures will find themselves drowning in data. By focusing on integration, device management and interpreting data, businesses can avoid falling adrift and ride the wave of success.
Dealing with the dark side of the digital world – cyber war by stealth.
By Paul Darby, Regional Manager, EMEA at Vidder.
Security attacks have become commonplace in recent years, with the number of breaches rising exponentially and the nature of the events evolving over time. DDoS attacks, for example, dropped in number by 30% over the past year, according to the Akamai State of the Internet report for Q1 2017, but web application attacks rose 35% over the same period. Verizon tells us in its annual Data Breach Investigations Report that around 73% of breaches are financially motivated, carried out by criminals in and out of the victim organisation who are looking for ways to make money. Equally, they say that 21% of breaches relate to cyber-espionage.
While companies worry about the threat of ransomware, prompted by the recent WannaCry attack on the NHS, or data theft like that experienced by payday loan company Wonga earlier this year, there is another worrying trend to add to the list – politically motivated, or even state-sponsored, cyber-attacks.
What we are witnessing is the dark side of the digital world – cyber war by stealth.
In the US, the Presidential election campaign was famously affected, with leaked communication from hacked servers and attempts at voting machine manipulation. The new French President, Emmanuel Macron, was also targeted by a coordinated hacking attack which saw thousands of internal emails and other documents released in an attempt to destabilise the vote. A report in Wired later said that despite clues pointing to Russian involvement, the evidence was inconclusive.
Western media tends to get worked up about the Russian threat, but it’s worth bearing in mind that, according to Akamai, the US is the top source of web application attacks, while Russia comes in below the UK, in eighth place. Edward Snowden’s claim that the US’s National Security Agency has masterminded a huge hacking operation aimed at threatening not just organisations but entire countries is still reverberating around the world. He has also leaked documents alleging that GCHQ in the UK worked with the NSA in the US to run hacking attacks against Google and Yahoo. Meanwhile, China, Iran, North Korea and Israel also stand accused of nation-sponsored hacking and launching DDoS attacks to their own ends.
The problem with fighting a cyber war rather than a conventional war is that it’s not always clear who the enemy is. Attribution is tricky because if cyber criminals want to maintain their personal anonymity, they can use in-country assets to hide behind and guard their locations. Stealth is the watchword for those keen to avoid being held accountable, and the massive changes being wrought by the digital revolution are facilitating this stealthy approach. But the difficulty of attributing attacks to individuals, or even to governments, should not distract us from the seriousness of the assaults or the financial and operational damage being caused to organisations. Degrading the trust in institutions and economies that is needed for civilisation to function simply adds fuel to the flames of the cyber war.
As nation states become more active in the cyber black market, the lines between ‘hackonomics’ – the buying and selling of hacked or stolen data for profit or political gain – and nation-sponsored cyber wars get fuzzier, in what is already a complex and blurred landscape.
This is particularly difficult for democracies with competing interests, or long-held animosities. Let’s take Ukraine as an example. Over the past three years the country has suffered a sustained and highly damaging series of cyberattacks. The most notable took down the electricity grid just before Christmas 2015, and again exactly a year later in 2016, but these have been just part of a campaign that has seen the hacking of media, finance, transport, military and political targets, eliminating data and destroying computers.
As if to ramp this up further, in May Ukraine accused Russia of mounting an organised cyberattack on the website of the country’s President, Petro Poroshenko. A BBC report said that this was in response to Kiev’s decision to impose sanctions against several influential Russian social media networks. Poroshenko is an outspoken critic of Russia, claiming publicly in December last year that there had been 6,500 cyberattacks on 36 Ukrainian targets in the previous two months. He said that Ukraine’s investigations point to the ‘direct or indirect involvement of secret services of Russia, which have unleashed a cyberwar against our country’.
It’s therefore no surprise that in the latest crippling ransomware assault to hit the world this week (at the end of June), Ukraine was the first to be affected, with companies, airports and government departments being struck before the attack spread across Europe.
Further complexities arise when commercial interests come into play. Western technology companies have been contemplating demands from Russian regulators for access to the source code of security products, including firewalls, anti-virus applications and encryption software. The requests are made prior to permission being granted to import and sell these products, on the basis that the products need checking in advance to ensure they are not being used to spy covertly on Russian systems.
This puts the tech companies in a dilemma. They know that inspections could provide an opportunity to find weaknesses in the products’ source code that could be used for malicious attacks, but do they walk away knowing that they are missing an opportunity to supply a huge demand for solutions?
According to Reuters the requests for source code have grown rapidly since the souring of relations between Russia and the USA following the annexing of Crimea in 2014.
There is no doubt that successful civilisations work in an atmosphere of mutual trust. It is a critical foundation of the digital age, and a lack of trust undermines progress. However, while we struggle to withstand the onslaught of attacks in the current cyber war, without being able to identify the attacker, trust has to be very cautiously meted out. I would even go so far as to say that a ‘zero-trust’ approach should be taken.
The security strategies adopted by companies in today’s complicated IT landscape have to change. They must, on the one hand, be able to take advantage of digital opportunities, such as cloud computing and the provision of web applications, but they also need to ensure that any access to those critical business functions is totally secure and strictly controlled.
The only way to do this is by using systems that provide granular access controls to assets based on trust, measured across all devices, software, users and systems at all times. Connections should be permitted only on the basis of deep knowledge of where a connection initiates and where it is going, validation of the relevant credentials, and continuous monitoring to ensure access is restricted to approved assets.
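As a rough illustration of that principle, a zero-trust decision can be thought of as a conjunction of signals: every check must pass, every time, before a connection is permitted. The sketch below is a minimal model only; the user names, network names and policy data are invented for the example, and a real deployment would evaluate these signals continuously rather than once.

```python
from dataclasses import dataclass

@dataclass
class Connection:
    user: str
    destination: str
    source: str               # where the connection initiates from
    device_trusted: bool      # device posture check passed
    credentials_valid: bool   # e.g. MFA succeeded

# Hypothetical policy data: which users may reach which assets,
# and which networks connections may initiate from.
APPROVED_ACCESS = {("alice", "billing-db"), ("bob", "web-frontend")}
KNOWN_SOURCES = {"office-lan", "corp-vpn"}

def is_permitted(conn: Connection) -> bool:
    """Zero-trust check: deny unless every signal passes."""
    return (
        conn.device_trusted
        and conn.credentials_valid
        and conn.source in KNOWN_SOURCES
        and (conn.user, conn.destination) in APPROVED_ACCESS
    )
```

The important design point is the default: anything not explicitly approved is denied, so a single failed signal (an unknown source, an expired credential, an untrusted device) is enough to block access.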
Larger populations of devices are accessing more complex, shared infrastructures in response to user demand. It is a feature of the digital age and supports the way that we work, communicate and live. But sharing also means exposing our systems to the burgeoning community of cybercriminals and their sponsors, and this we cannot afford to do. They are living beyond the reach of law enforcement, and only time will tell if they are also beyond the reach of international conventions and treaties. A zero-trust approach is the only way that, for now, we can protect ourselves against the insidious creep of cyber warfare.
With the amount of data organisations have access to growing exponentially, many businesses are still trying to grasp what they should be doing with it once they have it. If harnessed correctly, big data has the potential to transform businesses. But there are also many factors that can prevent you from getting to this point.
By Andreas Lang, analytics engineer, Aquila|Merkle.
So what are the most common issues around big data, and how do you avoid them? In order to understand the issues, you first need to understand the term. Academics describe big data as data that is beyond the ability of commonly used software tools to capture, manage and process within a reasonable time. This description alone gives some indication of how tricky it can be to get big data right.
Here are the most common issues faced by businesses looking to tackle big data and how you can avoid them.
Big data is all the rage right now; a quick Google search will pull up a thousand articles telling you that your business will implode if you don’t already have an effective big data strategy in place. But in reality, the hype around big data doesn’t mean your business has to act right away. Ensure that you have a problem to solve, or a use case, before jumping to anything technical. This should be something that your traditional tools are failing to solve – perhaps your business is relying on extreme subsampling or aggregation just to get results within the time constraints imposed on it. The important thing is that you have a problem that’s relevant. If you don’t, then you may well not need big data at this particular time.
If you do decide that your business needs big data, make sure you don’t fall at the first hurdle. The most obvious place your data pipeline can fail is at the point of capture. It won’t help having the infrastructure to store and process big data if you’re unable to capture the right type in the first place. It’s better to have less of the right data than more of the wrong data. Capturing the wrong or invalid data, or failing to capture the right data, is a big issue and more common than you’d think.
Once you’ve made sure that big data is definitely useful to your business, you’ll need to ensure that the data you’re capturing is ready for processing. This is the tricky part, as even proving the worth of captured attributes may require a platform more powerful than the tools at your disposal. Despite this, be careful. Don’t rush to commit to a binding contract or expensive analytic kit right away. Instead, hire a commercial provider to supply you with a trial platform, or use one of the various cloud providers with temporary infrastructure. Feed your data into this, and ensure that the data input is quality controlled and that the processes can be repeated if required. Once you start to merge these different data streams, you may realise that attributes needed for a proper merge are missing. Make sure to fix your existing processes first; this is the area where you really need to take your time and get it right.
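To make the quality-control step concrete, a very small sketch of the idea follows: reject records that lack the attributes needed to merge, then join the validated streams on a shared key. The field names (`customer_id`, `timestamp`) are invented for illustration; any real pipeline would have its own keys and far richer validation.

```python
REQUIRED_FIELDS = {"customer_id", "timestamp"}  # hypothetical join keys

def validate(records):
    """Split a stream into records that can be merged and records
    missing the attributes the merge depends on."""
    ok, rejected = [], []
    for r in records:
        (ok if REQUIRED_FIELDS <= r.keys() else rejected).append(r)
    return ok, rejected

def merge(stream_a, stream_b):
    """Merge two validated streams on customer_id (a simple dict join)."""
    by_id = {r["customer_id"]: dict(r) for r in stream_a}
    for r in stream_b:
        by_id.setdefault(r["customer_id"], {}).update(r)
    return list(by_id.values())
```

The point of keeping the rejected records, rather than silently dropping them, is that they tell you which upstream capture process needs fixing first.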
The hardware investment you kept to a minimum during the ‘experimentation’ phase is something you’ll face at some point, but an equal or even bigger part of your investment needs to be in people. I’ve witnessed instances where expensive hardware was in place but there was no in-house capability to use it. Take your people with you, and the long-term gains will be much higher and the results rewarding.
Don’t treat data as a project. A project has a start, milestones and an end and you shouldn’t expect things to work like this. Your data strategy needs to allow you to adapt to change quickly and incorporate new data streams as needed. If you don’t take care of the long term management, your data ‘project’ might transform into a data tomb.
You’ll have noticed that technical issues didn’t feature prominently, and there’s a good reason for that: there aren’t many technical reasons why big data fails to deliver. If there is a technical issue, it can usually be solved before the data enters the system, or by ensuring you have bright people to work up a solution.
Failure is often caused by a lack of purpose, a lack of prior experimentation, a lack of the right people, or a failure to adapt to change. Before assuming your data isn’t working, take a step back and think about why.
In just a few years, free public Wi-Fi has rapidly become an expectation for today’s mobile-wielding consumers, especially in cities. Consumers expect to be able to connect everywhere – from shops and restaurants, to parks and museums. This has led to an increased provision of free, public Wi-Fi. The increased variety and use of personal devices, coupled with the rise of social media, means that everyone expects to stay connected wherever they go, and this requirement for free Wi-Fi is only set to grow as the Internet of Things continues to gain traction.
By Nick Watson, VP EMEA at Ruckus Wireless.
Around the world, cities are becoming more connected. Collecting data everywhere helps planners make smarter decisions and deliver new services. Before they are able to start meeting those demands, though, they need to plan for capacity and speed to ensure a high-quality experience. A robust wireless network is a key part of this preparation – it is the “glue” that holds smart cities together, enabling effortless sharing of workloads with datacentres and bridging connectivity across wired and wireless. So, what can we expect from the smart cities of the future?
Creating digital communities
Many of us take Internet access for granted. The reality is half of the world’s population does not have access to the Internet, creating a digital divide. Smart cities will help address the economic and social inequality caused by this divide, by providing Internet access to all citizens.
Bridging this divide will help bring communities closer together and encourage citizens to play a more active role in local government. Flawless connectivity will improve city infrastructure and make it possible to engage with the community by removing the roadblocks that complicate access to local services.
Wi-Fi is the platform that will provide the foundation for smart city success, as it has immediate applications and can effectively connect a vast range of wireless technologies that will be involved in creating smart cities. As Jesse Berst, Chairman of the Smart Cities Council puts it – “fast, reliable broadband is the backbone of a smart city. It’s Job One.”
Connecting tourists with city-wide Wi-Fi
When travellers arrive from abroad, the first thing they do is switch off their mobile data. However, this is actually the precise moment when they need it the most. Data is essential to help them navigate the city, providing access to information such as maps and local amenities. They will always be looking for Wi-Fi to enable their journey to continue smoothly.
Smart cities will be equipped with the technology to help tourists make their way with continuous connectivity. Whether it’s used for accessing local bus timetables on their mobile devices or downloading maps to local museums, city-wide Wi-Fi is key to connecting people to knowledge. In today’s world, access to the Internet is considered a necessity. Connectivity should not drop as people move between shops or hop on and off transit.
Smart utilities for sustainable cities
In a smart city, lighting will automatically be switched off when it isn’t needed. It will be able to detect when people are on the street and work accordingly, reducing energy waste. In the near future, we can expect to see more city planners equipping their streets with smart lighting that uses sensors to track when there is high or low public footfall.
Future smart traffic management is likely to be a core feature of smart cities. This includes centrally-controlled traffic sensors and signals automatically regulating the flow of traffic in response to real-time demand, with the aim of smoothing flows of traffic to reduce congestion.
New technologies will play an important role in helping cities of the future promote sustainable energy use. For example, “smart bins” that alert collectors when they need to be emptied are already in use today, and we can expect to see more of them crop up in cities across the world as they embrace smart technology.
Transforming business for tech-savvy commuters
Revenue-generating applications will transform the way businesses in smart cities communicate with their customers. In addition to an increased use of digital signage, to communicate offers and promotions, we can expect to see an increased use of beacons, which send notifications to customers’ smartphones as they enter a store.
It will also transform the way people work and tech-savvy commuters will benefit from smart city technology to work on-the-go.
The continued spotlight on DevOps is understandable: the theory is that it bridges the traditional gap between the development and operations teams, so that they can work together in a more end-to-end, seamless way. This contributes to faster, more efficient release of software, which in turn improves the time-to-market of products and services or enhancements to internal projects, and can even improve compliance with legislation or industry standards.
By Sven Erik Knop, Principal Solutions Engineer at Perforce Software.
When it works well, DevOps is also a natural partner to other methodologies, such as Agile, Continuous Integration and Continuous Delivery. These all share some common attributes, such as rapid feedback to enable faster response to change or customer requirements, greater collaboration and transparency among contributors.
The theory is sound, but practical implementation is a totally different matter, and the truth is that many organizations are still finding their way with DevOps. The good news is that some organizations have forged ahead with DevOps adoption, and as they share their experiences, there is more knowledge out there that can be passed along to their peers.
While DevOps is going to differ from organization to organization, some common threads have emerged. For instance, central to most DevOps visions is having a single, unified view of a software project. Often referred to as the ‘single source of truth’, the idea is to have everything in one place: not just code, but all the components of the environments in which they run (apps, production and pre-production environments). The goal is to improve communications, visibility and the ability to unearth and address any issues early on.
The ‘single source of truth’ can also give teams the ability to recreate a production environment based on what is in the single source of truth (typically a version control system), so that they can see how that app might behave in a real production environment (rather than waiting until it’s in production and then finding out if there are problems, which is not ideal if it’s already in customers’ hands).
Problems can be corrected and, because all versions are in the system, it’s possible to roll back to a previous version and see where a problem occurred. Also, giving developers the ability to run tests on code in their everyday work, rather than waiting until it is in the hands of the operations team to create test cases, means that problems can be spotted and fixed early on. This is also good news for the Operations team, because some consistency and prevention of errors is built into the process at an early stage. Plus, developers can experiment safely and be more innovative, knowing that they can roll back to a previous version and are not changing the final production environment.
Good DevOps practice includes automation and self-service as much as possible, particularly within large-scale projects. Version control systems can also help to automate a lot of the work involved, such as notifying the release automation system that a release is ready based on the latest change. With that foundation in place, it becomes easier to spin up production environments, or servers to host applications, because the manual effort has been removed. This is why we often see infrastructure as code (IaC) alongside DevOps implementations. IaC is a virtualised approach in which communication services, memory, network interfaces and processors are governed by software-defined configurations.
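The core idea behind IaC can be sketched in a few lines: infrastructure is declared as data, and a tool computes the changes needed to converge what is actually running on to what was declared. The server names and specifications below are purely illustrative; real tools work against a provider’s API rather than a hand-written dictionary.

```python
# Desired state, declared as data rather than provisioned by hand.
desired = {
    "web-1": {"cpus": 2, "memory_gb": 4},
    "web-2": {"cpus": 2, "memory_gb": 4},
}

# What is actually running (in practice, queried from the provider's API).
actual = {
    "web-1": {"cpus": 2, "memory_gb": 4},
    "web-3": {"cpus": 1, "memory_gb": 2},
}

def plan(desired, actual):
    """Compute the changes needed to converge actual state on desired state."""
    create = [name for name in desired if name not in actual]
    destroy = [name for name in actual if name not in desired]
    update = [name for name in desired
              if name in actual and desired[name] != actual[name]]
    return {"create": create, "destroy": destroy, "update": update}
```

Because the desired state is just text, it can live in the same version control system as the code, which is exactly what ties IaC to the ‘single source of truth’ idea discussed here.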
There are a few caveats: it is likely that contributors will have their own preferred platforms, systems, toolsets and workflows. For instance, in the electronics market, the binary file assets created by analog designers are not intuitively designed for easy sharing. Likewise, embedded software managers might have code, library and object files spread across different platforms, each with their own configuration requirements. Expect to see similar disparities in other industries, such as financial services, pharmaceutical and life sciences and semiconductor design.
So, it is probably going to be very important that the tool being used for the ‘single source of truth’ – typically a version control system – can support a variety of users and working environments, providing the transparency that the whole group needs. However, this tool needs to avoid imposing new workflows and technologies on workers (as we all know, people don’t like having their favourite tech tools taken away from them, or being told to use new ones). Make sure there are clean integrations with the main tools that users prefer, whether ‘out of the box’ or easily created using an open API.
Furthermore, this single source of truth needs to be able to support a wide variety of digital assets, binary and non-binary. Depending on the industry, this might easily include art files, CAD drawings and support documentation. Failing to include all the files associated with a project could not only affect the final product, but could also derail compliance. On this point, the single source of truth can also make work easier for external auditors, while reducing the amount of time that internal teams need to spend collating compliance information (because it is already stored in the single source of truth).
For this reason, the system should also provide an immutable history of events (in other words, the facts cannot be changed retrospectively, for instance, when a piece of code is checked in, the record of that event cannot be altered, regardless of what happens with that piece of code in the future).
It is also important to not just focus on the initial stages of the software development, but instead, to look at the entire digital asset lifecycle, with a unified pipeline that enables automation and testing at every stage of a digital asset’s life.
These stages include: ideation (such as a feature request), definitions, design (including requirements), development, testing, deployment, release and maintenance. This end-to-end view makes it easier to ensure that nothing is accepted that deviates from the ‘single source of truth’ (or if it does, it is easier to spot). This might include a developer being asked to work on something that was not specified in the original set of requirements.
Finally, consider how fast a project can scale and whether the ‘single source of truth’ can accommodate escalating growth, users and complexity. After all, what starts off as an idea for a simple application can easily evolve into something more ambitious, with more features and involving Gigabytes, if not Terabytes, of digital files in the development process.
DevOps is, of course, fundamentally a methodology and is primarily about processes, user behaviour and cultural adoption. That said, some of the most successful DevOps projects in which we have been involved have embraced the role that supporting technology tools play in achieving good DevOps. While creating a ‘single source of truth’ and other technology approaches are only part of the recipe for DevOps success, they make a significant contribution.
The changing dynamics of today’s digital world are pushing enterprises, irrespective of industry, to adapt not only products and services, but also business processes. Technical innovations are constantly evolving and driving digital assets to become a key priority for businesses. Research estimates that B2B spending on Internet of Things (IoT) technologies, apps and solutions will reach US $267 billion by 2020. IoT is on a path of rapid growth, so it’s critical that enterprise leaders consider how to measure the full impact of their IoT investments, as well as how to remain relevant and sustainable.
By Jayraj Nair, Global Head of IoT, Wipro Digital.
IoT is just one of the ways that companies can digitally transform to improve customer experience and to undertake new ways of working. A recent study conducted by Wipro Digital found that 94% of C-suite leaders surveyed believe that investing in IoT, data and connectivity will save their company more than US $50 million over the next five years. Nearly 40% believe the savings could even total US $100-$250 million. However, the potential rewards of IoT investments can - and should - go beyond money saved. To define an accurate picture of return on investment (ROI) success, companies should also consider, and potentially even prioritise: efficiency gains, improved customer service and technological investment.
The recent study also found that 58% of respondents felt increased operational efficiency was the greatest benefit of investments in IoT, data and connectivity. This is likely because IoT has enormous potential to increase outputs while lowering inputs across multiple departments and industries. An example of IoT in practice can be seen in recruiting services. A recruiter will speak to a candidate, and then input data into a specific portal, filling in lots of different forms and perhaps repeating information. This process takes time and requires accuracy. By implementing a digital automation process and streamlining this task, the business can improve efficiencies, whilst at the same time using data collated to understand customers better. To increase ROI on IoT investments, leaders must closely examine which areas of their business stand to gain the most when it comes to gaining efficiency. This will likely differ not only industry-to-industry, but company-to-company.
Businesses should constantly be asking themselves how they can create an experience that continually adapts to changing customer needs. An organisation that can enhance their current value proposition, by tailoring it to the individual wants and needs of their customer, will ultimately create an overall positive customer experience.
Most companies are lagging when it comes to delivering better customer experience in the digital age. Getting customer experience right is paramount – especially for a customer base increasingly comprised of millennials, a generation that has come to expect a customised, seamless customer experience at all touch points. IoT has the ability to make strides in customer service quality, yet nearly half of respondents (46%) in the survey believe innovation of products and services is a major beneficiary of investments in IoT, data and connectivity.
To drive customer service and experience-related ROI, companies can implement IoT to streamline customer service and iron out pain points in several ways. The most significant of these is the ability to collect data continuously and in real time. When customers interact with an IoT-connected product, the item gathers detailed information, allowing companies to deliver highly customised customer service and eliminate previously unseen experience-related issues. Leveraging the technologies and data they may already have, and morphing these into the technologies of tomorrow, can reinvent any enterprise.
As companies strive to compete in an increasingly digital world, investments in IoT, data, and connectivity are the key to exceptional customer experience, streamlined operations, and self-innovation. These types of technologies bring a whole host of benefits, especially the potential to improve a company’s security. According to the survey from Wipro Digital, many leaders are confident in the security benefits of IoT, with 58% of respondents from multinational tech companies seeing security and safety as one of the top areas benefitting from IoT, data and connectivity.
IoT systems, for example, deployed with real-time analytics and intelligent access control, such as sensors and cameras, can prevent a security breach by alerting the relevant administrative personnel for appropriate action. Additionally, understanding potential security risks ahead of investments in IoT can help companies prepare for possible attacks, as well as lowering the costs associated with the aftermath of an unforeseen attack. Pre-empting attacks may initially require companies to invest more on defence, however in the long run, installing this type of technology will ultimately help a business save money by avoiding damage-costs and preserving the valuable data they own.
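A minimal sketch of the kind of access-control rule described above might look like the following. The badge IDs, working hours and event shapes are all invented for the example; a real deployment would feed events from actual sensors and cameras into a rules or analytics engine.

```python
AUTHORISED_BADGES = {"B-1001", "B-1002"}   # hypothetical badge IDs

def process_event(event, alerts):
    """Append an alert for any access attempt by an unknown badge,
    or for motion detected outside assumed working hours (08:00-18:00)."""
    if event["type"] == "badge" and event["id"] not in AUTHORISED_BADGES:
        alerts.append(f"Unknown badge {event['id']} at {event['door']}")
    elif event["type"] == "motion" and not 8 <= event["hour"] < 18:
        alerts.append(f"Out-of-hours motion on camera {event['camera']}")
```

The value of evaluating events as they arrive, rather than in a nightly batch, is that administrative personnel can be alerted while there is still time to act.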
Each and every business, no matter what size, must understand its own wants and needs when it comes to adopting IoT into its business model. Digital is a way of working, and along with processes, products and services, becoming digital successfully is, in essence, a huge brand change, but one that has many benefits. The survey from Wipro Digital emphasised that a staggering 80% of respondents cited data and connectivity as their top strategic priority, highlighting that IoT is a clear driver for digital growth within all industries. Businesses considering investing in IoT technology need to think not only in terms of funds saved, but also in terms of efficiency gains, improved customer experience and technological investment.
A few weeks back I went to a cloud meetup in London. Everyone was talking about the technical wonders of cloud: the scalable infrastructure, the next generation data warehouses, machine learning models, ‘serverless' computing and all the bells and whistles. Lost in the conversation about next generation computing was that particularly challenging bit of managing the economics of cloud. Cloud forces a shift from thinking about IT spending as static and predictable, to something that requires ongoing management in a continuous, fluid cycle.
By J.R. Storment, Co-Founder & General Manager, EMEA, Cloudability.
DevOps and cloud metered billing models have decentralized the deployment of resources to more people, removed cost governance and involved more people in planning decisions. With the public cloud services market projected to reach $246.8 billion in 2017, an expanded focus on cloud economics is needed.
After watching thousands of companies go through this journey as part of my role as a co-founder of a company that does spend optimisation for cloud, I have seen a consistent financial journey play out over and over again. It goes something like this.
In the receding datacenter world you do a lengthy capacity planning exercise, model the costs ahead for 3 years, then get budget approval to make a sizable hardware and license purchase. The costs are predictable and they are CAPEX. Finance is comfortable with CAPEX as it can be depreciated over a number of years and it doesn’t affect the bottom line in the same ways as OPEX.
But the usage allocation of the spending in the datacenter is opaque; some hardware paid for from one budget may be used by another cost center. Or it may not be used at all. No one really knows for sure; but that’s OK because it’s all within budget.
One day the finance team reports that it is paying out expense claims for IaaS services (like AWS or Azure) submitted by rogue teams who don’t want to wait for servers to be provisioned. Eventually those claims get big enough that the finance team begins to push back. Rather than fight the rising tide of cloud, the company works with the IaaS provider to consolidate all the spending into a single payer (or subscription) account so that it can get better control over the spending — and also access volume discounts.
The problem is solved for a while, until it isn’t. While not the biggest pole in the tent, cloud spending becomes one of the fastest growing OPEX items. This makes the finance people uncomfortable and gets the attention of the C-suite.
Worse, if the company is delivering a service to its customers with cloud — e.g., a retailer — the spending is accounted for as COGS (Cost of Goods Sold, or Cost of Sales). As the finance people learned at university, the formula for gross margin is revenue minus COGS, divided by revenue. Because of this, every dollar spent on cloud directly affects the company’s margins.
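To see why that formula makes the finance team nervous, it helps to run the numbers. The figures below are purely illustrative:

```python
def gross_margin(revenue, cogs):
    """Gross margin = (revenue - COGS) / revenue."""
    return (revenue - cogs) / revenue

# Illustrative only: a retailer with $10m revenue and $6m COGS has a
# 40% gross margin. An extra $200k of cloud spend booked as COGS
# shaves that to 38% -- a visible hit from a comparatively small bill.
baseline = gross_margin(10_000_000, 6_000_000)
with_cloud = gross_margin(10_000_000, 6_200_000)
```

Small cloud overruns therefore show up directly in a number the board watches, which is very different from CAPEX spending depreciated quietly over years.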
As companies rush to the cloud to realize the benefits of speed and agility, they soon find that the spending controls they had in the datacenter have disappeared: hundreds of engineers at the company have the ability to affect the bottom line autonomously, every day, simply by clicking a few buttons to launch an instance in their IaaS console.
The company attempts to put the brakes on cloud projects, and a Director advocates moving them back into the on-premises estate. That idea makes the development teams unhappy: they want to work with the latest technologies, not go back to the world of the datacenter. This is the complicated cycle of cloud politics playing out across the business world today.
Often when costs go too high, someone suggests that the company could reduce the amount they are paying with pre-payment options like AWS Reserved Instances. But the finance teams don’t know how to account for them: Can they be capitalized? Are they prepayments of OPEX? What if the Reserved Instances aren’t used for the application they were purchased for?
For example: what if they were purchased for an R&D project and capitalized as CAPEX, but end up being used for a production system and magically become OPEX, thus affecting margins? The questions keep coming and no one has any good answers, so the company does nothing and puts off the decisions.
Six months later the budget for a key application, one that was to be the poster child for a planned move to cloud, comes back 50% higher than expected. It turns out the project had been modelled on the assumption that Reserved Instances would be purchased for all the resources being used; because of the Finance team’s unresolved questions, none were actually purchased. Further, the assumptions about the number and size of servers needed were too simplistic and didn’t factor in peripheral costs like data transfer or performance upgrades like increased disk IOPS.
The move to cloud isn’t just about technology. It’s a sea change in how an organisation manages its IT spending. When running in IaaS, the number of people whose actions impact the financials of the business explodes: everyone is responsible for the spending, and everyone can affect margins.
It’s much like the shift that happened in Japanese car manufacturing. The introduction of continuous improvement disciplines, like kaizen, brought the idea that everyone, from the factory worker to the CEO, has a daily duty to implement small changes that improve operational efficiency. This was in stark contrast to the prevailing models of improvement, which had long lag times between concept and implementation. The leap from ‘buy-early-and-infrequently’ in the datacenter to ‘buy-just-what-you-need-when-you-need-it’ in the cloud mirrors that transition.
The shift to a kaizen-like continuous optimisation model requires constantly available spending data and workers who are empowered (and rewarded) to make financial optimisations to their cloud usage. Teams need to see what they are spending that day, then see the impact of their changes in a tight feedback loop. It’s called the Toyota Prius effect: when your car’s dashboard shows how a heavy foot wastes fuel, you start to drive with a lighter one.
If this cultural shift sounds like a lot of work, that’s because it is. It can take many years at a large enterprise, and some of the people from the old days won’t make it through the changes. But the sea change is well worth it: kaizen transformed Japanese manufacturing into a dominant competitive force. A culture of kaizen-like cloud spend management can likewise transform the competitiveness of a company, allowing it to offer better services, at better rates, more quickly.
Here’s how that translates in the business world: two companies, each using cloud to sell cans of beans on their websites, may spend drastically different amounts to deliver the same services on cloud. One company has rightsized their resources, purchased RIs, and driven ongoing accountability for usage optimisation down to the level of all the workers. One has not.
The optimised company’s margins are higher as a result and they can reduce prices to further cement their dominance. The other spends more money to make the same revenue and has less pricing control. The impacts of this gap resonate all the way to their stock prices.
Not all companies record cloud spending as a “Cost of Sales”, but all companies that I’ve met struggle with the fundamental challenges of optimising their cloud spending, accounting for it properly and going from a static data center world to the continuous-improvement cloud world. Here are five key takeaways to start down the right path:
1) Build cloud financial muscle to improve agility
While many companies are focusing on the technology of cloud, we have found that not nearly enough are building similar muscle in cloud economics and financial optimisation. Furthermore, companies are unsure which teams need to ramp up on this, or how to accomplish it. Most teams don’t understand the significant complexity that metered billing brings to IT finance. Without this awareness, technology decisions made without regard for their financial impact can completely derail financial models. We recommend you create a cloud economics team in your Cloud Centre of Excellence and integrate cloud finance processes into day-to-day decision making at all levels.
2) Centralise paying less, decentralise using less
The most effective companies appoint a central cost team (sometimes called the ‘cost czars’ or ‘cloud economists’) to focus continuously on driving down the hourly rates the company pays through rate optimisations like AWS Reserved Instances (RIs). This team analyses hourly usage trends across the entire estate and works with IaaS providers on volume discounts and reservations. On the flip side, individual application teams are made responsible for reducing usage waste and rightsizing their resources with tools like Cloudability.
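As a sketch of the ‘centralise paying less’ half, a central cost team might pick a reservation baseline from hourly usage history and compare blended costs. Everything below (the usage numbers, the hourly rates, the percentile rule) is hypothetical; real RI planning also involves instance families, terms and payment options.

```python
# Hypothetical sketch of rate optimisation: reserve the capacity level the
# fleet rarely dips below, and pay on-demand rates only for the spiky
# remainder. Usage figures and hourly rates are invented for illustration.

def ri_baseline(hourly_usage, percentile=0.10):
    """Pick the usage level exceeded in roughly 90% of hours."""
    ordered = sorted(hourly_usage)
    return ordered[int(len(ordered) * percentile)]

def blended_cost(hourly_usage, reserved, ri_rate, on_demand_rate):
    """Reserved capacity is paid for every hour; overflow pays on-demand."""
    total = 0.0
    for used in hourly_usage:
        total += reserved * ri_rate                        # committed spend
        total += max(0, used - reserved) * on_demand_rate  # spiky overflow
    return total

usage = [8, 9, 10, 10, 11, 14, 20, 10]   # instances running in each hour
baseline = ri_baseline(usage)            # reserves the steady floor (8)

all_on_demand = blended_cost(usage, 0, 0.06, 0.10)
with_ris = blended_cost(usage, baseline, 0.06, 0.10)
print(baseline, all_on_demand, with_ris)
```

The point of the comparison is the one the central team makes daily: covering the steady baseline with reservations while leaving the spikes on-demand beats paying on-demand rates for everything.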
3) Empower your teams with spending data and insights
What gets measured gets improved. To improve your cloud efficiency, you need to track it closely, provide visibility into it constantly and give teams actionable insights to reduce what they are using. This means giving each application owner a filtered view of just their spending and holding them accountable to staying within their budgets. Benchmarking of your teams against their own prior performance and that of their peers also creates a healthy tension to improve. The data you give them needs to be up-to-date, precise and granular so they can understand the impacts of their decisions on the economics and adjust accordingly.
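A minimal illustration of such a filtered view, assuming a billing export carrying a team-level cost-allocation tag (the rows, services and tag names below are made up):

```python
# Toy per-team spending view. In practice the rows would come from a tagged
# billing export; these rows and team tags are invented for illustration.
from collections import defaultdict

billing_rows = [  # (team_tag, service, daily_cost_usd)
    ("checkout", "ec2", 120.0),
    ("checkout", "s3", 15.5),
    ("search", "ec2", 300.0),
    ("search", "data-transfer", 42.0),
]

def spend_by_team(rows):
    """Roll daily costs up to one total per team, for benchmarking."""
    totals = defaultdict(float)
    for team, _service, cost in rows:
        totals[team] += cost
    return dict(totals)

def team_view(rows, team):
    """Only this team's line items: what an application owner should see."""
    return [row for row in rows if row[0] == team]

print(spend_by_team(billing_rows))   # {'checkout': 135.5, 'search': 342.0}
print(team_view(billing_rows, "checkout"))
```

The design point is the filter: each owner sees only their own line items, while the rolled-up totals feed the peer benchmarking described above.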
4) Get clear on your migration goals and then model outcomes
Understand that there are an infinite number of migration journeys you can take, each with a different potential cost model. You could refactor, you could lift-and-shift, you could replatform. You could buy Reserved Instances, you could leverage new technologies, you could split workloads between clouds. Too many companies model their migrations with simple pricing calculators that don’t hold up to reality.
When working with customers, we leverage observed historical data across billions of dollars of cloud spending to determine probable outcomes and guide future strategy, providing insight into questions like “When and how should we migrate into cloud?”, “What will be the financial outcome of refactoring applications for scalable cloud architectures?” and “When and why should we replace old technologies with new ones, and what are the financial impacts?”. Remember, though, that the best strategy in the world doesn’t automatically produce outcomes, so ensure you have a team dedicated to tracking against your financial models.
5) Focus on unit economics over total spending
As you move more applications out of the datacenter or scale up your customer base on cloud, the cloud bill may continue to go up despite your best optimisation efforts. That’s okay. An increasing cloud bill can mean you are executing on your plans to migrate from the datacenter, or that your business is growing. Shift the conversation to unit economics: can we reduce the cost per transaction, per customer, per trade? These metrics give a more meaningful picture of cloud spend’s impact on the business than simply the total dollars (or pounds) spent each year.
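As a sketch, tracking cost per transaction rather than total spend might look like this (the monthly figures are hypothetical):

```python
# Hypothetical unit-economics view: total cloud spend rises month over month,
# yet the cost per transaction falls, which is the healthier signal to track.

def cost_per_transaction(cloud_spend: float, transactions: int) -> float:
    return cloud_spend / transactions

months = [  # (month, cloud spend in $, transactions served)
    ("Jan", 100_000, 2_000_000),
    ("Feb", 130_000, 3_250_000),
    ("Mar", 150_000, 5_000_000),
]

for name, spend, txns in months:
    unit = cost_per_transaction(spend, txns)
    print(f"{name}: total ${spend:,}  cost/transaction ${unit:.3f}")
# Total spend grows $100k -> $150k while the cost per
# transaction drops $0.050 -> $0.030.
```

Judged on the total bill alone, this business looks like it is losing control of cloud spending; judged on unit cost, it is getting steadily more efficient.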
At Cloudability we’re really proud to help manage over £5bn of cloud spending, saving companies like Uber, Cisco and Atlassian a lot of money along the way, and enabling GE’s CTO to understand which applications are driving costs. We’ve recently launched Cloudability in EMEA and ANZ; it’s an exciting time to be helping companies migrate cost effectively to the cloud by avoiding some nasty pitfalls.
It is becoming clearer than ever that consumer demand, tastes and habits continue to shape the arc of technology innovation. In fact, it’s easy to imagine that the next wave of products designed for the enterprise may reflect a mandate that businesses and their IT departments support their employees (a.k.a. consumers) across whatever kind of devices those employees are already using.
By Curtis Peterson, SVP Cloud Operations, RingCentral.
Still, despite the continuing pervasiveness of this consumer-first mind-set, there’s some new and compelling evidence that some of the most hyped—and overhyped—consumer technologies at last are on a promising path. But the catch? That path is actually in the enterprise. Let’s walk through three areas that look to have the most potential, and why.
5G is an exciting step forward because it eradicates the coverage gaps, latency issues and network density limitations that have plagued traditional wireless networks. Removing these barriers opens up new possibilities at the enterprise level for capabilities like real-time controls and instantaneous delivery of high-bandwidth content, such as augmented and virtual reality.
What impact will these new capabilities have? One likely result of real-time controls is that they will expand remote work beyond the traditional knowledge worker and into the realm of mechanical and electronic technicians. For example, a warehouse worker or HVAC technician no longer has to be on-site to move pallets or diagnose a broken compressor. On the content delivery side, companies in the hospitality, tourism and real estate spaces will have a new suite of rich media sales and marketing tools (VR tours of available properties, for example), and media/entertainment companies will be able to push out such high-bandwidth content faster and more easily.
Will the reality of 5G match the hype? I think it will, eventually, but right now the massive spend required by carriers—with uncertain upside for them—is a major barrier.
Just when it seemed like smartphones had nowhere else to go in terms of advanced functionality, Qualcomm introduced its latest Snapdragon chip, which is embedded with machine learning and neural processing.
This innovation will eventually bring AI-powered real-time communications directly to the phone, eliminating the need for a network connection to access contextual search, virtual assistant, and other AI-related capabilities. I’ve been excited thinking about how such capabilities will further streamline collaboration at the enterprise level.
Perhaps even more impactfully, AI-enabled smartphones (or other devices with these same chips) will allow enterprises across a wide swath of industries to shave costs in two key ways. First, by helping businesses create customer-facing mobile apps that capitalise on contextual intelligence and background analytics. Imagine, for example, the time and cost savings for an insurance company verifying and processing an insurance claim when they have the background analytics to prove the time and date of the incident, that the geo location wasn’t manipulated, and other key data points.
Second, by making it possible to create cheaper but more sophisticated wireless portable instruments — tools like digital thermometers, tyre pressure gauges, and other diagnostic devices. If you smarten up these tools by making AI local in them while keeping other specifications minimal, you bypass the need for costlier custom-built devices.
We’ve seen a lot of useless applications of IoT in the consumer space, particularly in home automation applications. But IoT is poised to deliver meaningful advancements in the industrial sector, particularly when it comes to systems for monitoring and controlling.
How? IoT devices collect an enormous amount of data—data that previously had to be imported and exported to and from closed loop industrial systems to analyse. With so much newly available IoT data driving insight into production and other processes, opportunities to squeeze out higher profits are much easier to spot. Gartner estimated that last year there were 6.4 billion IoT devices connected to the Internet and they estimate the number will increase to 20.8 billion by 2021; we are going to see an explosion in IoT’s impact on manufacturing. Pressure sensors and other factory-floor devices may not make for a glamorous story, but the gains to business will be real.
In the bigger picture, the enterprise inroads these three technologies are making are further evidence of the continuing democratisation of IT. Technological innovations continue to open up what were once advanced, costly and exclusive capabilities. And consumers, perceived as headaches by many enterprise IT departments for their role as purveyors of stealth IT, may have been leading the enterprise along a path to higher productivity all along.
Today’s cyber-criminals are constantly adopting new and increasingly innovative tactics in their efforts to gain access to sensitive business information. In this age of increased mobile and remote working – where employees today no longer hope, but expect flexible options that cater to their work-life balance – this represents a significant device-centred security risk, which often serves as the hacker’s gateway to such data.
By Neil Bramley, B2B Client Solutions Business Unit Director, Toshiba Northern Europe.

People are the weakest link in any security chain, and an employee on the move is harder to control – and therefore more vulnerable to attack. They are more likely to ignore IT protocol when accessing sensitive files away from the office, and the likelihood of losing a device on the move is also increased. These heightened risks of mobile working are only amplified by the vast and ever-increasing swathes of business-critical and consumer data today’s businesses manage every day – allowing such information to fall into the wrong hands can have major ramifications for any business.
Device security therefore needs to be a priority for business, but without compromising on the capabilities required to handle the demands of today’s mobile worker. Hardware must be portable and easy-to-use, powerful enough to run multiple business applications, while also remaining robustly secure as a first line of defence against cyber criminals. Business devices which have built-in security tools such as biometric fingerprint sensors and IR cameras can act as an effective barrier to cyberattacks, hindering any third-party interference. Features such as encryption can also offer a deeper layer of protection, shielding areas of the hard drive for optimal peace of mind. It’s essential to note that business devices are carefully designed with each of these key considerations in mind, whereas consumer products are often lacking in these business-specific security capabilities.
As the risks and associated costs of a data breach increase, however, businesses must evolve their IT security infrastructure. A second, more judicious approach to device security is to consider solutions which shift sensitive data away from the device itself and centralise permissions and data access management. Mobile zero client solutions offer an ultra-secure way to resolve device security risks. Such solutions, like Toshiba Mobile Zero Client, are designed to remove the threats of mobile working while ensuring employees retain the functionality to work productively. With the device effectively working as a sophisticated mobile terminal to the virtual desktop system – and as a result continuing to offer many of the wider built-in benefits of a mobile laptop – mobile zero clients simply take the data out of the employee’s hands and place it securely in the cloud via a Virtual Desktop Infrastructure. This removes the threat of malware being stored on devices, as well as nullifying concerns about data being compromised should a device be lost or stolen.
Such solutions become even more important as European businesses strive to become compliant with the new General Data Protection Regulation (GDPR) coming into force next year. Never has it been more important for European organisations to identify ways to best manage their data, especially given that many companies are still worryingly under-prepared for what will be the most significant shake-up of data protection law for 20 years. According to research undertaken by Gartner, over 50 per cent of companies affected by the new regulation will not be in full compliance by the time it comes into effect on May 25th 2018.
A serious data breach may already result in serious ramifications in terms of reputational damage and loss of custom, but the GDPR adds further stringency to the ways in which businesses must manage sensitive data or risk facing penalties, fines and even legal action. With such added responsibility and culpability for the safe storage of data comes greater pressure for senior IT staff, ensuring that security must therefore be the number one priority for now and the future.
In 2017, McKinsey conducted a study on productivity gains driven by technology transformations such as the steam engine, early robotics and advances in information technology. The study finds manufacturing on the brink of the next industrial automation revolution, with unprecedented annual productivity growth of between 0.8 and 1.4% in the decades ahead.
By Mike Brooks, Senior Business Consultant, AspenTech and Former Mtell President & Chief Operating Officer.
Advances in robotics, artificial intelligence and machine learning will match or outperform humans in a range of work activities involving fast, precise, repetitive action and cognitive capabilities. To remain competitive, complex industries need to deploy industrial automation more than ever, as intense global competition drives process industries to increase efficiency through reduced operating costs, increased production, higher quality and lower inventories. The highest priority should be to eliminate production losses caused by unplanned downtime and address a $20 billion a year problem for the process industries. As such, increased asset utilisation will bring the single biggest financial improvement in production operations.
For roughly the last 50 years, maintenance practices have evolved to improve equipment reliability and availability. Maintenance strategy has progressed through run-to-failure, calendar-based, usage-based, condition-based and reliability-centered maintenance. Outcomes have improved, but equipment continues to fail. Why? Despite each successive technique becoming more complex, none addresses the main issue. Industry analysts, such as ARC Advisory Group, have pointed out that more than 80% of all equipment failures are caused by operating equipment outside its stipulated design and safety limits. Current practices focus on only 20% of the issues involved: they cannot detect problems early enough and lack insight into the reasons behind seemingly “random” failures. Manufacturers need solutions that sit at the confluence of maintenance and operations activities to address 100% of failures.
Significantly, these solutions need to offer failure prevention using data-driven truths rather than guesstimates. The combination of mechanical and process-induced breakdowns costs up to 10 per cent of the worldwide $1.4 trillion manufacturing market, according to a 2012 report from the McKinsey Global Institute. While companies have spent millions trying to address this issue and avoid unplanned downtime, until now they have only been able to address wear- and age-based failures. This is where machine learning software, by casting a “wider net” around machines, can capture process-induced failures.
To avoid unplanned downtime, companies must identify and respond effectively to early indicators of impending failures. Traditional maintenance practices do not predict failures caused by process excursions. That would require a unique technology approach combining machines and processes; particularly for asset-intensive industries such as manufacturing and transportation. With the right technology in place, organisations can sense the patterns of looming degradation, with sufficient warning to prevent failures and change outcomes.
Advanced machine learning software has already demonstrated incredible successes in the early identification of equipment failure. Such software is near-autonomous and learns behavioural patterns from the streams of digital data that are produced by sensors on and around machines and processes. Automatically, and requiring minimal resources, this advanced technology constantly learns and adapts to new signal patterns when operating conditions change. Failure signatures learned on one machine “inoculate” that machine so that the same condition will not recur. Additionally, the learned signatures are transferred to similar machines to prevent them being affected by the same degrading conditions.
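The software described here is proprietary, but the underlying idea of learning a signal’s normal behaviour and flagging deviations early can be sketched with a simple rolling z-score. The sensor data, window size and threshold below are all invented; real systems learn far richer, multivariate patterns.

```python
# Toy early-warning detector: flag readings that sit far outside the trailing
# window's normal range. This only illustrates the "learn what normal looks
# like, flag what doesn't" idea behind the products described in the article.
import statistics

def anomalies(readings, window=20, z_threshold=3.0):
    """Return indices whose reading is more than z_threshold standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        std = statistics.pstdev(history)
        if std > 0 and abs(readings[i] - mean) / std > z_threshold:
            flagged.append(i)
    return flagged

signal = [1.0, 1.1, 0.9, 1.0, 1.05] * 6  # 30 "healthy" vibration readings
signal.append(5.0)                       # sudden excursion at index 30
print(anomalies(signal))                 # flags the excursion: [30]
```

A learned failure signature plays the role the threshold plays here: once the abnormal pattern is known, the same check can be applied to every similar machine in the fleet.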
For example, a North American energy company was losing up to a million dollars in repairs and lost revenue from repeat breakdowns of electric submersible pumps. The advanced machine learning application learned the behaviour of 18 pumps. The software detected an early casing leak on one pump that caused an environmental incident. Applying the failure signature to the rest of the pumps provided an early warning, allowing early action to avoid a repeat incident, thus preventing a major problem.
In another case, a leading railway freight firm operating across 23 states in the US used machine learning to address perennial locomotive engine failures costing millions in repairs, fines and lost revenue. The machine learning application operates in-line, in real-time and was deployed on a very large fleet of locomotives examining lube oil data for extremely early indicators of engine failure. The application even detected a degradation signature while the engine passed a low-pressure test. Diverting the locomotive for immediate service “saved the company millions of dollars in costly downtime and fines.”
With the emergence of the IIoT, big data, machine learning and other analytics comes a huge market opportunity to better address reliability and availability. Data science methods deliver data-driven truths without opinions; applications can use the data to better understand asset and process health, as well as the effect on customer deliverables, to inform maintenance decisions. A new generation of analytical capabilities with deeper insights is mandatory. Operators need accurate, predictive solutions that provide much earlier warnings of impending trouble, and prescriptive guidance to avoid or mitigate the forecast outcomes. As such, viable solution providers will be selected on their ability to deliver accurate yet early predictive and prescriptive capabilities. However, data science alone cannot solve the entire problem: providers also need to bring deep domain and process expertise, as well as experience and knowledge of design, production and maintenance systems.
McKinsey sees entirely new and more affordable manufacturing analytics methods and solutions emerging with Industry 4.0. Both asset and process analytics are jointly responsible for creating a multi-faceted asset view to enable fact-based decision making that considers a broader set of trade-offs.
Industry 4.0 also brings new mechanisms based on big data and machine learning. Machine learning can crunch data lakes to discern patterns and predict future outcomes with great certainty, yet it cannot solve everything. A combination of models and machine learning can detect and avoid risky process operating conditions: the model can explain explicit conditions at any time, while machine learning calibrates and tunes it automatically, yielding a timely, accurate view of process status with simpler calibration.
ARC Advisory Group has further asserted that APM 2.0 incorporates new analytics and data from control systems and asset management applications, providing new opportunities to optimise availability and operational performance. APM 2.0 strategies include sharing information from other systems, such as manufacturing execution systems, to deliver a comprehensive view and analysis of production processes and asset performance. Leading edge analytics view data from all systems to assess patterns of normal and abnormal behaviour, which helps to predict future conditions and their inherent causes. Such activities promote deeper collaboration between operations and maintenance staff via intertwined decision making, driven by shared goals. The combination of data supports shared and improved understanding of risk, which provides the ability to balance operational constraints and efficiency opportunities to improve return on assets.
Companies can no longer rely solely on traditional maintenance practices but must also incorporate operational behaviours in deploying data-driven solutions. Today’s imperative means extracting additional value from existing assets and implementing an advanced machine learning programme to deliver fast improvements. With the right software solutions, predictive technologies will detect the conditions that limit asset effectiveness, while providing prescriptive guidance that assures firms remain profitable and improve margins.
A few weeks ago, Durham Chief Constable Mike Barton suggested that we need security ratings for all Internet-connected home devices. This kind of re-evaluation of IoT security is exactly what we need. The recent attacks on Internet of Things (IoT) devices have been a lesson to IT that it’s not enough to rely on the inherent protection from device manufacturers.
By Santhosh Nair, VP of IoT, MobileIron.

Take the Mirai botnet attack, for example. This took down websites with thousands of users, including Netflix and Twitter. Considering the possible impact such an attack could have, this one was relatively harmless. IoT has real-world applications in a number of industries, from healthcare to energy, manufacturing to automotive. IoT initiatives are heavily reliant on cloud, so such malware threatens both the commercial and consumer services that depend on cloud services. These devices control the physical world, so it’s not a stretch to say that IoT security could be a matter of life and death.
Security ratings for IoT devices are a step in the right direction, but they’re not enough on their own. Companies with the strongest IoT strategies will ensure that implementations are secure by design; protection can’t just be bolted on as an afterthought.
IoT is set to transform everyday life, from inside homes to right across society, with research firms predicting that, by 2020, there will be 50 billion IoT-enabled devices worldwide.
The power of IoT is that it connects machines that produce data, and that data can be turned into new information and real value for organisations. While deployments may still be on the horizon, building a security strategy now can help guarantee that the business value of IoT is delivered while any security risks are mitigated.
On the whole, IoT device manufacturers have failed to prioritise security. Manufacturers often aren’t equipped to guarantee security and they are under constant margin pressure on their equipment. Sufficient IoT protection requires considerable investment and a specific set of skills and knowledge. As things stand, the inherent security provided by many device manufacturers is simply not enough.
A recent HP IoT Study found that around 80 percent of IoT devices lack password complexity, 70 percent don’t encrypt communications and 60 percent have insecure user interfaces. This, combined with the fact that the majority of these devices lack the ability to upgrade within a reasonable time frame, is very concerning.
IoT adoption comes with its fair share of hurdles but, by far, fears around security are the biggest obstacle. One of the most concerning aspects of IoT is how many businesses are pursuing the opportunities of connected devices without considering the security implications. This is an area where we’d like to see IT departments taking a much stronger lead, but instead we are seeing IoT initiatives proceed with minimal oversight and input from internal security specialists. The result is a new variant of the “Shadow IT” problem, in which organisations unwittingly pursue policies and initiatives that are antithetical to their own corporate security policies.
Let’s think about healthcare. With the rise of digital health initiatives and the influx of med tech, this is an industry shaking itself up, looking for more efficient ways of working and, ultimately, better patient outcomes. It also highlights the terrifying potential of IoT-targeted attacks. Traditional malware compromises our networks, devices and data; when it targets devices such as Internet-connected respirators or insulin pumps, it genuinely becomes an issue of life and death.
Add to that the more traditional effects of malware, such as doctors failing to receive critical information, causing crippling delays to patient care. These healthcare examples show how IoT vulnerabilities can send enterprises into disarray. The damage could be even more severe in the business and industrial sphere.
Security has to be top of mind from the beginning of any IoT implementation project. When designing enterprise IoT security frameworks, businesses should build protection into the device lifecycle. Here is a 7-step device lifecycle, including examples of the types of protections to consider:
Ensuring security at each of these stages demands a cloud that is able to support the scale of the device deployment alongside investment in embedded computing and operating systems. As we know, the technology industry is defined by its ability to constantly adapt and innovate, at an unprecedented speed.
The IoT security challenge we face is difficult, but for businesses to go beyond small deployments built one-off for specific use cases, it’s essential for IT to look to solutions that are built for scale. Only then will we see the potential of IoT fulfilled and the business-driving benefits of this innovation realised.