The recent DCD London event was a great opportunity to take the pulse of not just the data centre industry but the wider IT community as well – as one vendor put it: ‘The lines between IT and data centre infrastructure are becoming increasingly blurred’. If there was one message to take away from the event, it would be that, no matter where you work, artificial intelligence is hurtling towards you like a barely controlled juggernaut full of Christmas goodies. There’s every chance that the vehicle and the goodies will arrive in one piece and provide you with hours, months and years of entertainment and rewards, but there’s just the slim possibility that, if the driver becomes too carefree and crashes before arrival, or the waiting customers don’t get out of the way when the lorry parks up, disaster could ensue.
To put it another way, the promise of artificial intelligence, machine learning, the Internet of Things and other intelligent technologies is huge. But don’t get carried away with the hype. Companies that have got to grips with this stuff are finding that there are levels of complexity (both human and technological) that need to be understood and addressed before meaningful and sustainable AI solutions can deliver major, ongoing benefits.
Folks who remember the virtualisation wave will recall that the hype was replaced by the slowly dawning reality that it was complicated to implement successfully, but worth the effort. Virtualisation took time to reach maturity.
Now, of course, virtualisation is the platform on which the software-defined era has been built, a foundation for the Cloud, and the enabler of the orchestration that is, perhaps, the crucial part of intelligent automation.
In today’s digital, ‘want everything now’ world, the suggestion that intelligent automation requires patience, skill and time to implement properly might not be that welcome. Nevertheless, if my feedback from DCD is accurate, almost everyone who’s looked at using AI – vendors and end users alike – is excited by the possibilities, then realises that an AI project throws up as many questions as answers, yet remains excited, because overcoming these challenges will ensure that the end product is a robust, comprehensive AI solution, not a half-baked one that (as with early virtualisation projects) disappoints more than it inspires.
The first question to ask is not ‘How much money can I save with AI?’ but ‘How can AI help improve my customer service?’ Along the way, you may well find that you do save money, but be open-minded as to where the AI journey might take you.
Workday, Inc. has released the results of a new European study, ‘Digital Leaders: Transforming Your Business’, an IDC white paper sponsored by Workday. It reveals that a ‘digital disconnect’ is emerging in many organisations between the expectations of digital leaders and the ability of core business systems to meet their needs. Challenged to drive digital transformation (DX) at scale, digital leaders see their finance and HR systems as merely ‘adequate’ for today’s business requirements, lacking the flexibility and sophistication to deliver on the broader DX mandate being driven by many CEOs.
Leading IT market research and advisory firm IDC conducted the survey of 400 digital leaders across France, Germany, the Netherlands, Sweden and the UK in 2018, sponsored by Workday. It focused on how the IDC-termed ‘digital deadlock’ relates to finance and HR systems, and on the impact of DX on core business systems as organisations move beyond DX experiments and pilots to operationalise their DX strategy and embed it within the business.
The digital leaders surveyed identified the following three critical pain points, with digital leaders in the UK admitting that in their organisations:
Other key research findings included:
As part of the white paper, IDC studied a number of companies deemed ‘best-in-class’ in digital transformation in Europe. IDC suggested three areas for finance, HR and IT to work alongside digital leaders to help ensure this potential ‘digital disconnect’ does not materialise in the long run:
76% viewed this as very or extremely important. There is an understanding that to do this the ability to reconfigure finance and HR systems in a dynamic fashion is critical.
Best-in-class organisations are making significant investments in finance and HR systems. In fact, 88% of these organisations are investing in advanced finance systems, and 86% in advanced HR systems, to support the digital transformation journey.
“Through the research we have identified three areas of best practice that can drive successful digital transformation, with DX placing new and challenging requirements on finance and HR,” said Alexandros Stratis, co-author and senior research analyst, IDC. “Firstly, the importance of a digital ‘dream team’ that cuts across finance, IT, HR and the broader business; secondly, the need to be able to dynamically reconfigure finance and HR systems; and thirdly, that best-in-class organisations who are leading the way are already making significant investments to modernise their finance and HR systems.”

“The time to disrupt is now,” said Carolyn Horne, regional vice president of UK, Ireland and South Africa, Workday. “DX strategies are being rolled out by organisations as forward-thinking CEOs realise digital transformation across the business is critical if they are to survive and thrive. Technology is playing a critical role in this change, but this change is not just about how a company innovates around new products and services, or changes the customer experience, the whole organisation needs to adapt and modernise. This must include finance and HR, otherwise the agility, flexibility and insights needed to be successful in the modern business world cannot be achieved.”
DataCore Software has published the results of its 7th consecutive market survey, “The State of Software-Defined, Hyperconverged and Cloud Storage,” which explored the experiences of 400 IT professionals who are currently using or evaluating software-defined storage, hyperconverged and cloud storage to solve critical data storage challenges. The results yield surprising insights from a cross-section of industries over a range of workloads, including the fact that storage availability and avoiding vendor lock-in remain top concerns for IT professionals, and illustrate the status of the industry on its journey to a software-defined future.
The report reveals what respondents view as the primary business drivers for implementing software-defined, hyperconverged, public cloud and hybrid cloud storage. For example, the top results for software-defined storage include: automate frequent or complex storage operations; simplify management of different types of storage; and extend the life of existing storage assets. This portrays the market’s recognition of the economic advantages of software-defined storage and its power to maximise IT infrastructure performance, availability and utilisation.
The report also highlights the capabilities that users would like from their storage infrastructure. The top capabilities identified were business continuity/high availability (which can be achieved via metro clustering, synchronous data mirroring, and other architectures) at 74%; disaster recovery (from remote site or public cloud) at 73%; and enabling storage capacity expansion without disruption at 72%.
Business continuity was found to be a key storage concern, whether on-premises or in the cloud. It is first on the list for the primary capability that respondents would like from their storage infrastructure, and was also number one in the previous DataCore market survey. Additionally, business continuity is the top business driver for those deploying public and hybrid cloud storage (46% and 41%), and similarly ranks high in the complete results for software-defined and hyperconverged storage business drivers, coming in at 45% and 43% respectively.
Surprises, False Starts and Technology Disappointments
The biggest surprise reported was that there is still too much vendor lock-in within storage, with 42% of respondents noting this as their top concern. Software-defined storage is being used to solve this (management of heterogeneous environments) as well as for automation (lowering costs, fewer migrations and less work provisioning). Therefore, it should not be a surprise that the results also showed adoption of software-defined storage is about double that of hyperconverged (37% vs. 21%), with 56% of respondents also strongly considering or planning to consider software-defined storage in the next 12 months.
The survey further revealed the reality of hyperconverged deployments. While hyperconverged continues to make inroads, some respondents said they are ruling it out because it does not integrate with existing systems (creating silos), cannot scale compute and storage independently, and is too expensive. Hybrid-converged technology is a good option for IT to consider in these cases.
Additionally, while all-flash arrays are often viewed as the simplest way to add performance, more than 17% of survey respondents found that adding flash failed to deliver on the performance promise, most likely because flash does not solve the I/O bottlenecks pervasive in most enterprises. Technologies such as Parallel I/O provide an effective solution for this.
With regard to emerging technologies, many enterprises are exploring containers; however, actual adoption is slow, primarily due to: lack of data management and storage tools; application performance slowdowns, especially for databases and other tier-1 applications; and lack of ways to deal with applications such as databases that need persistent storage. NVMe is also still struggling to become mainstream. About half of respondents have not adopted NVMe at all. Thirty percent of survey respondents report that 10% or more of their storage is NVMe, and more than 7% report that more than half of their storage is NVMe.
While adoption is still slow, enthusiasm for the technology does appear strong. Technologies such as software-defined storage with Gen6 HBA support and dynamic auto-tiering with NVMe on a DAS can help simplify and accelerate adoption.
“We see enterprise IT maturing in its use of software-defined technologies as the foundation for the modern data centre,” said Gerardo A. Dada, chief marketing officer at DataCore. “DataCore is delighted to be a catalyst that helps IT meet business expectations of availability and performance while reducing costs, as well as enabling users to enjoy architectural flexibility and vendor freedom.”
Additional highlights of “The State of Software-Defined, Hyperconverged and Cloud Storage” market survey include:
Worldwide spending on security-related hardware, software, and services is forecast to reach $133.7 billion in 2022, according to a new update to the Worldwide Semiannual Security Spending Guide from International Data Corporation (IDC). Although spending growth is expected to gradually slow over the 2017-2022 forecast period, the market will still deliver a compound annual growth rate (CAGR) of 9.9%. As a result, security spending in 2022 will be 45% greater than the $92.1 billion forecast for 2018.
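For readers who want to sanity-check IDC’s growth figures, both numbers follow from the standard compound-annual-growth-rate formula. A minimal sketch, using only the 2018 and 2022 totals quoted above (note that IDC states its 9.9% CAGR over the 2017-2022 period, so the four-year figure computed here comes out fractionally lower):

```python
# Sanity-check the quoted growth figures using the standard CAGR formula:
#   CAGR = (end / start) ** (1 / years) - 1
start_2018 = 92.1   # forecast security spend in 2018, $ billions
end_2022 = 133.7    # forecast security spend in 2022, $ billions
years = 4           # 2018 -> 2022

cagr = (end_2022 / start_2018) ** (1 / years) - 1
growth = end_2022 / start_2018 - 1

print(f"Implied 2018-2022 CAGR: {cagr:.1%}")   # ~9.8%, close to the quoted 9.9%
print(f"Total growth 2018-2022: {growth:.0%}") # ~45%, matching the report
```

The same check can be applied to any of the per-segment or per-country CAGRs quoted below.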
"Privacy has grabbed the attention of Boards of Directors as regions look to implement privacy regulation and compliance standards similar to GDPR. Frankly, privacy is the new buzzword and the potential impact is very real. The result is that demand to comply with such standards will continue to buoy security spending for the foreseeable future," said Frank Dickson, research vice president, Security Products.
Security-related services will be both the largest ($40.2 billion in 2018) and the fastest growing (11.9% CAGR) category of worldwide security spending. Managed security services will be the largest segment within the services category, delivering nearly 50% of the category total in 2022. Integration services and consulting services will be responsible for most of the remainder. Security software is the second-largest category with spending expected to total $34.4 billion in 2018. Endpoint security software will be the largest software segment throughout the forecast period, followed by identity and access management software and security and vulnerability management software. The latter will be the fastest growing software segment with a CAGR of 10.7%. Hardware spending will be led by unified threat management solutions, followed by firewall and content management.
Banking will be the industry making the largest investment in security solutions, growing from $10.5 billion in 2018 to $16.0 billion in 2022. Security-related services, led by managed security services, will account for more than half of the industry's spend throughout the forecast. The second and third largest industries, discrete manufacturing and federal/central government ($8.9 billion and $7.8 billion in 2018, respectively), will follow a similar pattern with services representing roughly half of each industry's total spending. The industries that will see the fastest growth in security spending will be telecommunications (13.1% CAGR), state/local government (12.3% CAGR), and the resource industry (11.8% CAGR).
"Security remains an investment priority in every industry as companies seek to protect themselves from large scale cyber attacks and to meet expanding regulatory requirements," said Eileen Smith, program director, Customer Insights and Analysis. "While security services are an important part of this investment strategy, companies are also investing in the infrastructure and applications needed to meet the challenges of a steadily evolving threat environment."
The United States will be the largest geographic market for security solutions with total spending of $39.3 billion this year. The United Kingdom will be the second largest geographic market in 2018 at $6.1 billion, followed by China ($5.6 billion), Japan ($5.1 billion), and Germany ($4.6 billion). The leading industries for security spending in the U.S. will be discrete manufacturing and the federal/central government. In the UK, banking and discrete manufacturing will deliver the largest security spending, while telecommunications and banking will be the leading industries in China. China will see the strongest spending growth with a five-year CAGR of 26.6%. Malaysia and Singapore will be the second and third fastest growing regions with CAGRs of 21.1% and 18.2%, respectively.
From a company size perspective, large and very large businesses (those with more than 500 employees) will be responsible for nearly two thirds of all security-related spending in 2018. Large (500-999 employees) and medium businesses (100-499 employees) will see the strongest spending growth over the forecast, with CAGRs of 11.8% and 10.0% respectively. However, very large businesses (more than 1,000 employees) will grow nearly as fast with a five-year CAGR of 10.1%. Small businesses (10-99 employees) will also experience solid growth (8.9% CAGR) with spending expected to be more than $8.0 billion in 2018.
CA Veracode has released the latest State of Software Security (SOSS) report. The study includes promising signs that DevSecOps is facilitating better security and efficiency, and provides the industry with the company’s first look at flaw persistence analysis, which measures the longevity of flaws after first discovery.
In every industry, organisations are dealing with a massive volume of open flaws to address, and they are showing improvement in taking action against what they find. According to the report, 69 percent of flaws discovered were closed through remediation or mitigation, an increase of nearly 12 percent since the previous report. This shows organisations are gaining prowess in closing newly discovered vulnerabilities, which hackers often seek to exploit.
Despite this progress, the new SOSS report also shows that the number of vulnerable apps remains staggeringly high, and open source components continue to present significant risks to businesses. More than 85 percent of all applications contain at least one vulnerability following the first scan, and more than 13 percent of applications contain at least one very high severity flaw. In addition, organisations’ latest scan results indicate that one in three applications were vulnerable to attack through high or very high severity flaws.
An examination of fix rates across 2 trillion lines of code shows that companies face extended application risk exposure due to persisting flaws:
“Security-minded organisations have recognised that embedding security design and testing directly into the continuous software delivery cycle is essential to achieving the DevSecOps principles of balance of speed, flexibility and risk management. Until now, it’s been challenging to pinpoint the benefits of this approach, but this latest State of Software Security report provides hard evidence that organisations with more frequent scans are fixing flaws more quickly,” said Chris Eng, Vice President of Research, CA Veracode. “These incremental improvements amount over time to a significant advantage in competitiveness in the market and a huge drop in risk associated with vulnerabilities.”
Regional Differences in Flaw Persistence
While data from U.S. organisations dominate the sample size, this year’s report offers insights into differences by region in how quickly vulnerabilities are being addressed.
The UK was among the strongest performing regions: businesses here closed the first 25 percent of their flaws in just 11 days, second fastest among all regions, closed 50 percent of flaws in 72 days and closed 75 percent of flaws in 304 days. These marks outpaced averages across regions. Companies in Asia Pacific (APAC) are the quickest to remediate, closing out 25 percent of their flaws in about 8 days, followed by 22 days for the Americas and 28 days for those in Europe and the Middle East (EMEA). However, companies in the U.S. and the Americas caught up, closing out 75 percent of flaws by 413 days, far ahead of those in APAC and EMEA. In fact, it took more than double the average time for EMEA organisations to close out three-quarters of their open vulnerabilities. Troublingly, 25 percent of vulnerabilities in organisations in EMEA persisted more than two-and-a-half years after discovery.
Data Supports DevSecOps Practices
In its third consecutive year documenting DevSecOps practices, the SOSS analysis shows a strong correlation between high rates of security scanning and lower long-term application risks, presenting significant evidence for the efficacy of DevSecOps. CA Veracode’s data on flaw persistence shows that organisations with established DevSecOps programs and practices greatly outperform their peers in how quickly they address flaws. The most active DevSecOps programs fix flaws more than 11.5 times faster than the typical organisation, due to ongoing security checks during continuous delivery of software builds, largely the result of increased code scanning. The data shows a very strong correlation between how many times a year an organisation scans and how quickly they address their vulnerabilities.
Open Source Components Continue to Thwart Enterprises
In prior SOSS reports, data has shown that vulnerable open source software components run rampant within most software. The current SOSS report found that most applications were still rife with flawed components, though there has been some improvement on the Java front. Whereas last year about 88 percent of Java applications had at least one vulnerability in a component, it fell to just over 77 percent in this report. As organisations tackle bug-ridden components, they should consider not just the open flaws within libraries and frameworks, but also how they are using those components. By understanding not just the status of the component, but whether or not a vulnerable method is being called, organisations can pinpoint their component risk and prioritise fixes based on the riskiest uses of components.
72 per cent recognise the benefits of analytics but only 39 per cent use it to inform strategy.
Nearly three-quarters of organisations (72 per cent) claim that analytics helps them generate valuable insight and 60 per cent say their analytics resources have made them more innovative, according to research commissioned by SAS, the leader in analytics.
That is despite only four in 10 (39 per cent) saying that analytics is core to their business strategy. A third of respondents (35 per cent) report that it is used for tactical projects only. Despite acknowledged value – and most (65 per cent) can quantify this – businesses are not getting the most out of their analytics investments.
However, they are now pursuing rapid analytical insight as a priority as they push into emerging technologies like artificial intelligence (AI) and the internet of things (IoT).
The research, “Here and Now: The need for an analytics platform”, surveyed analytics experts, and IT and line-of-business professionals in a wide range of industries around the world. It found that analytics is changing the way companies do business. This does not just apply to day-to-day operations, as it’s also driving innovation: more than a quarter (27 per cent) say analytics has helped launch new business models. There are many identified benefits of an analytics platform, the most common being less time spent on data preparation (46 per cent), smarter and more confident decision-making (42 per cent) and faster time-to-insights (41 per cent).
“The findings show a strong desire in the business community to boost competitive insight and efficiency using analytics,” said Adrian Jones, Director of SAS’ Global Technology Practice. “The majority recognise that effective analytics could benefit their organisations, particularly as they develop their ability to deploy cutting-edge AI. But the number of those effectively using analytics strategically across the organisation could be much higher.”
The survey underscored a lack of alignment in the skills and leadership needed to maximise the potential of analytics. Many companies struggle to manage multiple analytics tools and data management processes.
“If they are to achieve success, organisations must put analytics at the heart of strategic planning and empower analytics resources to drive innovation using a unified analytics platform,” said Jones.
Views differ on the role of an analytics platform: most (61 per cent) believe it’s to extract insight and value from data, but many are split on its other purposes or benefits, such as better governance over data, predictive models and open source technology. Fifty-nine per cent believe another role of an analytics platform is to have an integrated or centralised data framework, while 43 per cent believe it’s to provide modelling and algorithms for AI and machine learning.
The responses suggest companies know analytics can help them, but they lack a clear and common understanding of the benefits of using a platform approach across the enterprise and the analytics lifecycle. It would explain why few organisations have a suitable platform in place according to results from SAS’ Enterprise AI Promise Study announced at Analytics Experience Amsterdam last year. This revealed only a quarter (24 per cent) of businesses felt they had the right infrastructure in place for AI, while the majority (53 per cent) felt they either needed to update and adapt their current platform or had no specific platform in place to address AI.
Despite the wide variety of uses for analytics, confidence in the end result is high. Respondents on average have 70 per cent confidence that they can derive business value from their data through analytics. Those that invest in data science talent are more likely to see ROI: confidence rises to 72 per cent for those in analytics roles but drops to 65 per cent for standard IT teams.
The same is true when considering the future. Analytics teams are more confident (66 per cent) of their ability to scale to meet future analytics workloads, compared to those in standard IT roles (59 per cent).
“When we speak with business leaders who are scaling up to use analytics and AI strategically, challenges they commonly identify are the need for an enterprise analytics platform and access to talent with data science and analytics skills,” said Randy Guard, Executive Vice President and Chief Marketing Officer at SAS.
“With AI now top-of-mind for many organisations, it’s more important than ever to have a powerful, streamlined analytics capability,” said Guard. “AI can only be as effective as the analytics behind it, and as analytical workloads increase, a comprehensive platform strategy is the best way to ensure success at scale.”
Today's announcement was made at the Analytics Experience conference in Milan, a business technology conference presented by SAS that brings together thousands of attendees on-site and online to share ideas on critical business issues.
As companies aggressively invest in a future driven by intelligence – rather than just more analytics – business and IT decision-makers are increasingly frustrated by the complexity, bottlenecks and uncertainty of today’s enterprise analytics, according to a survey of senior leaders at enterprise-sized organizations from around the world.
The survey, conducted by independent technology market research firm Vanson Bourne on behalf of Teradata, found significant roadblocks for enterprises looking to use intelligence across the organization. Many senior leaders agree that, while they are buying analytics, those investments aren’t necessarily resulting in the answers they are seeking. They cited three foundational challenges to making analytics more pervasive in their organization:
1) Analytics technology is too complex: Just under three quarters (74 percent) of senior leaders said their organization’s analytics technology is complex, with 42 percent of those saying analytics is not easy for their employees to use and understand.
2) Users don’t have access to all the data they need: 79 percent of respondents said they need access to more company data to do their job effectively.
3) “Unicorn” data scientists are a bottleneck: Only 25 percent said that, within their global enterprise, business decision-makers have the skills to access and use intelligence from analytics without the need for data scientists.
“The largest and most well-known companies in the world have collectively invested billions of dollars in analytics, but all that time and money spent has been met with mediocre results,” said Martyn Etherington, Teradata’s Chief Marketing Officer. “Companies want pervasive data intelligence, encompassing all the data, all the time, to find answers to their toughest challenges. They are not achieving this today, thus the frustration with analytics is palpable.”
Overly Complex Analytics Technology
The explosion of technologies for collecting, storing and analyzing data in recent years has added a significant level of often paralyzing complexity. The primary reason, cited in the survey, is that technology vendors generally don’t spend enough time making their products easy for all employees to use and understand; this problem is further exacerbated by the recent surge and adoption of open source tools.
· About three quarters (74 percent) of respondents whose organization currently invests in analytics said that the analytics technology is complex.
· Nearly one out of three (31 percent) say that not being able to use analytics across the whole business is a negative impact of this complexity.
· Nearly half (46 percent) say that analytics isn’t really driving the business because there are too many questions and not enough answers.
· Over half (53 percent) agree that their organization is actually overburdened by the complexity of analytics.
· One of the main drivers of this complexity is that the technology isn’t easy for all employees to use/understand (42 percent).
Limited Access to Data
The survey further found that users need access to more data to do their jobs effectively. Decision-makers and users understand that more data often leads to better decisions, but too often, a lack of access to all the necessary data is a significant limiting factor for analytics success. According to the survey, decision-makers are missing nearly a third of the information they need to make informed decisions, on average – an unacceptable gap that can mean the difference between market leadership and failure.
· 79 percent of senior leaders said they need access to more company data to do their job effectively.
  o On average, respondents said they are missing nearly a third (28 percent) of the data they need to do their job effectively.
· 81 percent agree that they would like analytics to be more pervasive in their organization.
· More than half (54 percent) of respondents said their organization’s IT department is using analytics, compared to under a quarter (23 percent) who said that the C-suite and board level are doing so.
Not Enough Data Scientists
Finally, “unicorn” data scientists remain a bottleneck, preventing pervasive intelligence across the organization. Respondents see this as an issue and connect the problem to the challenge of using complex technologies. To combat it, the vast majority say that they are investing, or plan to invest, in easier-to-use technology, as well as in training to enhance the skills of users.
· Only 25 percent of companies said their business decision-makers have the skills to access and use intelligence from analytics without the need for data scientists.
· Nearly two thirds (63 percent) of respondents from organizations that currently invest in analytics agreed that it is difficult for non-analytics workers to consume analytics within their organization.
· In 75 percent of respondents’ companies, data scientists are needed to help business decision-makers extract intelligence from analytics.
· To reduce this over-reliance on data scientists, 94 percent of respondents’ businesses where data scientists are currently needed are investing, or plan to invest, in training to enhance the skills of users, while 91 percent are investing, or plan to invest, in easier-to-use technology.
Automation enables agility, increases speed, and drives top-line revenue while freeing up resources and budgets to focus on more strategic tasks
CA Technologies has published the results of its landmark research conducted by analyst firm Enterprise Management Associates (EMA), “The State of Automation”. The study reveals the global state of automation maturity across organisations, highlights the increasing adoption of automation to drive business success, and demonstrates that automation is becoming the backbone of modern business. Organisations that do not embrace modern business automation will flounder and struggle to survive.
Automation drives agility, speed & revenue
The report illustrates automation’s impact on businesses’ strategies. These results provide business decision makers with a real-world understanding of the rate at which their industries are becoming automated, and its overall effect on productivity and revenue growth.
According to the report:
The successful implementation of automation also varies widely across industries, with retail showing the highest level of maturity (70 percent) and manufacturing the least mature (35 percent).
Anticipating Automation’s Future with Artificial Intelligence and Machine Learning
Businesses believe that artificial intelligence (AI) will only further increase automation’s value to their processes:
“As companies continue to embrace AI and automation, it’s imperative that different industries and departments understand their respective priorities and bottlenecks in adopting and implementing these technologies,” said Dan Twing, president and COO at EMA.
Additional key findings from the report include:
The research also indicates that many organisations have struggled with a large number of task-specific tools, while many others can draw on the experience of those who have gone before them as they start down the path to automation. Participants stated that they are looking to broader, multipurpose automation tools, such as workload automation (WLA) on the IT side of the house, or robotic process automation (RPA) for business processes. Both approaches are broad, and while WLA is generally thought of as an IT operations tool, the research demonstrates that it is now increasingly being applied to business processes.
The RPA Conundrum
According to the research, many participants are using RPA on a limited basis for some IT process automation. While 88 percent of respondents have deployed RPA, or plan to deploy it, usage is still very low, with most organisations having fewer than 100 bots. Thirty-five percent of respondents believe RPA will become more important as it becomes AI-enabled.
Responding to this, CA Technologies is the only vendor that supports clients as they work to make their bots smarter, supporting their need to keep up with the speed of the business, which requires continuous updates. With new integrations with leading RPA vendors, CA Continuous Delivery gives customers the business agility to deploy bots into production faster.
“This research shows that business automation is clearly recognised as a key enabler for business success. Organisations that fail to embrace modern automation solutions risk becoming industry laggards. Never has the phrase ‘Automate or Die’ become more relevant,” said Kurt Sand, general manager, CA Technologies. “CA offers enterprises the reliable, secure solutions needed to enable a business to become efficiently automated.”
European spending on mobility solutions is forecast to reach $293 billion in 2018, according to a new update to the Worldwide Semiannual Mobility Spending Guide from International Data Corporation (IDC). IDC expects the market to post a five-year compound annual growth rate (CAGR) of 2.4%, leading to spending in excess of $325 billion in 2022.
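The headline figures rest on the standard compound annual growth rate (CAGR) relationship; a minimal sketch (generic arithmetic, not taken from the IDC report) shows how the numbers hang together:

```python
# Minimal sketch (not from the IDC report): CAGR relates a start value,
# an end value, and a number of years of compound growth.
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

def project(start, rate, years):
    """Grow `start` at `rate` per year for `years` years."""
    return start * (1.0 + rate) ** years

# Projecting the 2018 figure ($293B) forward four years at the headline
# 2.4% CAGR lands close to the 2022 forecast of $325B+; the small gap is
# explained by rounding and by the five-year CAGR being computed on a
# 2017 base year rather than 2018.
forecast_2022 = project(293.0, 0.024, 4)
```

The same helper reproduces any of the segment CAGRs quoted later in the piece, given a start and end value.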
Western Europe (WE) accounted for 78% of total mobility spending in 2017 and will remain by far the largest contributor in the wider European region, with a CAGR of 2.7% for the 2017–2022 period. At the same time, mobility expenditure in Central and Eastern Europe (CEE) will grow at 1.2%.
Mobility plays a central role in enterprises' digital strategies for an agile business, increased competitiveness, and greater customer engagement. New mobility use cases and technology adoption among enterprises are driving growth in all market segments, from devices to software and services. Nevertheless, security and compliance are top issues for large mobility projects. According to Dusanka Radonicic, senior research analyst with the Telecoms & Networking Group in IDC Central and Eastern Europe, Middle East, and Africa (CEMA), increased data usage is driving growth in mobile business services spending, with business-specific applications dependent on high-speed data connectivity.
Consumers will account for around 65% of European mobility spending throughout the forecast, with slightly more than half of this amount going toward mobile connectivity services, and most of the remainder going toward devices, mainly smartphones.
Banking is forecast to be the European industry leader for mobility spending in 2018, followed by discrete manufacturing and professional services. More than half of mobility spending among businesses will go to mobile connectivity services and enterprise mobility services. Mobile devices will be the second-largest area of spending, followed by still considerable investments in mobile applications and mobile application development platforms. The industries that will see the strongest mobility spending growth over the forecast period are utilities, state/local government and banking, each with a CAGR above 6%.
From a technology perspective, mobility services will be the largest area of spending throughout the forecast period, surpassing $190 billion in 2022. Within the services component, mobile connectivity services will continue to represent the largest portion, albeit with relatively flat growth, while enterprise mobility services will post a CAGR exceeding 14%. Hardware will be the second largest technology category, with spending forecast to reach $129 billion in 2022. Despite being the smallest technology category, software will see strong spending growth over the forecast period (12.1% CAGR), driven by mobile enterprise applications. Businesses will also increase their development efforts with mobile application development platforms recording a five-year CAGR of 18.4%, making it the fastest-growing technology category overall. Enterprise mobility management and mobile enterprise security are the most mature segments in the software category, with expected CAGRs of approximately 8% reflecting that status.
Western Europe represents more than three-quarters of the European market and is forecast to account for approximately 79% of total value in 2022. "Western European companies now have a clear understanding of the benefit mobility technologies can deliver and are evolving their approach — focusing on applications integrated with enterprise systems," says Gabriele Roberti, research manager for IDC EMEA Customer Insights and Analysis. "In customer-centric industries, like financial services and retail, mobility is becoming a pivotal part of digital transformation." Among Western European countries, the United Kingdom, Germany, and France remain the top spenders on mobility technologies, accounting for more than half of the regional market.
“Central and Eastern European companies are adapting their business models to integrate mobility technologies, seeking more flexible company structures to foster innovation, improving the infrastructure and connectivity, and mitigating the security concerns,” says Petr Vojtisek, research analyst for IDC CEMA Customer Insights and Analysis. Among CEE countries, Russia remains the single biggest market, accounting for over 38% of overall CEE mobility spending, followed by the cluster of countries referred to as the Rest of CEE (Slovakia, Ukraine, and Croatia), and by Poland. The fastest-growing countries will be jointly the Czech Republic and Romania, each with a spending CAGR of around 1.8% throughout the forecast period.
Pan EU survey results point to increased appetite to shift physical security systems to the cloud to access big data insights and meet business objectives more readily.
A new study of 1,500 IT decision makers across Europe into attitudes towards and behaviours behind cloud adoption has provided revelatory insights for the physical security industry. The survey, conducted by Morphean, a Video Surveillance-as-a-Service (VSaaS) innovator, highlights not only a favourable shift towards cloud but also a need to adopt technology that can extract the intelligent insights needed to accelerate business growth.
Respondents from organisations with 25 employees and above from the UK, France and Germany were asked to share their views on cloud technologies, cloud security, future cloud investment and new areas of cloud growth. The results showed that while nearly 9 out of 10 businesses surveyed are already using cloud-based software solutions, 89% of respondents would possibly or definitely move physical security technology such as video surveillance and access control to the cloud. Furthermore, 92% felt it to be important or very important that their physical security solutions meet their overall business objectives.
Key survey findings include:
International Data Corporation's (IDC) EMEA Server Tracker shows that in the second quarter of 2018, the EMEA server market reported a year-on-year (YoY) increase in vendor revenues of 29.0% to $4.1 billion, with a YoY decline of 3.7% in units shipped to 515,000.
From a euro standpoint, 2Q18 EMEA server revenues increased 18.7% YoY to €3.4 billion. The top 5 vendors in the region and their quarterly revenues are displayed in the table below.
Top 5 EMEA Vendor Revenues ($M): 2Q17 Server Revenue, 2Q17 Market Share, 2Q18 Server Revenue, 2Q18 Market Share, 2Q17/2Q18 Revenue Growth. Source: IDC Quarterly Server Tracker, 2Q18
Reviewing the quarter at a product level, standard rack optimized grew 29.0% YoY, led by a strong performance in the U.K. and Germany. Standard multinode shipments grew 251.4% YoY, driven largely by the U.K., Germany, and the Netherlands. Custom multinode revenues remained strong, increasing 87.3% YoY. Higher average selling prices across standard and custom servers have driven improved revenues over the quarter. Large systems also performed well, with shipments increasing 48.6% YoY due to major refreshes in Denmark, Italy, France, and Germany.
"The second quarter of 2018 saw the first shipments of EPYC processors in Western Europe. Expectations are that these processors will align more closely to Moore's Law, a factor that will only drive faster adoption in the future," said Eckhardt Fischer, senior research analyst, IDC Western Europe.
"ODM growth has been driven by datacenter buildout of several hyperscale public cloud providers — AWS, Microsoft, Google — in Western Europe. This growth has slowed down somewhat in recent quarters," said Kamil Gregor, senior research analyst, IDC Western Europe. "France is the major exception, with both AWS and Microsoft opening new datacenters around Paris and Marseille. The hyperscalers now focus on diversifying the portfolio of EMEA countries with datacenter presence, which will translate into significant ODM growth in new geographies such as Austria."
Segmenting market performance at a Western European level, many major countries experienced overall shipment declines accompanied by significant increases in average selling prices. As Western Europe's largest market during the quarter, Germany experienced a 14.0% decline in shipments with a 23.2% increase in revenues. The U.K. saw an 8.7% shipment decline and a 29.9% revenue increase, which helped propel overall Western European revenues over the second quarter. U.K. growth was driven by revenue increases for all top 3 vendors, particularly in standard rack optimized and blade performance. Ireland saw a significant revenue boost from a large hyperscale deal, which drove standard multinode revenues to $28.5 million. Spain was the only Western European country to see a revenue decline over the year, due to a major multinode deal in 2Q17 that significantly impacted the country's total spend.
"Central and Eastern Europe, the Middle East, and Africa server revenue increased by 23.3% year over year to $790.1 million in the second quarter of 2018," said Jiri Helebrand, research manager, IDC CEMA. "Strong server sales were primarily the result of undergoing refresh cycles and ODM server shipments. The Central and Eastern Europe subregion grew 33.7% year over year with revenue of $413.7 million. Romania, Poland, and Czech Republic recorded the strongest growth, supported by a solid economic situation in the region and interest in investing to update datacenter infrastructure. The Middle East and Africa subregion grew 13.6% year over year to $376.5 million in 2Q18, somewhat slower compared with WE and CEE, partly due to seasonality and challenges due to weakening local currencies. Qatar was the top performer, benefitting from a large public deal. South Africa and Egypt also recorded solid growth driven by investments in new IT infrastructure to support next-generation applications."
Modular server category: Server form factors have been amended to include the new "modular" category that encompasses today's blade servers and density-optimized servers (which are being renamed multinode servers). As the differentiation between these two types of servers continues to become blurred, IDC is moving forward with the "modular server" category as it better reflects the directions in which vendors and the entire market are moving when it comes to server design.
Multinode (density-optimized) servers: Modular platforms that do not meet IDC's definition of a blade are classified as multinode. This was formerly called density optimized in IDC's server research and server-related tracker products.
Analyst firm Gartner has been busy in recent weeks, producing a range of reports around its annual Gartner Symposium/ITxpo. Technology trends, possible digital disruptions and predictions all come under the spotlight.
Gartner has highlighted the top strategic technology trends that organizations need to explore in 2019. Gartner defines a strategic technology trend as one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or a rapidly growing trend with a high degree of volatility that is expected to reach a tipping point over the next five years.
“The Intelligent Digital Mesh has been a consistent theme for the past two years and continues as a major driver through 2019. Trends under each of these three themes are a key ingredient in driving a continuous innovation process as part of a ContinuousNEXT strategy,” said David Cearley, vice president and Gartner Fellow. “For example, artificial intelligence (AI) in the form of automated things and augmented intelligence is being used together with IoT, edge computing and digital twins to deliver highly integrated smart spaces. This combinatorial effect of multiple trends coalescing to produce new opportunities and drive new disruption is a hallmark of the Gartner top 10 strategic technology trends for 2019.”
The top 10 strategic technology trends for 2019 are:
Autonomous things, such as robots, drones and autonomous vehicles, use AI to automate functions previously performed by humans. Their automation goes beyond that provided by rigid programming models; they exploit AI to deliver advanced behaviors that interact more naturally with their surroundings and with people.
“As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things, with multiple devices working together, either independently of people or with human input,” said Mr. Cearley. “For example, if a drone examined a large field and found that it was ready for harvesting, it could dispatch an “autonomous harvester.” Or in the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones on board the vehicle could then ensure final delivery of the package.”
Augmented analytics focuses on a specific area of augmented intelligence, using machine learning (ML) to transform how analytics content is developed, consumed and shared. Augmented analytics capabilities will advance rapidly to mainstream adoption, as a key feature of data preparation, data management, modern analytics, business process management, process mining and data science platforms. Automated insights from augmented analytics will also be embedded in enterprise applications — for example, those of the HR, finance, sales, marketing, customer service, procurement and asset management departments — to optimize the decisions and actions of all employees within their context, not just those of analysts and data scientists. Augmented analytics automates the process of data preparation, insight generation and insight visualization, eliminating the need for professional data scientists in many situations.
“This will lead to citizen data science, an emerging set of capabilities and practices that enables users whose main job is outside the field of statistics and analytics to extract predictive and prescriptive insights from data,” said Mr. Cearley. “Through 2020, the number of citizen data scientists will grow five times faster than the number of expert data scientists. Organizations can use citizen data scientists to fill the data science and machine learning talent gap caused by the shortage and high cost of data scientists.”
The market is rapidly shifting from an approach in which professional data scientists must partner with application developers to create most AI-enhanced solutions to a model in which the professional developer can operate alone using predefined models delivered as a service. This provides the developer with an ecosystem of AI algorithms and models, as well as development tools tailored to integrating AI capabilities and models into a solution. Another level of opportunity for professional application development arises as AI is applied to the development process itself to automate various data science, application development and testing functions. By 2022, at least 40 percent of new application development projects will have AI co-developers on their team.
“Ultimately, highly advanced AI-powered development environments automating both functional and nonfunctional aspects of applications will give rise to a new age of the ‘citizen application developer’ where nonprofessionals will be able to use AI-driven tools to automatically generate new solutions. Tools that enable nonprofessionals to generate applications without coding are not new, but we expect that AI-powered systems will drive a new level of flexibility,” said Mr. Cearley.
A digital twin refers to the digital representation of a real-world entity or system. By 2020, Gartner estimates there will be more than 20 billion connected sensors and endpoints and digital twins will exist for potentially billions of things. Organizations will implement digital twins simply at first. They will evolve them over time, improving their ability to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.
“One aspect of the digital twin evolution that moves beyond IoT will be enterprises implementing digital twins of their organizations (DTOs). A DTO is a dynamic software model that relies on operational or other data to understand how an organization operationalizes its business model, connects with its current state, deploys resources and responds to changes to deliver expected customer value,” said Mr. Cearley. “DTOs help drive efficiencies in business processes, as well as create more flexible, dynamic and responsive processes that can potentially react to changing conditions automatically.”
The edge refers to endpoint devices used by people or embedded in the world around us. Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to these endpoints. It tries to keep the traffic and processing local, with the goal being to reduce traffic and latency.
In the near term, edge is being driven by IoT and the need to keep processing close to the endpoint rather than on a centralized cloud server. However, rather than creating a new architecture, cloud computing and edge computing will evolve as complementary models, with cloud services being managed as a centralized service executing not only on centralized servers but also in distributed servers on-premises and on the edge devices themselves.
Over the next five years, specialized AI chips, along with greater processing power, storage and other advanced capabilities, will be added to a wider array of edge devices. The extreme heterogeneity of this embedded IoT world and the long life cycles of assets such as industrial systems will create significant management challenges. Longer term, as 5G matures, the expanding edge computing environment will have more robust communication back to centralized services. 5G provides lower latency, higher bandwidth and, very importantly for edge, a dramatic increase in the number of nodes (edge endpoints) per square km.
Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.
“Over time, we will shift from thinking about individual devices and fragmented user interface (UI) technologies to a multichannel and multimodal experience. The multimodal experience will connect people with the digital world across hundreds of edge devices that surround them, including traditional computing devices, wearables, automobiles, environmental sensors and consumer appliances,” said Mr. Cearley. “The multichannel experience will use all human senses as well as advanced computer senses (such as heat, humidity and radar) across these multimodal devices. This multiexperience environment will create an ambient experience in which the spaces that surround us define “the computer” rather than the individual devices. In effect, the environment is the computer.”
Blockchain, a type of distributed ledger, promises to reshape industries by enabling trust, providing transparency and reducing friction across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Today, trust is placed in banks, clearinghouses, governments and many other institutions as central authorities, with the “single version of the truth” maintained securely in their databases. The centralized trust model adds delays and friction costs (commissions, fees and the time value of money) to transactions. Blockchain provides an alternative trust model and removes the need for central authorities in arbitrating transactions.
”Current blockchain technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations. This is particularly so with the complex elements that support more sophisticated scenarios,” said Mr. Cearley. “Despite the challenges, the significant potential for disruption means CIOs and IT leaders should begin evaluating blockchain, even if they don’t aggressively adopt the technologies in the next few years.”
Many blockchain initiatives today do not implement all of the attributes of blockchain — for example, a highly distributed database. These blockchain-inspired solutions are positioned as a means to achieve operational efficiency by automating business processes, or by digitizing records. They have the potential to enhance sharing of information among known entities, as well as improving opportunities for tracking and tracing physical and digital assets. However, these approaches miss the value of true blockchain disruption and may increase vendor lock-in. Organizations choosing this option should understand the limitations, be prepared to move to complete blockchain solutions over time, and recognize that the same outcomes may be achieved with more efficient and tuned use of existing nonblockchain technologies.
A smart space is a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements — including people, processes, services and things — come together in a smart space to create a more immersive, interactive and automated experience for a target set of people and industry scenarios.
“This trend has been coalescing for some time around elements such as smart cities, digital workplaces, smart homes and connected factories. We believe the market is entering a period of accelerated delivery of robust smart spaces with technology becoming an integral part of our daily lives, whether as employees, customers, consumers, community members or citizens,” said Mr. Cearley.
Digital Ethics and Privacy
Digital ethics and privacy is a growing concern for individuals, organizations and governments. People are increasingly concerned about how their personal information is being used by organizations in both the public and private sector, and the backlash will only increase for organizations that are not proactively addressing these concerns.
“Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components,” said Mr. Cearley. “Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organization’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing.’”
Quantum computing (QC) is a type of nonclassical computing that operates on the quantum state of subatomic particles (for example, electrons and ions) that represent information as elements denoted as quantum bits (qubits). The parallel execution and exponential scalability of quantum computers means they excel at problems too complex for a traditional approach, or where a traditional algorithm would take too long to find a solution. Industries such as automotive, financial, insurance, pharmaceuticals, military and research organizations have the most to gain from advancements in QC. In the pharmaceutical industry, for example, QC could be used to model molecular interactions at atomic levels to accelerate time to market for new cancer-treating drugs, or to accelerate and more accurately predict the interaction of proteins, leading to new pharmaceutical methodologies.
“CIOs and IT leaders should start planning for QC by increasing their understanding of how it can apply to real-world business problems. Learn while the technology is still in the emerging state. Identify real-world problems where QC has potential and consider the possible impact on security,” said Mr. Cearley. “But don’t believe the hype that it will revolutionize things in the next few years. Most organizations should learn about and monitor QC through 2022 and perhaps exploit it from 2023 or 2025.”
Gartner has also revealed seven digital disruptions that organizations may not be prepared for. These include several categories of disruption, each of which represents a significant potential for new disruptive companies and business models to emerge.
“The single largest challenge facing enterprises and technology providers today is digital disruption,” said Daryl Plummer, vice president and Gartner Fellow. “The virtual nature of digital disruptions makes them much more difficult to deal with than past technology-triggered disruptions. CIOs must work with their business peers to pre-empt digital disruption by becoming experts at recognizing, prioritizing and responding to early indicators.”
Gartner analysts presented seven key digital disruptions that CIOs may not see coming during Gartner Symposium/ITxpo.
Quantum computing (QC) is a type of nonclassical computing that is based on the quantum state of subatomic particles. Classic computers operate using binary bits, where a bit is either 0 or 1: true or false, positive or negative. In QC, however, the bit is replaced by a quantum bit, or qubit. Unlike the strictly binary bits of classic computing, a qubit can represent 1, 0, or a superposition of both, partly 0 and partly 1 at the same time.
Superposition is what gives quantum computers speed and parallelism, meaning that these computers could theoretically work on millions of computations at once. Further, qubits can be linked with other qubits in a process called entanglement. When combined with superposition, quantum computers could process a massive number of possible outcomes at the same time.
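The "partly 0 and partly 1" idea can be made concrete with a few lines of linear algebra. The sketch below (a generic state-vector simulation, not tied to any vendor mentioned here) builds an equal superposition with the Hadamard gate and shows that combining n qubits yields 2**n simultaneous amplitudes, which is the parallelism described above:

```python
import numpy as np

# A qubit's state is a pair of complex amplitudes (a, b) with
# |a|^2 + |b|^2 = 1; measurement yields 0 with probability |a|^2.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

zero = np.array([1.0, 0.0])   # the |0> basis state
plus = H @ zero               # equal superposition of |0> and |1>

# The joint state of n qubits is the Kronecker (tensor) product of the
# individual states, giving 2**n amplitudes evolving at once.
n = 3
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)

probs = np.abs(state) ** 2    # 2**3 = 8 basis states, probability 1/8 each
```

On classical hardware this simulation costs memory exponential in n, which is exactly why the problems Mr. Plummer describes remain out of reach for classic architectures.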
“Today’s data scientists, focused on machine learning (ML), artificial intelligence (AI) and data and analytics, simply cannot address some difficult and complex problems because of the compute limitations of classic computer architectures. Some of these problems could take today’s fastest supercomputers months or even years to run through a series of permutations, making it impractical to attempt,” said Mr. Plummer. “Quantum computers have the potential to run massive amounts of calculations in parallel in seconds. This potential for compute acceleration, as well as the ability to address difficult and complex problems, is what is driving so much interest from CEOs and CIOs in a variety of industries. But we must always be conscious of the hype surrounding the quantum computing model. QC is good for a specific set of problem solutions, not all general-purpose computing.”
Real-Time Language Translation
Real-time language translation could, in effect, fundamentally change communication across the globe. Devices such as translation earbuds and voice and text translation services can perform translation in real time, breaking down language barriers with friends, family, clients and colleagues. This technology could disrupt not only intercultural language barriers but also the role of language translators, which may no longer be needed.
“To prepare for this disruption, CIOs should equip employees in international jobs with experimental real-time translators to pilot streamlined communication,” said Mr. Plummer. “This will help establish multilingual disciplines to help employees work more effectively across languages.”
Nanotechnology is science, engineering and technology conducted at the nanoscale (1 to 100 nanometers). The implication is that solutions can be engineered at the level of individual atoms and molecules. Nanotech is used to create new effects in materials science, such as self-healing materials. Applications in medicine, electronics, security and manufacturing herald a world of small solutions that fill in the gaps in the macroverse in which we live.
“Nanotechnology is rapidly becoming as common a concept as many others, and yet still remains sparsely understood in its impact to the world at large,” said Mr. Plummer. “When we consider applications that begin to allow things like 3D printing at nanoscale, then it becomes possible to advance the cause of printed organic materials and even human tissue that is generated from individual stem cells. 3D bioprinting has shown promise and nanotech is helping deliver on it.”
Digital business will stretch conventional management methods past the breaking point. The enterprise will need to make decisions in real time about unpredictable events, based on information from many different sources (such as Internet of Things [IoT] devices) beyond the organization’s control. Humans move too slowly, stand-alone smart machines cost too much, and hyperscale architectures cannot deal with the variability. Swarm intelligence could tackle the mission at a low cost.
Swarm intelligence is the collective behavior of decentralized, self-organized systems, natural or artificial. A swarm consists of small computing elements (either physical entities or software agents) that follow simple rules for coordinating their activities. Such elements can be replicated quickly and inexpensively. Thus, a swarm can be scaled up and down easily as needs change. CIOs should start exploring the concept to scale management, especially in digital business scenarios.
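As an illustration of "simple rules" producing coordinated behaviour, here is a minimal consensus-style sketch (a toy model of the concept, not any vendor's implementation): each agent follows one local rule, moving toward the average position of nearby agents, and the swarm contracts with no central controller involved.

```python
import random

# Toy swarm (an illustrative assumption, not from the article): agents on a
# line each apply one simple local rule per step. No agent has a global view.
def step(positions, radius=2.0):
    new = []
    for i, p in enumerate(positions):
        # Each agent only sees neighbours within `radius` of itself.
        neighbours = [q for j, q in enumerate(positions)
                      if j != i and abs(q - p) <= radius]
        if neighbours:
            target = sum(neighbours) / len(neighbours)
            p += 0.5 * (target - p)   # move halfway toward the local average
        new.append(p)
    return new

random.seed(1)
swarm = [random.uniform(0, 10) for _ in range(20)]   # cheap, replicable agents
init_spread = max(swarm) - min(swarm)

for _ in range(50):
    swarm = step(swarm)

spread = max(swarm) - min(swarm)   # shrinks as the swarm self-organises
```

Because every new position is an average of existing ones, the swarm's spread can never grow, and scaling the swarm up or down is just a matter of adding or removing agents, which is the economic point made above.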
Human-machine interface (HMI) offers solutions providers the opportunity to differentiate with innovative, multimodal experiences. In addition, people living with disabilities benefit from HMIs that are being adapted to their needs, including some already in use within organizations of all types. Technology will give some of these people “superabilities,” spurring people without disabilities to also employ the technology to keep up.
For example, electromyography (EMG) wearables allow users who would otherwise be unable to use smartphones and computers to do so, via sensors that measure muscle activity. Muscular contraction generates electrical signals that can be measured from the skin surface. Sensors may be placed on a single part or multiple parts of the body, as appropriate to the individual. The gestures are in turn interpreted by an HMI linked to another device, such as a PC or smartphone. Wearable devices using myoelectric signals have already hit the consumer market and will continue migrating to devices intended for people with disabilities.
Software Distribution Revolution
Software procurement and acquisition are undergoing a fundamental shift. The way in which software is located, bought and updated now falls within the province of the software distribution marketplace. With the continued growth of cloud platforms from Amazon Web Services (AWS), Microsoft, Google, IBM and others, as well as the ever-increasing introduction of cloud-oriented products and services, the role of marketplaces for selling and buying is gathering steam. The cloud platform providers realize (to varying degrees) that they must remove as much friction as possible from the buying and owning processes, both for their own offerings and for those of their independent software vendors (ISVs), i.e., partners. ISVs and cloud technology service providers (TSPs) recognize the need to reach large and increasingly diverse buying audiences.
“Establishing one’s own marketplace or participating as a provider in a third-party marketplace is a route to market that is becoming increasingly popular. Distributors and other third parties also see the opportunity to create strong ecosystems (and customer bases) while driving efficiencies for partners and technology service providers,” said Mr. Plummer.
The use of other devices, such as virtual personal assistants (VPAs), smartwatches and other wearables, may mean a shift in how people continue to use the smartphone.
“Smartphones are, today, critical for connections and media consumption. However, over time they will become less visible as they stay in pockets and backpacks. Instead, consumers will use a combination of voice-input and VPA technologies and other wearable devices to navigate a store or public space such as an airport or stadium without walking down the street with their eyes glued to a smartphone screen,” said Mr. Plummer.
CIOs and IT leaders should use the wearability of a technology as a guiding principle, and should investigate and pilot wearable solutions to improve worker effectiveness, increase safety, enhance customer experiences and improve employee satisfaction.
Top predictions for IT organisations and users for 2019 and beyond
Gartner has revealed its top predictions for 2019 and beyond. Gartner’s top predictions examine three fundamental effects of continued digital innovation: artificial intelligence (AI) and skills, cultural advancement, and processes becoming products that result from increased digital capabilities and the emergence of continuous conceptual change in technology.
“As the advance of technology concepts continues to outpace the ability of enterprises to keep up, organizations now face the possibility that so much change will increasingly seem chaotic. But chaos does not mean there is no order. The key is that CIOs will need to find their way to identifying practical actions that can be seen within the chaos,” said Daryl Plummer, vice president and Gartner Fellow, Distinguished.
“Continuous change can be made into an asset if an organization sharpens its vision in order to see the future coming ahead of the change that vision heralds. Failing that, there must be a focus on a greater effort to see the need to shift the mindset of the organization. With either of these two methods, practical actions can be found in even the seemingly unrelated predictions of the future.”
Through 2020, 80 percent of AI projects will remain alchemy, run by wizards whose talents won’t scale widely in the organization.
In the last five years, the increasing popularity of AI techniques has facilitated the proliferation of projects across a wide number of organizations worldwide. However, change is still outpacing the production of competent AI professionals. The talent needed for AI is not only technically demanding but also broad: everyone from mathematically savvy data scientists to inventive data engineers, and from rigorous operations research professionals to shrewd logisticians, is needed.
“The large majority of existing AI techniques talents are skilled at cooking a few ingredients, but very few are competent enough to master a few recipes — let alone invent new dishes,” said Mr. Plummer. “Through 2020, a large majority of AI projects will remain craftily prepared in artisan IT kitchens. The premises of a more systematic and effective production will come when organizations stop treating AI as an exotic cuisine and start focusing on business value first.”
By 2023, there will be an 80 percent reduction in missing people in mature markets compared with 2018 due to AI face recognition.
Over the next few years, facial matching and 3D facial imaging will become important elective aspects of capturing data about vulnerable populations, such as children and the elderly or people who are otherwise impaired. Such measures will reduce the number of missing people, though not through large numbers of dramatic discoveries in large public crowds, which is the popularly imagined scenario. The most important advances will take place in more robust image capture, image library development, image analysis strategy and public acceptance. Additionally, with improved on-device/edge AI capability on cameras, public and private sectors will be able to prefilter the necessary image data instead of sending all video streams to the cloud for processing.
By 2023, U.S. emergency department visits will be reduced by 20 million due to enrollment of chronically ill patients in AI-enhanced virtual care.
Clinician shortages, particularly in rural and some urban areas, are driving healthcare providers to look for new approaches to delivering care. In many cases, virtual care has shown it can offer care more conveniently and cost-effectively than conventional face-to-face care. Gartner research shows that successful use of virtual care helps control costs, improves quality of delivery and improves access to care. Without change, the traditionally rigid physical care delivery methods will increasingly render healthcare providers noncompetitive. This transition will not come easily, and will require modification of cultural attitudes and healthcare financial models.
By 2023, 25 percent of organizations will require employees to sign an affidavit to avoid cyberbullying, but 70 percent of these initiatives will fail.
To prevent actions that have a detrimental impact on the organization’s reputation, employers want to strengthen employee behavioral guidelines (such as anti-harassment and discrimination norms) when using social media. Signing an affidavit of agreement to refrain from cyberbullying is a logical next step. Alternatively, legacy code of conduct agreements should be updated to incorporate cyberbullying.
“However, cyberbullying isn’t stopped by signing an agreement; it’s stopped by changing culture,” said Mr. Plummer. “That culture change should include teaching employees how to recognize what cyberbullying is and provide a means of reporting it when they see it. Formulate realistic policies that balance deterrence measures and strict definition, regulation or behavior monitoring. Make sure employees understand why these measures are needed and how they benefit the organization and themselves.”
Through 2022, 75 percent of organizations with frontline decision-making teams reflecting diversity and an inclusive culture will exceed their financial targets.
Business leaders across all functions understand the positive business impact of diversity and inclusion (D&I). A key business requirement currently is the need for better decisions made fast at the lowest level possible, ideally at the frontline. To create inclusive teams, organizations need to move beyond obvious diversity cues such as gender and race, to seek out people with diverse work styles and thought patterns. A final key factor to ensure D&I initiatives directly contribute to business results is to manage scale and employee engagement with them. There are numerous technologies that can significantly enhance the scale and effectiveness of interventions to diagnose the current state of inclusion, develop leaders who foster inclusion and embed inclusion into daily business execution.
By 2021, 75 percent of public blockchains will suffer “privacy poisoning” — inserted personal data that renders the blockchain noncompliant with privacy laws.
Companies that implement blockchain systems without managing privacy issues by design run the risk of storing personal data that can’t be deleted without compromising chain integrity. A public blockchain is a pseudo-anarchic autonomous system such as the internet. Nobody can sue the internet, or make it accountable for the data being transmitted. Similarly, a public blockchain can’t be made accountable for the content it bears.
Any business operating processes using a public blockchain must maintain a copy of the entire blockchain as part of its systems of record. A public blockchain poisoned with personal data can’t be replaced, anonymized and/or structurally deleted from the shared ledger. Therefore, the business will be unable to resolve its needs to keep records with its obligations to comply with privacy laws.
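Why inserted personal data can't simply be removed follows from how blocks are chained. The sketch below is a deliberately simplified model (an assumption for illustration — not any real blockchain implementation): each block's hash covers the previous block's hash, so retroactively "deleting" or redacting data in an old block invalidates every block after it.

```python
import hashlib

def block_hash(data, prev_hash):
    """Hash covers the block's data AND the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "genesis"
    for data in records:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Re-derive every hash; any tampering with old data breaks the links."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["tx1", "name=Jane Doe", "tx3"])  # personal data lands in block 2
assert verify(chain)
chain[1]["data"] = "REDACTED"                         # attempt to 'delete' it
print(verify(chain))                                  # False: chain integrity is broken
```

This is the crux of the compliance conflict: the very property that makes the ledger trustworthy (tamper-evidence) is what prevents honouring a legal obligation to erase personal data.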
By 2023, ePrivacy regulations will increase online costs by minimizing the use of “cookies” thus crippling the current internet ad revenue machine.
“None of the current or future legislation will be a 100 percent prohibition on personalized ads. However, the legislation does cripple the current internet advertising infrastructure and the players within it,” said Mr. Plummer. “The current ad revenue machine is an intricate overlapping of companies that are able to track individuals, compile personal data, and analyze, predict and target advertisements. By interrupting the data flow, as well as making some uses illegal, the delicate balance of service and provision that has been built up over decades of free use of data is, at the very least, upset.”
Through 2022, a fast path to digital will be converting internal capabilities to external revenue-generating products using cloud economics and flexibility.
For years, internal IT organizations that developed unique capabilities wanted to take them to market to generate value for their organizations, but economic, technical and other issues did not allow this to happen. Cloud infrastructure and cloud services providers change all of these dynamics. Capacity for supporting the scale of the application solution is the cloud provider’s responsibility. Market-dominant app stores take over distribution and aspects of marketing. Simpler, more accessible cloud tools make the support and enhancement of applications as products easier. Cloud also shifts the impacts on internal financial statements from below-the-line to above-the-line areas. As more aggressive companies convert internal processes and data into marketable solutions and start to report digital revenue gains, other organizations will follow suit.
By 2022, companies leveraging the “gatekeeper” position of the digital giants will capture 40 percent of global market share, on average, in their industries.
Global market share of the top four firms by industry fell by four percentage points between 2006 and 2014 as European firms, likely weakened and distracted by economic and monetary crises, lost market share to a rising group of emerging market firms, particularly those from China.
“We believe this is about to change as the powerful economics of scale and network effects assert themselves on a global basis. Innovations in digital technology are producing an ever greater density of connections, even more value from the intelligence captured across those connections (such as through machine learning and AI) and greater immediacy of exchange between connections (such as through faster data transmission speeds),” said Mr. Plummer.
But the path to achieving and sustaining a dominant market share position globally is likely to lead to and through one or more of the digital giants (Google, Apple, Facebook, Amazon, Baidu, Alibaba and Tencent) and their ecosystems. These giants already command massive consumer share and have begun to use their “gatekeeper” positions and infrastructure to enter the B2B space as well. They are keenly aware of the trend to connect, are leading the advance into capabilities such as AI, and have aggressively taken actions to invade the “physical world” and make it part of their digital world — a world they control.
Through 2021, social media scandals and security breaches will have effectively zero lasting consumer impact.
The core point of this prediction is that the benefits of using digital technologies outweigh potential, but unknown, future risks. Consumer adoption of digital technologies will continue to grow, and backlash of organizations taking technology too far will be short term.
“For the last five years, multiple issues have arisen every year, in each case leading to significant coverage in the media, but the ramifications have been minimal. A main reason is the lack of choice and competition,” said Mr. Plummer. “The ‘network effect,’ which makes it hard to switch to a different service because everybody is using the service, has proven to be very powerful. Even with a negative sentiment there has been no change so far. Why would the next years be different?”
Global IT spending to grow 3.2 percent in 2019
Worldwide IT spending is projected to total $3.8 trillion in 2019, an increase of 3.2 percent from expected spending of $3.7 trillion in 2018, according to the latest forecast by Gartner, Inc.
“While currency volatility and the potential for trade wars are still playing a part in the outlook for IT spending, it is the shift from ownership to service that is sending ripples through every segment of the forecast,” said John-David Lovelock, research vice president at Gartner. “What this signals, for example, is more enterprise use of cloud services — instead of buying their own servers, they are turning to the cloud. As enterprises continue their digital transformation efforts, shifting to ‘pay for use’ will continue. This sets enterprises up to deal with the sustained and rapid change that underscores digital business.”
Enterprise software spending is forecast to experience the highest growth, with an 8.3 percent increase in 2019 (see Table 1). Software as a service (SaaS) is driving growth in almost all software segments, particularly customer relationship management (CRM), due to an increased focus on providing better customer experiences. Cloud software will grow at more than 22 percent this year, compared with 6 percent growth for all other forms of software. While core applications such as ERP, CRM and supply chain continue to get the lion’s share of dollars, security and privacy are of particular interest right now. Eighty-eight percent of recently surveyed global CIOs have deployed or plan to deploy cybersecurity software and other technology in the next 12 months.
Table 1. Worldwide IT Spending Forecast (Billions of U.S. Dollars)
[Table figures not reproduced; columns included 2019 Growth (%), with rows such as Data Center Systems.]
Source: Gartner (October 2018)
Gartner analysts are discussing the emerging trends that are driving digital transformation and IT spending this week at Gartner Symposium/ITxpo, which runs through Thursday.
In 2018, data center systems spending is expected to grow 6 percent, buoyed by a strong server market that saw spending growth of more than 10 percent over the last year and will come in at 5.7 percent growth in 2018. However, in 2019 servers will shift back to a declining market, dropping 1 percent to 3 percent every year for the next five years. This, in turn, will impact overall data center systems spending, with growth slowing to 1.6 percent in 2019.
IT services will be a key driver for IT spending in 2019 as the market is forecast to reach $1 trillion in 2019, an increase of 4.7 percent from 2018. An expected global slowdown in economic prosperity, paired with internal pressures to cut spending, is driving organizations to optimize enterprise external spend for business services such as consulting. In a recent Gartner study, 46 percent of organizations indicated that IT services and supplier consolidation was in their top three most-effective cost-optimization approaches.
Worldwide spending for devices — PCs, tablets and mobile phones — is forecast to grow 2.4 percent in 2019, reaching $706 billion, up from $689 billion in 2018. Demand for PCs in the corporate sector has been strong, driven by Windows 10 PC hardware upgrades that should continue until 2020. However, the PC market may see some impact from the Intel CPU shortage. While this shortage will have some short-term impacts, Gartner does not expect any lasting impact on overall PC demand. The current expectation is that the shortage will continue into 2019, but Intel will prioritize the high-end CPU as well as the CPUs for business PCs. In the meantime, AMD will pick up the part of the market where Intel cannot supply CPUs.
“PCs, laptops and tablets have reached a new equilibrium state. These markets currently have stable demand from consumers and enterprises. Vendors have only subtle technology differentiation, which is pushing them to offer PC as a Service (PCaaS) in order to lock clients into multiyear recurring revenue streams and offer new bundled service options,” said Mr. Lovelock.
Digital business is maturing, from tentative experiment to application at massive scale. CIOs must evolve their thinking to be in tune with this new era of rapid increases in the scale of digital business. Gartner, Inc.’s annual global survey of CIOs showed that the CIO role will remain critical in transformation workflows.
The 2019 Gartner CIO Agenda survey gathered data from more than 3,000 CIO respondents in 89 countries and all major industries, representing approximately $15 trillion in revenue/public-sector budgets and $284 billion in IT spending.
The survey results show that digital business reached a tipping point this year. Forty-nine percent of CIOs report their enterprises have already changed their business models or are in the process of changing them.
“What we see here is a milestone in the transition to the third era of IT, the digital era,” said Andy Rowsell-Jones, vice president and distinguished analyst at Gartner. “Initially, CIOs were making a leap from IT-as-a-craft to IT-as-an-industrial-concern. Today, 20 years after we launched the first CIO Agenda survey, digital initiatives, along with growth, are the top priorities for CIOs in 2019. Digital has become mainstream.”
Scaling Digital Business
The survey found that 33 percent of respondents worldwide evolved their digital endeavors to scale, up from 17 percent in the previous year. The major driver for scale is the intent to increase consumer engagement via digital channels.
“The ability to support greater scale is being invested in and developed in three key areas: volume, scope and agility. All aim at encouraging consumers to interact with the organization,” Mr. Rowsell-Jones explained. “For example, increasing the scope means providing a variety of digital services and actions to the consumer. In general, the greater the variety of interactions that are available via digital channels, the more engaged a consumer becomes and the lower the costs to serve them are.”
Steady IT Budgets
The transformation toward digital business is supported by steady IT budget growth. Globally, CIOs expect their IT budgets to grow by 2.9 percent in 2019. This is only slightly less than the 2018 average growth rate of 3 percent. A look at the regional differences shows that the regions are moving closer together: The leader in budget growth is once again Asia/Pacific with an expected growth of 3.5 percent. However, this is a significant cut from the 5.1 percent projected budget increase in 2018. EMEA (3.3 percent) and North America (2.4 percent) both project an increase, while Latin America trails behind with projected growth of 2 percent in 2019, down from 2.8 percent this year.
“CIOs should use their financial resources to make 2019 a transformative year for their businesses. Stay active in the transformation discussions and invest time, money and human resources to remove any barriers to change. Enterprises that fall behind in digital business now will have to deal with a serious competitive disadvantage in the future,” said Mr. Rowsell-Jones.
AI and Cybersecurity Shape the CIO Technology Agenda
Disruptive emerging technologies will play a major role in reshaping business models as they change the economics of all organizations. Gartner asked CIOs and IT leaders which technologies they expect to be most disruptive. Artificial intelligence (AI) was by far the most mentioned technology and takes the spot as the top game-changer technology away from data and analytics, which is now occupying second place. Although this year’s question was asked slightly differently, the jump to first place is enormous: even the top performers went from only 7 percent to 40 percent.
With regard to implementation, when asked about their organization’s plans for a range of digital technologies and trends, 37 percent responded that they had already deployed AI technology or that deployment was in short-term planning. In fact, AI comes in second, behind cybersecurity (88 percent).
“On the surface this looks revolutionary. However, this bump in adoption rate of AI may indicate irrational exuberance instead,” said Mr. Rowsell-Jones. “While CIOs can’t afford to ignore this class of technologies, they should retain a sense of proportion. This latest batch of AI tools is yet to go through its trough of disillusionment.”
The strong focus on cybersecurity shows the necessity of creating a secure base for digital business, one that shields the organization and its clients. The survey indicates that in most enterprises the CIO still owns responsibility for cybersecurity. However, the IT organization alone can no longer provide cybersecurity.
The rise of social engineering attacks, such as phishing, requires a broader behavioral change among all employees. In 24 percent of the digitally top-performing organizations, it is the boards of directors that are accountable for cybersecurity, rather than the CIO alone. Nevertheless, to improve security against cyberthreats, CIOs in all organizations are combining measures to harden information-processing assets with efforts to influence the people who use technology.
“Last year, I said that CIOs must start scaling their digital business. They excelled,” said Mr. Rowsell-Jones. “This year, they have to take it one step further and put their growing digital business on a stable and secure base. Success in the third era of enterprise IT hinges on a sound strategy that combines new, disruptive technologies with a rebalancing of existing investments.”
Will HCI change how we design and build core IT systems?
The way we build data centre systems is always in a state of flux. In theory, it is dictated by the operational needs of the business, the way IT seeks to architect its capabilities, and the developments brought to market by IT vendors. In practice though, data centre evolution for most organisations can be as much a matter of Darwinian evolution as it is guided by carefully planned long-term designs.
Recent research by Freeform Dynamics shows that one thing most data centre managers can probably agree on is, “If I wanted to get there, I wouldn’t start from here!” In other words, if we could build everything today as a greenfield site, rather than having to move forwards from where we are, our data centres would be very, very different (Figure 1).
As the chart shows, the data centre professionals we surveyed cited various reasons why they would like to design things differently. Many of them are, quite simply, direct consequences of the fact that most enterprise data centres are built in spurts to meet succeeding business requirements, as and when budgets for IT investment are made available. Indeed, web-scale operators are pretty much the only ones who can avoid constantly being in ‘reactive mode’.
This inevitably leads to computer rooms with a wide range of equipment, much of which was neither procured nor installed with the big picture in mind. This results in disjoints, inconsistencies and convoluted dependencies which make every new change a challenge, and quality of service difficult to regulate or vary. And as the last result in the chart shows, this can significantly and detrimentally affect how business stakeholders see IT’s effectiveness at meeting their needs speedily.
Is HCI the answer?
Two of the points in the research chart highlight challenges that HCI (hyperconverged infrastructure) was developed to address. Most notably, these are the current shortage of automation, which leaves IT professionals dealing with low-value routine drudgery, and the excessive drain on time that results from the complexity of IT infrastructure. But do data centre professionals think HCI can help? (Figure 2)
While many believe that HCI could help to simplify and automate their IT infrastructures, at least in some areas, significant numbers think that while it has potential, it is not yet mature enough for them to deploy. And around one in five think it cannot deliver on the promises made in its name, or simply don’t think it is relevant to them. This is hardly surprising: despite the many promises made by the marketing departments of IT vendors over the years, no one has yet invented a magic solution that can answer every IT complexity challenge.
Despite these concerns, HCI has already made an impact, with around one in seven respondents saying that its usage inside their organisations is already well established (Figure 3).
Beyond these early adopters, 40 percent of respondents are in the early stages of deployment or evaluating HCI, and another 14 percent have it on their agenda. The fact that almost a quarter have no activity or plans for HCI, with another one in ten simply unsure about usage, indicates that the industry may still have an education task ahead.
There is no doubt about the desire to simplify data centre operations and make greater use of automation, both to improve service and minimise risk, and to allow IT professionals to focus less on infrastructure management and more on delivering additional business value. So if HCI isn’t the big answer to the question of IT, the universe and everything, how can we move forward? The answers are ones that are easy to overlook in our high-tech world: people, process and skills (Figure 4).
The results show that building and running the modern data centre requires IT professionals to extend their skills. Clearly, new skills will be needed to administer the new technologies as they come along, and HCI is no exception. But the combination of processing, storage and networking embedded in HCI systems is also reflected in the fact that the vast majority of respondents also see a need for IT specialists to broaden their skills to cover other disciplines. This requirement or expectation is not new, and has turned up regularly over the years in the surveys that Freeform Dynamics undertakes.
But the need for IT people to add to their skills also spreads into softer, people related areas. In particular, the survey shows how important it is now for IT teams to collaborate effectively across disciplines, but even more essential is the need for IT people to be able to communicate effectively with business stakeholders. Only by enabling regular interaction with business users can data centre professionals ensure not only that IT delivers what users need, but also that they can show stakeholders how IT can help them change as well.
The Bottom Line
The effective marketing of new IT solutions to business users gives IT professionals a chance to get onto the front foot in the relationship between IT and the business. Now IT has the potential to enable the business not just to work cost-effectively, but to work differently and expand into new areas. Remodelling how we build and run data centres gives IT professionals an opportunity to really show not only the business importance that IT delivers today, but also the potential value it can add tomorrow and in the years to come.
Now in its eighth year, the SVC Awards reward products, projects and services, and celebrate excellence in the cloud, storage and digitalisation sectors. The SVC Awards recognise the achievements of end-users, channel partners and vendors alike.
Following assessment and validation by Angel Business Communications’ SVC awards panel, the shortlist for the 23 categories in this year’s SVC Awards is viewable online, and voting is a simple online process. Votes are coming in thick and fast, and voting remains open until 9 November, so there is still time to make your vote count and express your opinion on the companies that you believe deserve recognition in the SVC arena.
Welcoming both the quantity and quality of the 2018 SVC Awards shortlist entries, Peter Davies, IT Publishing & Events at Angel, said: “I’m delighted that we have this annual opportunity to recognise the innovation and success of a significant part of the IT community. The number of entries, and the quality of the projects, products and people they represent, demonstrate that the SVC Awards continue to go from strength to strength and fulfil an important role in highlighting and recognising much of the great work that goes on in the industry.”
The winners will be announced at a gala ceremony on 22 November at London’s Millennium Gloucester Hotel. Limited sponsorship opportunities and tables remain available; as sales are gathering pace, the event is expected to sell out.
Join over 200 guests for an evening rewarding excellence within the industry, with comedy, entertainment, a band and plenty of networking opportunities among the major players and clients within the storage and digitalisation industry.
Whether you are shortlisted for an award or just fancy a great night networking with potential business partners, book your sponsorship package or tables today whilst they are available - and don’t forget to vote for your winners!
All voting takes place online and voting rules apply. Make sure you place your votes by 9 November, when voting closes. Visit: www.svcawards.com
SVC Awards 2018 Finalists
Infrastructure Project of the Year
Curvature supporting Carl Christensen
Park Place Tech supporting Cincinnati Bell
Scale Computing supporting Genting Casinos
Silverpeak supporting Mazars
Storage Project of the Year
Altaro supporting Beeks Financial Cloud
Datacore supporting Excel Exhibition Centre
Excelero supporting teuto.net
Tarmin supporting a specialist financial organisation
IT Security Project of the Year
Advatek supporting a global construction company
Mobliciti supporting a legal firm
One Identity supporting B. Braun
Risk IQ supporting Standard Bank Group
Digitalisation Project of the Year
Amido supporting a radio station
Cleverbridge supporting SmartBear
MapR supporting Edwards Ltd
Node 4 supporting Forest Holidays
Runecast supporting NCC Media
Cloud Project of the Year
A10 Networks supporting SoftBank
Adaptive Insights supporting University of Edinburgh Accommodation, Catering & Events team
Aruba supporting Nexive
Node 4 supporting Forest Holidays
Qudini supporting NatWest
Six Degrees supporting AdvantiGas
Snaplogic supporting SIG
Orchestration/Automation Innovation of the Year
Anuta Networks - ATOM
Cantemo - iconik
Cleverbridge – Subscription Commerce Platform
Continuity Software - AvailabilityGuard
Opengear - Lighthouse 5
Park Place Technologies - ParkView Pro-Active Fault Detection
Tufin – Orchestration Suite
IT Security Innovation of the Year
Arosoft – IT SAFE
Barracuda – Managed PhishLine
Extra Hop - Reveal(x)
Ixia – Vision ONE
METCloud – Cyber Secure Hybrid Cloud Service
MobileIron – Threat Defense
Perception Cyber Security – Perception Solution
SaltDNA – SaltDNA Solution
Hyper-convergence Innovation of the Year
EuroNAS - eEVOS Plus
Exagrid - EX63000E
NetApp – HCI Solution
Pivot 3 – Cloud Edition
Schneider Electric - HyperPod
Backup and Recovery Innovation of the Year
Acronis – Backup Cloud
Cloudberry – Backup and Managed Backup
Datto - SIRIS
Exagrid - EX63000E
N2W Software – Cloud Protection Manager
Paragon Software – Hard Disk Manager
Redstor - InstantData
StorageCraft - ShadowXafe
Tarmin - GridBank Data Management Platform
Unitrends – Gen 8
Zerto – IT Resilience Platform
Cloud Storage Innovation of the Year
Acronis - Backup Cloud
Cloudberry - Backup and Managed Backup
Datto - SIRIS
Unitrends – Unitrends Cloud
Zerto - IT Resilience Platform
SSD/Flash Storage Innovation of the Year
Netapp - ONTAP 9.4
NGD Systems - Catalina 2 NVMe SSD with In-Situ Computational Storage
Pure Storage - FlashArray//X
Storage Management Innovation of the Year
Object Matrix - MatrixStore
Open E - JovianDSS
Tarmin - Gridbank
Virtual Instruments - WorkloadWisdom
SaaS Innovation of the Year
Adaptive Insights – Business Planning Cloud
Highlight – See Clearly solution
IPC - Unigy 360
Ipswitch - MOVEit Cloud Europe
Ixia - CloudLens Visibility Platform
Qudini - Appointment Booking Software
RA Information Systems - ezytreev
SnapLogic - Enterprise Integration Cloud
PaaS Innovation of the Year
6point6 Cloud Gateway – Cloud Gateway
METCloud - Cyber Secure Cloud Service
Premium Soft Cybertech - Navicat Premium
Excellence in Service Award
Eze Castle Integration
Park Place Technologies
Vendor Channel Program of the Year
Channel Business of the Year
Eze Castle Integration
RA Information Systems
Automation Company of the Year
Park Place Technology
Cloud Company of the Year
Eze Castle Integration
Hyve Managed Services
Co-location / Hosting Provider of the Year
Hyve Managed Services
Volta Data Centres
Hyper-convergence Company of the Year
IT Security Company of the Year
Eze Castle Integration
Storage Company of the Year
Leading Vendors, Solution and Service Providers to meet in Amsterdam in May 2019
IT Europa and Angel Business Communications will jointly stage the third annual European Managed Services & Hosting Summit on 23 May 2019. The event will bring leading hardware and software vendors, hosting providers, telecommunications companies, mobile operators and web services providers involved in managed services and hosting together with Managed Service Providers (MSPs), as well as resellers, integrators and service providers migrating to, or developing, their own managed services portfolios and sales of hosted solutions.
According to Gartner research quoted at the recent UK Managed Services & Hosting Summit the Managed Services sector continues to grow at about 35%, but the impact of new technologies and changing buyer behaviour in the face of evolving requirements is creating challenges for both vendors and MSPs alike. Under the theme of Creating Value with Managed Services, the European Managed Services and Hosting Summit 2019 will provide insights into how the market is changing and what it will take for MSPs to succeed as it evolves. Specific areas addressed will include the emergence and impact of Artificial Intelligence (AI) and the Internet of Things (IoT), the growing importance of security and compliance, trends in service delivery, and how to create value both within an MSP and for its customers.
The European Managed Services & Hosting Summit 2019 is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships aimed at supporting sales. Building on the success of previous managed services and hosting events in London and Amsterdam, the summit will feature a high-level conference programme exploring the impact of new business models and the changing role of information technology within modern businesses. These conference sessions will be augmented by both business and technology breakout tracks within which leading vendors and service providers will provide further insight into the opportunities for channel organisations looking to expand their managed services portfolios.
Throughout the day there will also be many opportunities for both sponsors and delegates to meet fellow participants within the Summit exhibition and networking area.
“Advances in technology, economic pressures and evolving business models are combining to fundamentally change the role of IT within businesses and the role of MSPs and other channels in delivering it,” says Alan Norman, Managing Director of IT Europa. “This is creating huge opportunities, but to compete successfully in creating value for their customers, MSPs will need to adapt and evolve to ride the latest waves of technological advance.”
“The European Managed Services & Hosting Summit is the leading European managed services event for the channel and provides a unique opportunity for vendors, VARs, integrators and service providers to come together to address the issues and opportunities arising from the surge in customer demand for managed services and hosted delivery models,” says Bill Dunlop Uprichard, Executive Chairman at Angel Business Communications.
The European Managed Services and Hosting Summit 2019 will take place at the Novotel Amsterdam City Hotel, on 23 May 2019. MSPs, resellers and integrators wishing to attend the convention and vendors, distributors or service providers interested in sponsorship opportunities can find further information at: www.mshsummit.com/amsterdam
In a change to the original topic, this month we are focusing on the emerging liquid cooling market. This follows the formation of the Liquid Alliance, which recently met at the DCA Members Update Meeting at Imperial College London in September. The group comprises a growing number of suppliers developing a range of liquid-cooled solutions designed to cater for increasingly power-hungry servers.
To date, air-based cooling systems continue to dominate the data centre market, and for the majority of applications air will remain a perfectly acceptable medium for cooling your host IT kit. However, if you are looking to cool 25 kW or more in a single rack, you will need to consider alternatives to air. With water able to carry away heat some 3,000-4,000 times more effectively than air by volume, liquid cooling offers a compelling next step for those looking to really push the rack density and server performance envelope.
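As a rough sanity check on that ratio, the sketch below compares the volumetric flow of air and water needed to carry away a 25 kW rack load. The fluid properties are round textbook values, and the 10 °C coolant temperature rise is an assumed illustrative figure, not vendor data:

```python
# Back-of-envelope comparison of air vs water as a coolant for a 25 kW rack.
# Property values are round textbook numbers (assumptions, not vendor data).

AIR_DENSITY = 1.2        # kg/m^3, at roughly room conditions
AIR_CP = 1005.0          # J/(kg*K)
WATER_DENSITY = 998.0    # kg/m^3
WATER_CP = 4186.0        # J/(kg*K)

def volumetric_flow(load_w: float, density: float, cp: float, delta_t: float) -> float:
    """Volumetric flow (m^3/s) needed to carry `load_w` watts at a `delta_t` K rise."""
    return load_w / (density * cp * delta_t)

load = 25_000.0   # W, the 25 kW rack mentioned above
dt = 10.0         # K temperature rise across the rack (assumed)

air_flow = volumetric_flow(load, AIR_DENSITY, AIR_CP, dt)
water_flow = volumetric_flow(load, WATER_DENSITY, WATER_CP, dt)

print(f"Air:   {air_flow:.3f} m^3/s")        # roughly 2 m^3/s of air
print(f"Water: {water_flow * 1000:.2f} L/s") # well under a litre per second
print(f"Ratio: {air_flow / water_flow:.0f}x more air volume needed")
```

The ratio works out in the mid-3,000s, consistent with the "3-4 thousand times" figure quoted above.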
Interestingly, liquid cooling also has the potential to fix a number of additional issues facing data centre operators. Increased agility in response to changing business needs is top of the list, with reduced operating and utility costs a close second. Although it depends on the liquid-cooled solution deployed, in general terms far less infrastructure and power are needed to keep servers at the right operating temperature when using liquid.
With increased legislation designed to reduce our environmental impact, a liquid-cooled system makes it far easier to harness waste heat and repurpose it. Heat from air-based systems tends to be considered too low grade for reuse, so the air is simply exhausted into the atmosphere. Clearly, not every data centre operator using liquid could, or would want to, tap into their local district heating system, for the simple reason that at present there aren’t that many of them and they might not pass anywhere near the average data centre location - well, not yet at least! Data centres could still utilise this waste heat in other ways, such as locally on premise.
Although PUE (Power Usage Effectiveness) as a metric has gained something of a bad rap, due to certain operators fudging the figure in pursuit of competitive advantage, with a liquid-cooled system seemingly outlandish PUE claims in the range of 1.1 can be legitimately made and, more importantly, proven. The benefits are not limited to the data centre itself but extend to the servers: the cooling fans normally found in a traditional off-the-shelf rack-based server become redundant and can be removed, and this alone can reduce the power consumption of a server by up to 15%.
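The PUE arithmetic behind such a claim can be illustrated with hypothetical figures. Only the up-to-15% fan saving comes from the text; the 1,000 kW IT load and overhead numbers below are invented for illustration:

```python
# Illustrative PUE arithmetic (hypothetical figures, not measured data).
# PUE = total facility power / IT equipment power.

def pue(it_power_kw: float, facility_overhead_kw: float) -> float:
    """PUE for a given IT load and non-IT (cooling, UPS, etc.) overhead."""
    return (it_power_kw + facility_overhead_kw) / it_power_kw

# A conventional air-cooled hall: 1000 kW of IT, 500 kW of cooling/overhead.
air_cooled = pue(1000.0, 500.0)

# Liquid cooling: removing server fans can cut server power by up to 15%,
# and the cooling overhead shrinks dramatically (assumed figure below).
it_without_fans = 1000.0 * 0.85   # fans removed from the IT load
liquid_cooled = pue(it_without_fans, 85.0)

print(f"Air-cooled PUE:    {air_cooled:.2f}")    # 1.50
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}") # 1.10
```

Note that PUE only counts facility overhead, so the fan saving shows up as a lower absolute power bill rather than in the ratio itself; the ratio improvement comes from the shrunken cooling plant.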
So, I have discussed why I feel air-based cooling systems are likely to be around for a long time to come, and touched on the merits of considering liquid cooling as an alternative for applications needing 25 kW and above. But what’s the downside of using liquid? Initial investment cost could be one issue, and the fear of introducing liquid into your data centre could be another - the liquid used could raise potential insurance concerns that would need to be looked into.
There are already many forms of liquid cooling solutions on the market, from rear door coolers and direct to chip, right through to cassette and single/double immersed systems. I appreciate that some of these terms may well be new to you and it is for that very reason that The DCA has chosen to run this special edition of the Journal. Equally the Liquid Alliance was formed to help educate and inform the market and The DCA is pleased to support them in this endeavour.
I would like to wrap up with a quick thank you to all those who supported The DCA SIG workshops at Imperial College on the 25th September and all those who attended the Annual Members Update Meeting which followed. Thank you again to the keynote speakers who represented RISE, Lloyds Register, Policy Connect and STEM Learning.
Next month’s DCA Journal is the Winter Edition and the final publication of the year. We will be reviewing 2018 and looking at predictions for the year ahead, so please start tapping those keys if you wish to submit an article. All submissions go to email@example.com and need to arrive by 15 November; full details can be found on the DCA website https://dca-global.org/publications
By: Maikel Bouricius, Marketing Director Asperitas and lead AsperitasEI (Energy Innovation)
Recently we have seen a new wave of attention towards liquid-cooled datacentre solutions, for their ability to improve the energy efficiency of datacentres and to facilitate high-performance CPUs and GPUs at extreme density. Well deserved, if you ask me: Asperitas unveiled a 1U server configuration ready for our Immersed Computing® solution with 12 GPUs, resulting in 288 GPUs housed in one AIC24 solution - on a single rack footprint, but only half the height.
Part of the attention is driven by initiatives taken by the Open Compute Project, a now well-established community of internet players, on a topic labelled “advanced cooling solutions”. The workgroup around this topic is exploring and developing standardisation across the wider range of liquid-cooled solutions. Asperitas is leading the workstream on immersion cooling, and we will be working with other solution providers in this field, server OEMs and end users to jointly bring the maturity of the immersion-cooling ecosystem to another level. Besides the energy-efficiency and density benefits mentioned earlier, we see another topic gaining traction in this workgroup and beyond: energy-reuse potential, which is on the shortlist of large-scale datacentre users and operators. Liquid-cooled solutions can bring a greater volume, and greater value, of heat outside the datacentre in a form other stakeholders are happy to work with: water. Immersed Computing® can deliver hot water of 50 degrees Celsius to district heating grids or directly to end users of heat. Internally we have already run experiments with output temperatures of up to 65 degrees, and we are confident we can reach higher temperatures when we innovate together with OEMs and chip manufacturers. Communities like the Open Compute Project’s workgroup and the Liquid Alliance - initiated by the Data Centre Alliance, research institutes and liquid-solution providers including Asperitas - are a great place to do so.
Recently we participated as a partner in the Datacentre Reuse & Innovation Summit organised by the Dutch Datacentre Association. One of the most interesting talks was on the story of Stockholm as a datacentre hub in development, and the role of datacentre heat reuse in the strategy of Stockholm Data Parks. Their message was clear: datacentre heat is valuable, and we are willing to pay for it. Heat is in demand in most European countries, and many have set climate goals that lead directly to the electrification of heat. We see this happening in the Netherlands, where in theory there is enough datacentre heat for 2 million households. Innovation within the datacentre - in both technology and business models - and outside it is needed to connect supply and demand at large scale. Immersed Computing® can do so, and we are currently involved in large-scale datacentre projects where heat utilisation has been part of the core design of both the datacentres and the surrounding urban area developments.
The mission of the Asperitas team is to enable sustainable datacentres to facilitate high performance cloud and compute anywhere they are needed. Both the digital applications and the heat demand could determine where datacentres are needed soon.
So far, the datacentre industry’s focus has been on stability, resilience and its mission-critical role, which, I think we will all agree, should be the foundation. If we can bring in high-quality cleantech innovation, though, we will discover new opportunities and strengthen that foundation. Organisations like the Data Centre Alliance could play an important role in this. Corporate enterprises know there are only a few ways to get more innovation into an organisation: invest heavily in R&D, acquire innovative solutions, or bring in the brightest minds. The last option has been a topic in the datacentre industry for many years now, and the question came up again during the last annual member meeting: how can we get the next generation of engineers and innovators interested in the datacentre industry?
I truly believe we can get the brightest young minds into the industry when we show the role datacentres can play in both the digital and energy infrastructure industries, and the impact innovation can have.
With Asperitas we had the honour of participating in the New Energy Challenge organised by Shell - a week-long event alongside ten other cleantech companies with innovative solutions that could have an impact on the energy transition. The week opened my eyes once again: the quality of the solutions and the teams was extremely high, and I realised that as an industry we have failed to be attractive to innovators like these. When we manage to make room for innovation, we will see new solutions for this industry that we cannot yet imagine.
To finish with a call to action: Asperitas is expanding rapidly and is working on some exciting new developments around Immersed Computing®, including liquid-optimised server configurations for applications like Artificial Intelligence and other high-performance compute and cloud platforms. We have room for the brightest engineers we can find, and will make room for innovative ideas and solutions.
Get in touch!
By Maciej Szadkowski, CTO, DCX
Data centre direct liquid cooling systems are no longer a glimpse into the future; they are now a reality. This article presents the steps required to make the change to liquid cooling.
There are many articles already praising liquid cooling technology, although not a single one actually provides informed advice on how to implement it, or on why data centre operators should start working on direct liquid cooling adoption now.
Implementing liquid cooling is a cost-effective and simple response to energy-cost, climate-change and regulatory challenges. In addition, most liquid cooling systems can be implemented without disrupting operations. I will advise on how to choose an architecture that will support a specific heat-rejection scenario, what is required, and how to conduct a smooth transition from air cooling to liquid cooling. Let’s find out what must be done.
1. Change your mindset
None of us likes change. We like to keep things the same, accept industry standards and preferences without a second thought, and avoid unnecessary risks. To move on from air cooling, effort must be made. Liquid cooling has evolved since being introduced by IBM in the late sixties. Current DLC/ILC vendors started around 2005 and in that time have produced thousands of cooling components. We can assume that liquid cooling systems are now both proven and mature. The biggest risk related to liquid cooling is that you might not evaluate it.
Asetek server-level components, including processor and memory coolers, along with a rack-level CDU device connected by dry-break quick couplings.
2. Check the facts and standards
The Uptime Institute 2018 Data Centre Survey results showed that 14% of datacentres have already implemented liquid cooling solutions. There will be a data centre near you that has implemented liquid cooling, or is performing a proof-of-concept implementation. From my experience, most cloud providers like to keep their liquid cooling systems secret, as they provide a competitive advantage.
In terms of standards, in 2011 ASHRAE introduced Thermal Guidelines for Liquid-Cooled Data-Processing Environments, followed by the Liquid Cooling Guidelines for Datacom Equipment Centres, 2nd Edition, in 2014. Thermal Guidelines for Data Processing Environments, 3rd Edition, included insight into other considerations for liquid cooling, covering condensation, operation, water-flow rate, pressure, velocity and quality, as well as information on interface connections and infrastructure heat-rejection devices. More recently, LCL.GOV and OCP have started liquid cooling standardisation efforts for wider adoption. The 2011-2014 standards have evolved, and there are now a couple of vendors that deliver a full portfolio of DLC/ILC systems.
3. Do your own research
The problem is that not many data centre infrastructure integrators know anything about liquid cooling. You will also not hear about liquid cooling from data centre HVAC people. It simply hurts their business.
You have to do your own research and actively look for available solutions. If you want to consider more expensive proprietary systems, Dell, Lenovo, Fujitsu and some other players like Huawei already have direct-to-chip liquid cooling solutions (and, in Fujitsu’s case, an immersion system as well). These are tied to specific server models, mostly HPC platforms, which is a constraint. There are also many direct-to-chip and immersion liquid cooling vendors with complementary solutions that may be applied to your servers, or can accommodate most of the servers on the market.
Liquid distribution rack-level component in a test installation for an experimental liquid cooling system, with medical-grade quick couplings. Note the colour-coded flexible tubing.
4. Evaluate hot coolant, direct liquid cooling systems only
There are established far-from-heat-source liquid cooling systems (CRAH, overhead, InRow™, enclosed cabinet, rear door heat exchanger), but when the heat is transferred directly from the source, the facility supply liquid temperature may be “warm water” (ASHRAE Class W4: 35°C to 45°C) or even “hot water” (Class W5: above 45°C). With a delta-T of 10°C, the outlet temperature may exceed 55°C, which facilitates heat reuse for building or community heating.
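A minimal sketch of the water-side arithmetic behind those figures follows. The W4 supply temperature and the 10 °C delta-T are from the text; the 50 kW load is a hypothetical example chosen only to show the flow rates involved:

```python
# Facility-water arithmetic for a hot-water DLC loop (illustrative figures).

WATER_CP = 4186.0      # J/(kg*K), specific heat of water

def outlet_temp(supply_c: float, delta_t: float) -> float:
    """Loop outlet temperature: supply plus the temperature rise across the IT."""
    return supply_c + delta_t

def mass_flow_kg_s(load_w: float, delta_t: float) -> float:
    """Water mass flow needed to absorb `load_w` watts at the given rise."""
    return load_w / (WATER_CP * delta_t)

supply = 45.0          # deg C, top of ASHRAE class W4 (from the text)
dt = 10.0              # K delta-T across the loop (from the text)
load = 50_000.0        # W, hypothetical DLC row (assumed)

print(f"Outlet: {outlet_temp(supply, dt):.0f} C")      # 55 C, reusable-grade heat
print(f"Flow:   {mass_flow_kg_s(load, dt):.2f} kg/s")  # about 1.2 kg/s
```

A higher delta-T halves the flow for the same load but raises the outlet temperature further, which is exactly the trade-off that makes hot-water loops attractive for heat reuse.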
Don’t confuse chilled-water cooling with direct liquid cooling. The goal is to do away with mechanical cooling completely and use a liquid/free-cooling combination for the whole year. We want to stop using chilled water and extract heat directly from components, not from the air inside the rack or the data centre space.
5. Assess and measure the benefits
The benefits of liquid cooling are widely known. But what do they mean in reality?
Well, let me provide a few examples:
6. Know required Direct Chip Cooling or DLC components
Sometimes less is more, and that’s the case with direct-to-chip liquid cooling: everything becomes simpler. Direct-to-chip liquid cooling (DLC) can usually be installed within any standard rack and fits nicely onto any server model. This allows for retrofitting of existing data centre infrastructure without a major overhaul.
7. Know required Immersion Liquid Cooling (ILC) components
Immersion liquid cooling completely changes current datacentre infrastructure. Existing standards are not sacrosanct, and most people don’t even know why 19” racks are the standard, why servers look like pizza boxes, what the purpose of rack doors is, why raised floors were used and, finally, why 90% of racks are pitch black. We just accept these things without asking questions. The Open Compute Project questions everything, and we can expect some changes soon. Immersion cooling will be coming into data centres. Many experts believe it will be the ultimate liquid cooling system, and that DLC is just one step in the energy-efficiency competition. I’m convinced that full immersion cooling with dielectric fluids is the future of thermal management, not only for ICT components but also for batteries and electrical components.
See the full article for more details on Immersion Liquid Cooling (ILC).
8. Choose the technology and supplier
Looking at the two most recognised direct liquid cooling approaches, you may find at least 15 vendors to choose from.
My personal advice: look for an open system, not a custom “boutique solution” that requires special servers to operate. Look for flexibility - you should be able to operate the same DLC loop using different types of servers.
The best way to start is simply to ask these vendors about their offerings, features, competitive advantages, experience, available options and prices. Simple as that.
9. Assess the risks and shortcomings - keep it real
There are many myths about liquid cooling, usually written by technology journalists with no experience of the technology. Let’s keep it real about the risks and limitations the cooling technology may have, and how to mitigate them. It’s not hard to avoid critical issues. Octave Klaba, CEO of OVH (with over 350,000 liquid-cooled servers in operation), reports 3 to 4 leaks per year, impacting a few servers each time.
10. Finally, the secret of getting ahead is getting started
Contact the vendors and check the available solutions. Do a POC implementation. You don’t want to be left behind. As the saying often attributed to Charles Darwin goes: “It is not the strongest of the species that survives, nor the most intelligent. It is the one that is most adaptable to change.” The change is coming!
Green Revolution Cooling immersion baths used for an HPC cluster.
Peter Poulin, CEO, GRC (Green Revolution Cooling)
This excerpt is taken from GRC’s case study, “United States Air Force: Containerized Data Center”; to read the full case study, click here.
For these organizations, a turnkey containerized data center using immersion cooling technology and designed inside an ISO standard 40-foot container can meet their needs.
Typical concerns organizations have when considering non-traditional deployments like a pre-fabricated containerized data center:
A well-crafted turnkey containerized data center can be deployed in less than 24 hours, functions in a wide range of weather conditions and physical environments, is designed to require minimal support equipment or on-site maintenance and is built with monitoring and controls that can be managed remotely.
With a standardized design and manufacturing process, these solutions enable organizations to build infrastructure quickly and cost-effectively, while addressing their mission critical needs to house and manage their data wherever needed.
A government organization needed to deploy containerized data centers in two remote locations. They had several specific requirements:
Container A was designed with a cooling tower. Container B was designed with a dry cooler, which provided a greater low-temperature operating range. A hybrid cooler design was completed for maximum range.
Container A was purpose built with servers designed without spacing for fans or air flow shrouds, allowing denser configurations. An ISO standard 40-foot container with eight racks of servers designed for immersion (SDI) computes the equivalent of 153 racks of air-cooled servers.
Container B was a simpler system than container A. Installation was reduced by 90% and took a single person less than 20 work hours, not including connecting the electrical service to the container or placing the cooler on top. This dry cooler also has no water usage or chiller requirements, eliminating the need for drainage and reducing the number of utility connections.
For both containers, the organization saw significant performance and energy savings:
Containerized data centers like those in the case study are designed with a custom floor so that most of the supporting infrastructure is underneath the walkway. Utility connections are built as plug-and-play – with all electrical infrastructure downstream of the exterior cutoff prewired and water and drain connections (when necessary) through the floor for security and freeze prevention.
Prefabricated turnkey containerized data centers are the most automated, efficient, and easy to transport modular data centers. They also cost less to deploy and operate: The standardized design reduces capital and installation costs, and the immersion cooling and densely packed and more efficient servers keep power costs down. Because they are designed without external insulation and with reduced infrastructure on the roof, these units can be transported via truck or train like any standard ISO 40-foot container, keeping shipping costs lower, as well.
For organizations that require flexibility in deploying their data centers, whether as stand-alone systems or adjunct to existing facilities, and that require environmental resiliency and easy operation to deal with remote or harsh operating conditions, containerized data centers using immersion cooling are the state-of-the-art solution.
GRC’s (Green Revolution Cooling) patented liquid immersion cooling technology breaks through the limitations of other methods to deliver a dramatic improvement in data center performance and economics.
Immersing servers in liquid has been shown to improve rack density, cooling capacity, and data center design and location options. Our proven, highly flexible and simple design makes it easy to quickly build and run a super-efficient operation and respond rapidly to business demands, without over-provisioning.
Green Revolution Cooling (GRC) was founded in the USA in 2009 with the vision to revolutionize the way data centers are designed, built and operated.
To find out more visit: www.grcooling.com
Jon Summers, Scientific Leader, RISE North, Luleå, SWEDEN
One of the main hardware components of the digital age is the digital processing unit (DPU); whether central, graphical or otherwise, it consumes a major portion of the energy used to operate data centres. The ratio of the surface area of a DPU to the area of the data hall (usually a room within a data centre) in which DPUs are housed can be as high as 1 to 20 million. The data centre draws electrical current from the power grid, and more than 50% of this current terminates at a distributed array of DPUs, where up to 200 amperes are pumped into an area of approximately 200 square millimetres. Since there are no moving parts inside the DPU to do mechanical work, virtually all of the electrical power delivered to that 200 square millimetre area is converted to thermal power.
Thermal management of data centres has therefore evolved into a broad engineering discipline, involving equipment for thermal connections and transfers that carry the thermal power away from the DPUs so that they can continue to perform their tasks without overheating. The transferred thermal power is rejected to the outside environment. The traditional thermal connections used today are heat sinks and heat exchangers, with thermal transport achieved by forced convection of air. However, there are limits to this approach of moving heat away from the DPUs, due to the low thermal capacity of air and the upper limits of practical airflow rates.
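The resulting heat flux can be made explicit with one line of arithmetic. The 200 A current and 200 mm² die area are from the text; the roughly 1 V core voltage is an assumed round figure for a modern processor:

```python
# DPU heat-flux arithmetic: P = V * I, flux = P / die area.
# Current and die area are from the text; core voltage is an assumption.

current_a = 200.0        # A, current pumped into the DPU
core_voltage_v = 1.0     # V, typical modern core voltage (assumed)
die_area_mm2 = 200.0     # mm^2

power_w = current_a * core_voltage_v       # essentially all converted to heat
area_m2 = die_area_mm2 * 1e-6              # mm^2 -> m^2
heat_flux_w_m2 = power_w / area_m2

print(f"Dissipated power: {power_w:.0f} W")
print(f"Heat flux: {heat_flux_w_m2 / 1e6:.1f} MW/m^2")   # 1.0 MW/m^2
```

This simple estimate lands at 1 MW per square metre, matching the order of magnitude quoted for modern DPUs below.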
Figure: heat fluxes by year for different technologies (bipolar and CMOS), based on Roger Schmidt, Liquid Cooling is Back, 2005.
Historically, DPU heat fluxes have been increasing and are expected to continue to rise, as shown in the figure above, with values greater than 1 MW per square metre for some applications. At these heat fluxes it is not practical to remove the heat entirely with air, so there is a growing spectrum of heat-removal technologies that employ liquids. One can argue that data centres have long made use of liquids to remove heat from the data halls via Computer Room Air Handling/Conditioning (CRAH/CRAC) units, where chilled water or a refrigerant, respectively, transports thermal energy out of the data hall. However, the term liquid cooling refers to the use of liquids that come near the information technology racks. This requires the data centre to distribute liquids into the data hall, perhaps in the same fashion as electrical power is distributed using power distribution units (PDUs). Cooling distribution units (CDUs) would be required, the number of which would depend on the amount of thermal energy to be collected and the volumetric flow rates of the liquid coolants in the loops. There is an analogous relationship between electrical current (the rate of flow of electrons) and liquid flow rate (the rate of flow of thermal energy), with electrical and thermal power given by multiplying the flows by the voltage and temperature differences respectively.
With the above definition of liquid cooling, there are naturally two approaches: one where the liquid does not penetrate the air-cooled IT but does penetrate the IT racks, commonly called Indirect Liquid Cooling (ILC); and one in which liquids penetrate the IT and collect thermal energy directly from components inside the IT systems, referred to as Direct Liquid Cooling (DLC). An example of the former is the rear door heat exchanger (RDHx), whereas the latter could be direct-to-DPU cold plates with heat sinks in the liquid path.
These different approaches are shown in the schematic below.
Figure: from left to right depicting liquid cooling at (a) the edge of the data centre, (b) edge of the rack (ILC), (c) into the IT space (DLC) and (d) Total Liquid Cooling (TLC) = no requirement for airflow. Source: Y.Q. Chi, PhD Thesis, University of Leeds, UK
The case of Total Liquid Cooling is a special version of Direct Liquid Cooling in which all heat-bearing components within the IT systems are connected to the liquid cooling loop and there is zero requirement for air cooling; an example of this category is dielectric liquid immersion. High Performance Computing (HPC) solutions are already making use of ILC, DLC and TLC approaches to manage the higher heat fluxes, and the Advanced Cooling Solutions working group of the Open Compute Project is now using similar distinctions of liquid cooling.
As evidenced by HPC systems and the adoption of DLC by Google for its Tensor Processing Units (TPUs), the practice of transferring heat from the DPU using liquid cooling is well established, and it can use one of many approaches: cold plates with single-phase or two-phase liquid flows driven by natural or forced convection; impinging jets of fluids with different geometries of heat exchangers and heat spreaders; and different dielectric liquids providing both direct and indirect contact. These methods are all very capable of managing the kW-per-square-centimetre heat fluxes from the DPUs, and with heat fluxes set to grow and no anticipated technology shift in the DPU, there will be a rise in the adoption of liquid cooling. The numerous companies offering liquid-cooled solutions today have accumulated a wealth of operational understanding, so maintenance practices have matured and are well established. Liquid cooling has arrived, and data centres will have to consider how to integrate liquid distribution into the data halls, bringing liquid to the IT rack, in order to be future ready.
SNIA Emerald Specification is an established building block for regulatory agencies to define data centre storage energy policies.
By Wayne M. Adams, SNIA Chairman Emeritus, SNIA Green Storage Initiative.
Within the European Union, the annual energy consumption related to servers directly is expected to be 48 terawatt hours (TWh) in 2030, which increases to 75 TWh when the annual energy consumption related to infrastructure (e.g. cooling systems and uninterruptible power supply systems) is also included. The annual energy consumption of data storage products is expected to be 30 TWh in 2030, 47 TWh when infrastructure is also included. The preparatory study shows that use-phase energy consumption by servers and data storage products can be significantly reduced. (*See Footnote 1)
To enable national regulatory agencies around the world with data center storage energy efficiency metrics and policies, the Storage Networking Industry Association (SNIA) has been working diligently for more than a decade through its Green Storage Initiative (GSI) to establish the SNIA Emerald Power Efficiency Measurement Specification. The specification encompasses networked storage systems for block IO and file-system IO data communications, which represent the vast majority of storage systems deployed in data centers. The specification is comprehensive and vendor neutral, and has been in use since 2013. One of the next steps SNIA is taking is to seek International Organization for Standardization (ISO/IEC JTC 1) recognition.
Companies with data centers, understandably, have become more careful about the type of equipment they deploy. As well as better performance and added capacity, they are demanding lower power consumption. SNIA in its early industry analysis concluded that maximum energy consumption profiles used for power planning were not adequate to determine energy usage in operation on an average daily/weekly/monthly basis. This led SNIA to move beyond vendor-specific product spec sheets and energy calculators to determine which equipment models and specific configurations are optimal in terms of energy efficiency.
SNIA worked across industry with all the storage manufacturers to create energy metrics that allow IT planners to compare a range of possible solutions. An objective, metric-based approach enables planners to select the mode of storage usage and data protection that accomplishes business goals accompanied by understanding energy consumption trade-offs. In addition, it encourages vendors to produce more energy efficient products.
The SNIA Emerald Power Efficiency Measurement Specification details a rigorous test methodology, based on industry-proven tools, for measuring the power efficiency of storage systems under typical data center conditions. It covers block storage and file storage, and will soon provide a uniform way to measure solid state, converged, and object storage. It offers a standardized method to assess the energy efficiency of commercial storage products in both active and idle states of operation. It has evolved a detailed taxonomy to classify storage products in terms of operational profiles and supported features. Test definition and execution rules are carefully laid out to ensure adherence to the standard.
The U.S. Environmental Protection Agency (EPA) Data Center Storage Energy Star Program references the SNIA Emerald Specification. It provides an industry voluntary participation program to receive and post test reports for storage equipment under the umbrella of the Energy Star Data Center Storage (DCS) specification. DCS metrics and measurements are performed according to the SNIA Emerald specification and include some additional EPA test criteria, system component ratings and requirements.
The US EPA program has close to 200 storage product test reports publicly posted for block IO and, most recently, file IO. The repository of reports provides an industry view of vendor products, models, and their energy efficiency metrics, and serves as a quick reference for apples-to-apples comparisons of energy metrics between many vendors' storage products.
Towards a Global Standard
Since configuring and measuring data center storage is capital, time, and resource intensive, SNIA has adhered to its objective to create a single test methodology, in essence a building block, for all national regulatory bodies to reference. As a building block, it enables national bodies to establish any additional criteria for testing, measurement, and configurations that meet geographic preferences and priorities. The national body should have confidence that the SNIA Emerald test methods are proven, fair, and sound, to avoid a national body expending limited resources on a multi-year effort to establish a similar test method.
The SNIA Emerald Specification is recognized by several national bodies in one form or another, at different stages of industry program rollout, including the U.S. EPA Energy Star program, Europe’s EU LOT 9 program, and the Japanese Top Runner program. SNIA recognizes that many national bodies prefer to reference an ISO/IEC specification instead of an industry body specification. With that requirement in mind, SNIA will be submitting SNIA Emerald V3.0.3 to the ISO/IEC JTC 1 PAS process.
SNIA is a well-established global forum for storage specification work which invites the participation of all vendors. As such, specific to data center storage, its members who have been involved in the SNIA Green Storage Initiative and SNIA Emerald Specification development represent more than 90% of all shipped storage units and capacity.
SNIA Green Storage Initiative also undertakes the task of training test engineers and independent test labs on the proper use of the Emerald specification to ensure it is implemented with uniform results. SNIA records and posts its training materials for public use. SNIA also hosts an annual stakeholders meeting with the EPA to further advance the Energy Star Program. SNIA has met several times with the Japan Ministry responsible for the Top Runner Program to align timetables for future reference of the SNIA Emerald Specification. SNIA in collaboration with The Green Grid and Digital Europe has made many recommendations for the EU Lot 9 specification work.
The SNIA Green Storage objectives with the SNIA Emerald specification include the following:
The SNIA Emerald Specification is steadily evolving to keep pace with storage component and system innovation. Storage engineers and architects continue to meet as part of the SNIA Technical Work Group to work on the next version of the specification, analyze test results posted to public agencies, and support the global testing community. SNIA encourages all storage vendors to participate in its stakeholder meetings and training. Additionally, we encourage storage vendors to work in our community to accelerate the rate of new specification development and validate test methods/tools.
For more information:
To review US EPA Energy Star storage system test reports, visit
To contact SNIA Emerald Program and the author, email emerald@SNIA.org
1. Reference: Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
In this issue of Digitalisation World, there’s a major focus on the advent of 5G technology. Where 4G perhaps underwhelmed, there seems to be no shortage of excitement surrounding 5G technology and its potential to transform the enterprise. As one contributor puts it: “5G is a technological paradigm shift, akin to the shift from typewriter to computer.”
Part 1 of this article covers a recent research report covering next-generation connectivity, with the other parts bringing you the views and opinions of a variety of industry experts.
New research commissioned by Osborne Clarke reveals global businesses must overcome significant barriers to embrace next era of connectivity.
New research commissioned by international law practice Osborne Clarke reveals that businesses in Germany and the Netherlands could be leading the global race to embrace next-generation connectivity. The Next Generation Connectivity research of executives and managers from 11 countries, conducted by the Economist Intelligence Unit, also reveals that approaches and attitudes to adopting connectivity vary from country to country. This, Osborne Clarke says, could hinder future opportunities.
Across the global businesses surveyed, those in Germany and the Netherlands were found to be the most advanced in their adoption of connectivity. What’s more, executives from these countries were most likely to think that connectivity – including 5G – is important to their business and they were also the most positive about the business applications greater connectivity enables.
Jon Fell, partner at Osborne Clarke said, “There is a great deal of optimism among businesses around the adoption of next-gen connectivity – and rightly so. With greater speeds and capacity, along with lower latency, companies can transform how they do business and enable new applications – whether that’s driverless car technology, remote surgery, sophisticated real-time drone management or even building smart cities.”
Preparing for the future
In preparation for the introduction of next-generation connectivity, businesses in Germany were most likely to have adopted a formal strategy, with over two fifths of German executives (44%) saying their business has done so compared to just 22% of Chinese respondents. In fact, 36% of German executives say their business has already built a stand-alone division to prepare for the adoption of next-generation connectivity.
US businesses, too, have made significant steps to prepare for the adoption of greater connectivity, with half of US executives (50%) saying they have already made investments in preparation – the highest number across all countries. In comparison, just 18% of French businesses and 24% of Chinese businesses said they had made such investments.
Commenting on these findings, Jeremy Kingsley at the Economist Intelligence Unit said, "In Europe there have been concerns from carriers about the business case for next generation connectivity - particularly 5G - and investment has been slower than in the US. In China, too, we've seen that the Chinese government and telcos have invested a great deal in 5G infrastructure, much more so than the US. However, our research shows that while China is spending more on infrastructure, Chinese businesses are spending less preparing for it.
“5G will enable all kinds of opportunities and innovative use cases that are up to businesses to anticipate or invent. Few saw Uber or Spotify coming as a result of earlier generation connectivity. Demand will come from new use cases that stem from business innovation.”
Addressing the skills gap
More than two in five respondents globally (42%) say talent and skills are a significant barrier their organisation faces with regard to next-generation connectivity. In the UK especially, business executives were the most likely to say they do not have the talent or skills they need to capitalise on next-generation connectivity, despite UK respondents being the most likely to believe connectivity will be more important in the next five years (92%).
Businesses in the Netherlands appear to be tackling this problem head-on, with 44% of respondents saying their organisation has hired new talent in preparation for the adoption of next generation connectivity – significantly higher than the global average of 26%.
Barriers to overcome
In addition to skills and talent, business executives and managers identified further challenges preventing them from adopting next-generation connectivity. The cost of infrastructure investment topped the list, with 44% of businesses citing this as a main barrier to embracing connectivity, followed by a lack of talent and skills (42%) and security concerns (39%).
Data protection and privacy were also a top concern for those in digital businesses, with 41% of executives in this sector identifying them as the main barrier to adopting greater connectivity.
Fell continues, “There is, of course, the risk that greater connectivity will lead to more opportunity for cyber-criminals to gain access to a company’s network and data. We shouldn’t, however, let this fear of the unknown hinder adoption. Instead, businesses need to take the steps to prepare, enabling them to respond to threats much more quickly in this new age of connectivity.
“Reaping the benefits of this golden era of superfast, always on, ubiquitous connectivity will certainly require investment, new partnerships, and redesigned approaches to security and data protection – but it will be worth it.”
In fact, the research finds that 87% of global businesses believe greater connectivity will be strategically important to the running of their business by 2023. Nearly seven in 10 businesses (69%) believe that next-generation connectivity will have the greatest positive impact on the level of customer service and support they deliver. Furthermore, 67% of businesses say that greater connectivity will positively impact supply chain management and 64% say it will improve employee productivity.
Today’s technologies are advancing at a rapid speed. For businesses to remain competitive, it is crucial that they are aware of the changes taking place – and learn to master them quickly if they are to reap the business benefits.
By Ben Lorica, Chief Data Scientist, O’Reilly.
One of the most recent technologies we have seen take off is artificial intelligence. Since 2011, artificial intelligence has become more and more ingrained in our daily lives. While we continue to be surrounded by AI scare-mongering in the media, this hasn’t deterred businesses from leveraging AI to transform their operations, both internally and externally.
It’s easy for businesses yet to achieve success with AI – or even to get started on their journey – to get despondent about the lead they perceive their competitors to have. They often see AI in absolutist terms: you either have cross-organisational, fully automated systems, or none at all.
But AI isn’t a binary – it’s a spectrum: one where successful applications are built on a platform of smaller, successful projects, which themselves were the result of trial and error. Rather than betting the farm on rolling out AI across the enterprise as quickly as possible, it's much more effective to experiment with initiatives that deliver real benefits on a smaller scale.
That doesn’t change the fact that there are several obstacles standing in the way of successful AI projects. None of these is insurmountable; nevertheless, organisations must understand what difficulties they need to overcome in order to develop and deliver projects that solve real business challenges.
The challenges of AI implementation
Earlier this year O’Reilly asked over 3,000 business respondents about their preparedness for AI and deep learning, including their adoption of the necessary tools, techniques and skills.
Of particular note is an AI skills gap revealed in the survey. A paucity of talent is seen as by far the biggest bottleneck for successful AI projects, identified by a fifth of respondents. This is an especially big issue in AI projects, since building such applications from scratch relies on end-to-end data pipelines (comprising data ingestion, preparation, exploratory analysis, and model building, deployment, and analysis).
It’s not just technical talent that enterprises need, though. They also require people with the business acumen to make strategic decisions based on the data and insights that AI provides.
Deep learning remains a relatively new technique, one that hasn’t been part of the typical suite of algorithms employed by industrial data scientists. Who will do this work? AI talent is scarce, and the increase in AI projects means the talent pool will likely get smaller in the near future. Businesses need to address the skills gap urgently if they are serious about developing successful AI initiatives. This will likely involve a mixture of employing outside consultants and developing the necessary skills in-house – for example, by using online learning platforms.
To be fair, most businesses in our survey (75 per cent) said that their company is using some form of in-house or external training program. Almost half (49 per cent) said their company offered “in-house on-the-job training”, while a third (35 per cent) indicated their company used either formal training from a third party or from individual training consultants or contractors.
The other side of the coin – the business rationale for AI – requires management to identify use cases and find a sponsor for each specific project, ensuring there is a clear business case that is served by the technology.
The importance of data
Another key challenge to successful projects is ensuring that the data used is completely accurate and up-to-date. Machine learning and AI technologies can be used to automate – in full or in part – many enterprise workflows and tasks. Since these technologies depend on pulling information from an array of new external sources, as well as from existing data sets held by different internal business units, it’s obviously essential that this data is properly labelled.
The first step in this process is to establish which tasks should be prioritised for automation. Questions to ask include whether the task is data-driven, whether you have enough data to support the task, and if there is a business case for the project you plan to deliver.
Enterprises must remember that while AI and ML technologies can work “off-the-shelf”, getting the most out of them requires tuning to specific domains and use cases, perhaps involving techniques such as computer vision (image search and object detection) or text mining. Tuning these technologies – often essential to delivering accurate insights – demands large, accurately labelled data sets.
Designating a Chief Data Officer is key to solving the challenge of accurate data. A CDO is responsible for thinking about the end-to-end process of obtaining data, data governance, and transforming that data for a useful purpose. Having a skilled CDO can help ensure that AI initiatives deliver their full capability.
Using the best deep learning tools
Returning to our research, nearly three quarters of respondents (73 per cent) said they’ve begun experimenting with deep learning software. TensorFlow is by far the most popular tool among our respondents, with Keras in second place and PyTorch in third. Other frameworks like MXNet, CNTK, and BigDL have growing audiences as well. We expect all of these frameworks, including those that are less popular now, to continue to add users and use cases.
These deep learning libraries are all free and open source, and are used by most AI researchers and developers. It’s important to take the time to familiarise yourself with these tools; doing so will make it easier to join the collaborative AI community where people share papers and code.
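At their core, what all of these frameworks automate and accelerate is the same training loop: compute a gradient, update the parameters, repeat. As a framework-free sketch of that idea (the toy data, learning rate and iteration count here are illustrative assumptions, not from the survey):

```python
# Pure-Python sketch of gradient descent, the training loop that
# frameworks such as TensorFlow and PyTorch automate and accelerate.
# Data, learning rate and iteration count are illustrative assumptions.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, y roughly 2x
w = 0.0     # single parameter of the model y_hat = w * x
lr = 0.05   # learning rate

for _ in range(200):
    # gradient of the mean squared error 0.5*(w*x - y)^2 with respect to w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges to ~2.04, the least-squares slope
```

Real frameworks add automatic differentiation, GPU execution and many-layer models on top, but the loop above is the shared foundation worth understanding before picking a library.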
To reap the benefits of AI, businesses need to ensure they focus their efforts on investing in training, ensuring they have access to quality data and take advantage of all expertise available to them. In doing so, there’s no reason why they can’t make a successful leap into the world of AI.
It is only natural for companies across the world to look at Google, Amazon, Facebook, and other “big guys” who have their server infrastructures customized and fully cost-optimized. There are significant differences in capabilities between most companies and a company like Google, but Enterprise IT staff can still learn a lot from what the big guys do when it comes to optimizing hardware in the software-defined data center (SDDC). First, we will examine what the big guys do to be so successful.
Big Guy Success Factors
The big guys – Google, Netflix, Amazon, Facebook, etc. – use optimized white box servers in their SDDCs. They do this because white boxes are less expensive, infinitely more customizable, and often more effective than standardized servers from big-name vendors. For example, a company like Google has very specific needs in their servers that standardized servers cannot offer, so the ability to customize and only buy servers to fit their exact specifications enables Google, and anyone else using white boxes, to optimize their infrastructure. Trying to customize standard off the shelf servers to fit the needs of a large company takes a great deal of effort, and with servers not doing exactly what they’re intended for, problems will arise eventually. Both of these issues can be costly in the long run. By using white boxes, which are cheaper from the outset, and meet specifications exactly, the big guys have found a way to save money and create infrastructure that is exactly right for what they want.
However, it is nonsensical for almost every company to directly emulate the practices of massive companies like Google, as there is no comparison to make in terms of server infrastructure. Google famously has 8 data center campuses in the United States and 7 more positioned around the world. The largest of these facilities in the United States, located in Pryor Creek, Oklahoma, is estimated to have a physical footprint of 980,000 square feet, and cost Google about $2 billion to build and bring live. These data center facilities worldwide support near incomprehensible amounts of data.
For example, as of March 2017, Google’s data centers process an average of 1.2 trillion searches per year. Google doesn’t disclose exact numbers regarding its data centers, but the total number of servers in the data centers worldwide has been estimated at roughly 2.5 million. All of these facts perfectly illustrate the difference between Google and its peers (and every other company).
To nurture white box compute initiatives across many industries, the big guys work together to create standards and release technical information to be used throughout the world. Facebook – another big guy in the world of white box servers – began the Open Compute Project in 2011. This project is now an organization made up of many large corporations (including Apple, Cisco, Lenovo, Google, Goldman Sachs, and others) in which the open sharing of data center technology is encouraged. This sharing promotes innovation, and pushes the big guys further ahead of the pack in the server infrastructure race. It also means that smaller companies can now leverage the big guys’ expertise in their own data centers.
How “Not-As-Big Guys” can be Successful
Despite the unique capabilities and infrastructures the big guys have deployed, not-as-big guys can leverage learnings from the big guys. Most companies will never have 15 global data centers or be part of an organization promoting unique and innovative server designs, or be able to spend $2 billion on server infrastructure. However, every company can still utilize perhaps the most important aspect of the big guys’ massive data centers: the custom white box servers inside of them.
No matter how customized the Open Compute Project has made the big guys’ server infrastructure, the components inside the servers the big guys use are best-of-class commodity parts, available for purchase by anyone. Secondly, while the configurations of an organization like Google’s servers are often unique, and often include some unique components, virtualization software such as VMware vSphere and vSAN can be instrumental in allowing companies much smaller than Google to fully optimize their servers. The first step for these smaller companies is to invest in white boxes.
White box servers are custom built from commodity parts to meet the specifications of each customer. In the past, the impression of white boxes was that they were of a low build quality, with little care for quality assurance. That may have been the case decades ago, but today, white boxes are built with high quality and in many cases to higher standards than machines from big-brand server companies.
Leveraging the Power of White Box
The power of white box servers is that they are fully customizable. Just as the big guys do, smaller companies can purchase white box servers from a vendor like Equus to meet their exact specifications. Perhaps a company needs lots of storage space, but not much compute power. Perhaps a company wants dual high core count CPUs and numerous expansion slots built into the motherboard to anticipate growth. A legacy server company cannot offer servers optimized in these ways. But a white box vendor can do exactly what a buyer wants, and build them a server that has, for example, 8 SSDs and 8 rotating disk drives, all in a 1U form-factor chassis. This kind of hybrid storage server is actually quite common among white box buyers, and is simply one example of how white boxes can lead to total hardware optimization.
Once an enterprise has made the leap to using white box servers, virtualization is the next method to use in order to emulate the successful practices of the big guys, such as Google. Recent progress in hardware virtualization, largely spearheaded by VMware, has enabled the development of the Software Defined Data Center (SDDC), an entirely virtual data center in which all elements of infrastructure – CPU, security, networking, and storage – are virtualized and can be delivered to users as a service. The software-defined data center means companies no longer need to rely on specialized hardware, and removes the need to hire consultants to install and program hardware in a specialized language. Rather, SDDCs allow IT departments to define applications and all of the resources they require – computing, networking, security, and storage – and group all of the required components together to create a fully logical and efficient application.
One such virtualization software package that can enable the effective use of an SDDC is VMware vSAN (virtual storage area network). vSAN is a hyper-converged, software-defined storage product that combines direct-attached storage devices across a VMware vSphere cluster to create a distributed shared data store. vSAN runs on x86 white box servers, and because it is a native VMware component, it does not require additional software and users can enable it with a few clicks. vSAN clusters range between 2 and 64 nodes and support both hybrid disk and all-flash disk white box configurations, combining the hosts’ storage resources into a single, high-performance, shared data store that all the hosts in a cluster can use. The resulting white box based vSAN SDDC has a much lower first cost and up to 50% savings in total cost of ownership.
Cost Optimizing a Software Defined Data Center
Another strategy smaller companies can use to emulate the big guys is to cut licensing costs by utilizing VMware intelligently on their white box server. For example, if a company uses a standardized server from a legacy manufacturer that comes with two CPUs out of the box and has to run the legacy software that comes with the server, they may end up only using 20-30% of their total CPU capacity. Despite this, that company will still have to pay for the software licensing as if they were using 100% of their 2 CPU capacity, because legacy software used in standardized servers is usually deployed using per CPU (socket) pricing with no restrictions on CPU core count.
If that company instead uses a custom white box with a single high core count CPU, and runs VMware, it can effectively cut its licensing costs in half, as VMware uses a per-socket licensing policy. Cutting licensing costs in half will often free up a large amount of savings that a company can spend elsewhere to further optimize its servers. This use of virtualization software, along with using it to put virtual back-ups in place, is a key way in which smaller companies can approximate the methods used by the big guys.
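The per-socket arithmetic above can be made concrete. The price below is a hypothetical placeholder (real licence terms and prices vary by vendor and edition), so treat this as a sketch of the comparison, not a quote:

```python
# Hypothetical sketch of the per-socket licensing comparison described
# above. The price is a placeholder; real licence terms vary by vendor.

PER_SOCKET_COST = 5000  # assumed annual cost per licensed CPU socket

def licence_cost(sockets, per_socket=PER_SOCKET_COST):
    # Per-socket pricing ignores core count, so one dense CPU costs
    # half as much to license as two smaller ones of equal total cores.
    return sockets * per_socket

dual_socket_legacy = licence_cost(2)      # standardized 2-CPU server
single_socket_whitebox = licence_cost(1)  # 1 high-core-count CPU

print(dual_socket_legacy - single_socket_whitebox)  # 5000 saved per year
```

The design point is that the saving scales with whatever the per-socket price actually is, as long as the single high-core-count CPU delivers the compute the workload needs.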
Be Like the Big Guys in your IT Infrastructure
Google, Amazon, Facebook, and others do many things with their server infrastructure that not-as-big companies can only dream about. However, companies can emulate the big guys in the server world in significant ways. Just like the big guys, smaller companies can use custom white box servers to fully optimize their hardware. Smaller companies can also utilize virtualization software to save large sums of money in the form of virtual storage servers and in cutting licensing costs. The result of companies using these methods will not rival the scale of the huge data centers used by the big guys, but in substance, the result will be the same: your own high efficiency cost-optimized software-defined data center.
Part 2 of this article covers a Q and A with Deborah Sherry, the Senior Vice President and Chief Commercial Officer, GE Digital.
Why the excitement over 5G, when 4G seems to have been somewhat underwhelming?
5G has an exciting role as an enabler of wider technology and of the next era of technological advances. For example, everyone has recently been talking about using the new 5G networks for autonomous driving, and this is an area where 5G is really growing. 5G is still in its early stages; no one yet knows what 5G will mean for entertainment, healthcare and emergency services. One thing is certain though – it’s very exciting to see what it brings.
The advantages of 5G include faster data rates?
Yes – as the volume of data grows, the technology that transports it must develop alongside it. As more companies adopt IoT, more data than ever is being transmitted every day. In GE Digital’s case, the Industrial Internet of Things (IIoT) already generates petabytes of data. I believe that we must have the fastest data rates to be able to cope with the growing demand on the IIoT, which is where the new 5G technology is indispensable. In addition, much has been said of 5G as an enabler of advanced new technology such as the IoT, but it is in traditional cellular services like broadband where it can provide the highest speeds and make a fantastic contribution.
More network virtualisation – allowing for improved infrastructure management and sharing?
Absolutely. The falling cost of adopting cellular networks has made them far more widespread.
We have an example where GE worked with Vodafone Healthcare to connect devices in hospitals to enable remote diagnostics and other eHealth services.
Where it previously took months to install and initiate a remote connection, Vodafone’s technology will be used to connect equipment such as GE’s scanners to GE’s Industrial Internet platform, Predix. 5G will provide a huge boost in speeding up repair times, lessening the time that malfunctioning medical equipment is out of action. GE engineers will be able to evaluate equipment status to detect problems, and either remotely repair devices or arrive with an understanding of the problem and the correct parts.
Greater density – more devices on the network?
Of course – the more 5G develops, the fewer devices companies will have connected by cable. The capabilities for 5G device connectivity are vast. Even prior to 5G, cellular capabilities were impressive. At GE, we are doing a lot of work with a large FMCG company, connecting more than 40,000 devices using cellular connectivity provided over a private wireless network. By leveraging secure private wireless networks, we deliver to customers faster access, lower cost of deployment and faster development.
And for the end user, there’ll be a noticeable jump in bandwidth?
Yes, 5G can provide a noticeable jump in bandwidth; it can allow the end user to download an entire movie in just two seconds.
The bandwidth won’t just affect the end user, though. It has been shown that with the availability of more bandwidth comes the development of applications that would otherwise not have been possible; streaming is an example of this. Greater bandwidth will have an impact on the end user, but its impact will not be limited to the end user.
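As a back-of-the-envelope check on the two-second claim (the file size and link rates below are illustrative assumptions, not figures from the interview), transfer time is simply file size divided by link rate:

```python
def download_time_seconds(file_size_gb: float, link_rate_gbps: float) -> float:
    """Idealised transfer time, ignoring protocol overhead and contention."""
    return file_size_gb * 8 / link_rate_gbps  # gigabytes -> gigabits, then divide by rate

# A 2.5 GB HD movie over a 10 Gbps 5G peak link versus a 100 Mbps 4G link.
print(download_time_seconds(2.5, 10))   # -> 2.0 seconds on 5G
print(download_time_seconds(2.5, 0.1))  # roughly 200 seconds on 4G
```

Real-world rates sit well below these peaks, so the two-second figure describes best-case conditions.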
Absolutely. Lower latency and improved connectivity will help roll out other technologies and services, for example remote monitoring and operations, augmented reality and advanced cloud analytics.
Despite this, there will still be a need and demand for edge computing for critical assets, where decisions must be made and acted upon in milliseconds and no connectivity problems can be risked. At GE Digital we have seen this in our partnership with Schindler, who equip buildings with infrastructure such as escalators and elevators. Schindler use edge computing to keep data flowing and to raise alarms in the event of a power shortage in the building. Edge computing connects Schindler’s products to the internet so the company can get the most from them and predict problems before a product is badly affected.
And a greater range – i.e. better connection in remote locations?
GE technology is currently being used to light up the Harbour of LA, and in the future 5G networks could be used to build similar private high-speed networks for industrial sites and factories in remote locations.
Network coverage can also be extended to the communities living in the surrounding area. 5G will enable technological progress in cities while also scaling to more remote locations.
Presumably, 5G is massively important for the development of IoT solutions?
Global telecom carriers are gradually transitioning from communication service providers to digital service providers (DSPs) to improve enterprise revenue growth. The shift towards DSP assumes significance because the 5G revenue growth will be primarily driven by IoT connectivity services.
5G is massive for the development of IoT solutions. The IoT relies on speed and large data sets, both of which 5G delivers.
And what other technologies will it enable/enhance?
In addition, 5G will be key to driving growth in areas such as smart city infrastructure (traffic control, autonomous cars) and emergency services (remote surgery), which will be developed in conjunction with industrial organisations.
The big question is, when will 5G be enterprise ready?
There are plenty of potential use cases for 5G, but the technology is currently at a very early stage. It will probably take another five to ten years until we start to see viable business models and large-scale implementations of 5G.
Especially as 5G requires new radios and chips in mobile devices and new software to handle the communications?
Mobile devices with 5G chips are expected as early as 2019, but the chips will initially be expensive, and it will take a long time for their cost to fall to a price point that makes putting one in every machine viable. Real-time applications could take advantage of 5G quite quickly, as a lot of today’s software will benefit from 5G’s lower latency, but it will take time for software to be upgraded to take full advantage of its benefits.
Any other snags or pitfalls out there in the 5G roadmap?
Some of the key challenges are related to cost and roaming, connectivity and device management across multiple markets.
Cost and roaming are two big challenges we face: 5G cells have a shorter range than 4G, meaning more base stations are needed to cover the same area.
There is also the aforementioned cost of mobile chips.
Could you share one or two examples as to the potential of 5G in one or two specific industry sectors?
A lot of our power customers are using APM and connecting more devices and sensors to their APM management platforms. As the cost of computing falls, businesses are starting to see the benefits of adopting cellular connectivity to manage connected devices. Using cellular gives organisations the ability to understand the usage models for connected devices and the total cost of ownership. It also allows organisations to tailor services to their customers’ needs by better understanding how those customers use their products. As more devices are connected on the factory floor, there will be a stronger need for cellular solutions that power connectivity across assets and the workforce.
Our day to day lives are shaped by modern technology. No longer do we need to go and get lunch, Deliveroo will deliver it to our offices; we don’t need to hail a cab, an app can order one to our door; Amazon will deliver anything from a book to a fridge freezer, direct to our homes.
By Mark Flexman, DXC Fruition Practice Lead, DXC Technology.
Within reason, almost anything can be ordered through a seamless app without human interaction. However, once inside their workplace, employees are exposed to a very different experience. Rather than internal services delivered through platforms like apps or portals, employees need to use e-mails or pick up the phone, or locate and talk to other members of staff in order to get things done. Requesting holiday, booking a meeting room, or just ordering office supplies can end up being an arduous task.
With research finding the average organisation is only 40% of the way to providing fully mature internal services (known as ‘everything as a service’, or EaaS), there is a major opportunity for businesses to improve service delivery while ensuring higher productivity, lower costs, and greater employee satisfaction. According to research by ServiceNow, managers rate consumer service platforms 103% higher than workplace services. This dissatisfaction is driven by the use of outdated technology in the workplace; 48% of workplace services are ordered via email, compared with 10% of consumer services, according to the same study. Similarly, only 22% of workplace services can be ordered and tracked via mobile devices, compared with 65% of consumer services. There is a huge gap between employee expectation and the services offered by employers.
‘Consumerising’ enterprise services
At best, only in 21% of organisations can all services from departments such as HR, IT, Finance, Facilities, and Legal be consumed in a self-service manner – a key element of ‘consumerising’ the employee service experience. In addition, only 14% of organisations have fully automated service provision and only 23% have a consistent way for users to interact with internal services providers. This is in stark contrast with consumer offerings where services are constantly being expanded to include self-service management. The knock-on effect of this lack of service maturity means failing to offer unified services through the cloud is costing organisations huge amounts of money as they rely on using different tools to offer similar services across the business.
Delivering EaaS is far more than a question of keeping employees happy with the latest technology. There is also the potential to drive significant return on investment (ROI) through delivering services in a joined-up, automated, online way. These include improved efficiency of operations, as well as better productivity from staff due to time saved when making and tracking service requests. Most importantly, businesses will find that their service availability is much improved by limiting downtime because of having a single consolidated service automation platform.
Four steps to making EaaS a reality
For those CIOs keen to take advantage of these benefits, the following four steps give a good outline of how to successfully implement EaaS:
The consumerisation of all workplace services won’t happen overnight, but there are many elements that can be put in place relatively quickly to make a huge difference. Businesses should be acting now to see which elements they might already have in place and can be expanded, as well as setting out a roadmap for further change. Doing so will help them to benefit from happier employees as well as savings in both time and money and much better service availability.
In this issue of Digitalisation World, there’s a major focus on the advent of 5G technology. Where 4G perhaps underwhelmed, there seems to be no shortage of excitement surrounding 5G technology and its potential to transform the enterprise. As one contributor puts it: “5G is a technological paradigm shift, akin to the shift from typewriter to computer.”
Part 3 of this series brings thoughts from Lindsay Notwell, Senior Vice President of 5G Strategy & Global Carrier Operations at Cradlepoint.
With the first commercial iterations of 5G now gradually starting to appear, we’re on the verge of the next revolution and all the benefits it will bring. The launch of 5G is expected to have a significant impact, offering speeds up to 10 times faster than 4G and network latency in the single digit milliseconds. It’s expected to unlock the true potential of a swathe of fledgling technologies including virtual reality, remote-controlled robotics, telemedicine and autonomous cars.
Taking stock – 4G’s not dead
It’s worth noting that the arrival of 5G doesn’t mean investment in 4G will come to an immediate halt. The older standard will continue to evolve and improve for the foreseeable future, making its integration with 5G even easier and more consistent from both a carrier and customer perspective. 5G is also designed to work hand in hand with 4G, rather than usurp it. This means the majority of carriers will be able to use a combination of both 4G and 5G throughout their networks to give customers the best possible service at all times.
A full industry transition will take some time to complete. Mobility will also be limited by the absence of mobile-site handoffs (the process of changing the channel associated with the current connection while a call is in progress), meaning the initial framework will only provide fixed wireless access. As a result, consumers should expect to see a gradual rollout over time, starting with fixed locations before mobility is added later.
5G will require a new framework
5G is an exciting technology that’s widely expected to deliver on its promise of unrivalled wireless speed and consistency. However, in order to handle its massive throughput capabilities and scalability, it will be critical for providers to utilise Software-defined Networks (SDN) rather than the more traditional mainframe approach.
SDN enables new functionality on software rather than a hardware-constrained timeline, making networks much more agile and efficient. With the world predicted to contain more than 50 billion connected devices by 2020 – all collecting and transmitting data using 4G and 5G – SDN is the only viable way to keep networks up and running. Many forward-thinking organisations have already been extensively testing and deploying SDN as a means to lower costs and increase bandwidth across their corporate networks.
When it does arrive, 5G will accelerate the development and adoption of many fledgling technologies, but perhaps the most significant of all is the Internet of Things (IoT). With mobile wireless connectivity at the heart of the IoT, 5G represents the ideal way to bring people and data together, regardless of where they are or what they are doing. 5G will help the IoT to reach its fullest potential by transporting intelligence, processing power and communication capabilities across networks, mobile devices and connected sensors more efficiently than ever before.
Comments from Sylvain Fabre, Senior Director at Gartner:
Definition: 5G is the next-generation cellular standard after 4G (Long Term Evolution [LTE], LTE Advanced [LTE-A] and LTE-A Pro). It has been defined across several global standards bodies — International Telecommunication Union (ITU), 3GPP and ETSI. The official ITU specification, International Mobile Telecommunications-2020 (IMT-2020) targets maximum downlink and uplink throughputs of 20 Gbps and 10 Gbps respectively, latency below 5 milliseconds and massive scalability. New system architecture includes core network slicing as well as edge computing.
Position and Adoption Speed Justification: Gartner expects that by 2020, 4% of network-based mobile communications service providers (CSPs) will launch the 5G network commercially.
The 3rd Generation Partnership Project's (3GPP's) Release 15 enables commercial network infrastructure based on the earlier New Radio (NR) specification launched by the end of 2018. NR allows CSPs to launch 5G with only new radio access network (RAN) deployments, leaving the existing core intact. 5G core and edge topology also need to be added to realize the full benefits of 5G; this may occur later, toward 2022 to 2025.
Examples of early CSPs' 5G plans include:
From 2018 through 2022, organizations will mainly utilize 5G to support IoT communications, high-definition video and fixed wireless access. (See "Emerging Technology Analysis: 5G.")
The use of higher frequencies and massive capacity will require very dense deployments with higher frequency reuse. As a result, Gartner expects the majority of 5G deployments to initially focus on islands of deployment, without continuous national coverage, typically reaching less than full parity with existing 4G geographical coverage by 2022 in developed nations.
In addition, slower adoption of 5G by CSPs (compared to 4G) means less than 45% of CSPs globally will have launched a commercial 5G network by 2025. Uncertainty about the nature of the use cases and business models that may drive 5G remains a concern for many CSPs.
5G: we will notice the difference, says Kevin Deierling, VP Marketing, Mellanox:
What everyone knows about 5G is that it will be faster. So what? Things are always getting faster – you just don’t wait so long for the screen to come up. But there is a tipping point: the point where connection becomes so fast that you reach for your mobile device for everything, without hesitation. Services respond so fast that you feel like you are right in the Cloud. 5G promises to take us to the tipping point, and profoundly change the way we behave.
To achieve this and more, 5G transforms the way mobile cells are established and upgraded. Every advance, from 2G on, meant replacing cell tower radios. These were expensive devices with internal compute resources to process and translate signals. 5G radios simply receive the radio signals and digitally stream raw data for processing in a data center. This allows smaller, cheaper, low power radios, and many more small 5G cells can be installed.
Major processing and system upgrades can now all happen centrally, so new 5G services can be added or enhanced just as fast as software can be loaded in the data center. This is the sort of agility and market responsiveness that is demanded of today’s business: so 5G really opens up the market for innovative services and business initiatives.
5G’s massive increase in bandwidth serves a massively greater number of devices. In the workplace this could include integrated security, access, fire alarm, environmental control, electricity saving and other smart services. It can also connect individuals’ ‘wearable technology’ like smart watches, fitness and health monitors, as well as a limitless number of industrial, agricultural, traffic and environmental monitoring and control devices. How much this will impact the workplace depends mainly on management’s imagination: how fast they grasp the potential for smarter building, campus and smart city operations.
Each such device makes different demands. A smart meter must be 100% reliable, while merely transmitting a few bits of non-time-critical data. A financial trading system may depend on sub-microsecond latencies, while a self-driving car must not only be 100% reliable, it must react at least as fast as the most skilled human driver. Thanks to 5G’s central processing, its huge bandwidth can be ‘sliced’ to match the needs of services as diverse as consumer entertainment, telemedicine, industrial control systems, road traffic management, checkout systems and many more.
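A minimal sketch of that matching idea (the slice names and figures below are illustrative assumptions, not values from any 5G specification): each service states the latency and data rate it needs, and is assigned the least demanding slice that satisfies both.

```python
# Hypothetical slice catalogue: each slice guarantees a latency ceiling
# and a minimum sustained data rate.
SLICES = {
    "massive-iot":    {"max_latency_ms": 1000, "min_rate_mbps": 0.1},
    "broadband":      {"max_latency_ms": 50,   "min_rate_mbps": 100},
    "ultra-reliable": {"max_latency_ms": 1,    "min_rate_mbps": 10},
}

def pick_slice(needed_latency_ms: float, needed_rate_mbps: float) -> str:
    """Choose the least demanding slice that still meets the service's needs."""
    candidates = [
        name for name, s in SLICES.items()
        if s["max_latency_ms"] <= needed_latency_ms
        and s["min_rate_mbps"] >= needed_rate_mbps
    ]
    if not candidates:
        raise ValueError("no slice meets these requirements")
    # Prefer the slice with the loosest latency bound that still qualifies.
    return max(candidates, key=lambda n: SLICES[n]["max_latency_ms"])

print(pick_slice(1000, 0.05))  # a smart meter: delay-tolerant, tiny data -> massive-iot
print(pick_slice(1, 5))        # a self-driving car: millisecond response -> ultra-reliable
```

The design choice here mirrors the article’s point: the smart meter and the car share the same physical network but land on slices with very different guarantees.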
5G will support richer multimedia experiences – integrating location based services with social networking, virtual and enhanced reality. Sporting events will be viewed in real time, offering a choice of perspectives including virtual reality participation. This includes 4K video and super slo-mo – with great potential for training and research.
5G, plus human imagination, will transform the workplace.
Failing fast, learning quickly and ending lengthy hand-over procedures between in-house development and testing teams were key to ghTrack® becoming Europe's largest data-sharing platform for the supply chain, says Finn Normann Pedersen, Senior Product Manager, GateHouse Logistics.
According to a recent survey from research firm Statista, only 17% of software companies have fully embraced the so-called "DevOps" culture in their workplaces. By early 2018, a further 22% of software firms had started their DevOps journey, and just 3% had never even heard of it.
Industry analysts are united. Research from Gartner, for example, calls for those responsible for application development strategies in digital business initiatives to ensure that quality is a team focus by making use of communities of practice or guilds to discover, share and evolve best practices. Gartner believes that shifting the focus of QA from quality assurance to quality assistance is the way ahead.
The DevOps culture is a perfect example of how ideologies tend to shift very quickly in IT and keeping a software business agile is one of the most important ingredients of a successful company. If a company can adopt, test and release new applications in the shortest timeframe, it gains business advantage.
Let's dig deeper and look at the underlying reason why DevOps has become much more than just an enterprise buzzword.
A development team finishes building a new application and sends the code to QA. After a period of testing, QA sends the code back to the Dev team with all identified bugs. This is where the hand-over problem and culture begins. Developers get angry claiming that QA didn't "test" properly or look at "the bigger picture," while test people insist that "it's not our process, it's your code."
After, let's say, another week of going back and forth, the bugs are fixed and the code is shipped to the operations team for deployment. Dev/QA says everything worked great in "their" environment, but when the application is deployed in production, it creates contention with neighbouring workloads in a shared virtualised environment. The production environment is destabilised.
This lack of cohesion or trust among various business teams sparked the roots of the joined-up enterprise DevOps culture and heralded a new era of software development.
The DevOps movement was born around 2007 in Europe with the fundamental idea that there needed to be a direct connection between developers and operations and for them to meld into a joined-up connected process flow.
A DevOps program typically involves three groups: developers, operations and test people. Organised this way, the product team has all the competences needed to deliver business value, which is why it fits so well with the DevOps concept. Agile and lean processes typically focus on efficiency, automating work and building quality into work processes in a cyclic manner.
While DevOps is a friend of Agile, the main reason for adapting Agile for DevOps is to use it for continuous software delivery. At a stroke, developers can fail fast, learn quickly, and deliver more business value in shorter timeframes.
Continuous integration is a development practice in which developers merge code into a shared repository several times per day; continuous delivery builds on this by keeping the software releasable at all times. Using an automated build process in combination with automated testing helps to verify each check-in, which produces more stable software and allows developers to identify problems earlier.
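As an illustrative sketch (the function under test and its checks are hypothetical, not from the article), the automated gate that verifies each check-in can be as simple as a suite of assertions run on every commit:

```python
def parse_version(tag: str) -> tuple:
    """Hypothetical unit under test: turn a tag like 'v1.2.3' into a comparable tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def run_checkin_tests() -> bool:
    """The kind of automated check a CI server runs before a merge is accepted."""
    checks = [
        parse_version("1.2.3") == (1, 2, 3),      # plain tags parse
        parse_version("v10.0.1") == (10, 0, 1),   # 'v' prefix is stripped
        parse_version("2.0") < parse_version("2.0.1"),  # ordering is sane
    ]
    return all(checks)

if run_checkin_tests():
    print("check-in verified: safe to merge")
```

In a real pipeline these assertions would live in a test framework and run automatically on each push; the point is that the gate is code, not a manual hand-over.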
Due to the agile nature of a DevOps environment, DevOps teams can introduce new functionality in smaller, more modular deployments. Because these deployments are more targeted and isolated, bugs are easier to spot and in turn, fixes are often faster and easier to implement.
The team only has to check the latest code changes for errors to be able to fix the issue. This brings considerable benefits to a software business. Being able to implement bug fixes faster keeps customers happy and frees up valuable resources to concentrate on other tasks such as designing, developing and deploying new functionality.
The combined use of version control systems, continuous integration, automated deployment tools and test-driven development (TDD) allows DevOps teams to implement smaller incremental change sets.
Because of these more modular implementations, DevOps teams can expose problems in configuration, application code and infrastructure earlier, as responsibility isn't passed to another team once coding is complete.
With small change sets, problems tend to be less complex and resolution times for issues are faster simply because the responsibility for troubleshooting and fixes remains contained within a single team.
Software companies need to invest heavily in the automation of testing and of software deployment to customers. By implementing a DevOps approach, they can significantly reduce the costs and resource demands associated with traditional IT implementations.
Traditionally, IT was seen as a cost center but the implementation of DevOps has shown that this approach provides real business value. When you use continuous delivery and lean management practices, you get higher quality results and shorter cycle times, further reducing costs.
There is an enormous amount of waste in traditional IT environments. Time is often spent waiting for others to complete tasks or in solving the same problems time and again, and this causes frustration and costs money.
Standardized production environments and automation tools help make deployments predictable. These processes free up people from routine tasks, allowing them to concentrate on the most creative aspects of their role and adding value to the business.
The type or complexity of systems these practices get applied to is not a significant factor. As long as software is architected with testability and deployability in mind, high performance is more than achievable.
Puppet Labs research showed that DevOps IT teams significantly outperform non-DevOps teams.
By 2020, Gartner believes that DevOps initiatives will cause 50% of enterprises to implement continuous testing using frameworks and open-source tools. DevOps is no longer an enterprise buzzword. If DevOps laggards fully embrace the concept, they too will soon gain business advantage.
Wearables are experiencing one of the longest gestational periods of any technology. For a long time, gadgets like Google Glass and the Apple Watch were perceived by many as being a bit “emperor’s new clothes”: intriguing conceptually, but disappointing functionally.
By Rufus Grig, CTO, Maintel.
No-one denies the impressive engineering that goes into these slick, shiny products – it’s just that they have always lacked a convincing use case. Despite spending billions on advertising, for example, Apple’s watch has never evolved into anything other than a fancy fitness tracker. The technology was smart, but didn’t offer benefits striking enough to make it a “must-buy” for consumers. One of the biggest problems that manufacturers have experienced is matching form and function; for example, the idea of trying to cram a smartphone onto a watch with a 1.5” screen probably needs a re-think.
That’s why we think the office is where we’ll see the next wave of wearable technology, since the use cases and their benefits are becoming much clearer. While no-one imagines (or desires) that we’ll be walking around the workplace wearing VR goggles all day in the near future, early experimentation with wearables from Google, Apple and others has helped to show us where they could bring significant value to our working lives.
The key seems to be not to try to cram too much functionality into each individual device, but to use them for specific business applications. While, unlike Amazon, most businesses have little desire to harness wearable technology to track individual employees, there is an increasing number of task-specific use cases where the value of wearables is beginning to be realised.
Take augmented and virtual reality, which simulate realistic scenes, images, sounds and experiences in real time via wearable headsets or smart devices. Architects and interior designers, for example, can now offer clients multi-dimensional, high-detail visualisations of sites, facilitating faster and more targeted feedback, improved and streamlined collaboration and more satisfactory output. This is equally helpful for sales reps visiting customers onsite or in their homes, helping them to visualise the finished product and giving them the confidence to proceed with their purchase.
In another example, augmented reality (AR) and virtual reality (VR) continue to grow exponentially within the healthcare sector, with the first virtual reality operation to remove cancerous tissue taking place in 2016, VR headsets improving quality of life for the terminally ill and surgeons using AR to accurately pinpoint blood vessels during surgery. The use of bodycams on police and security personnel, meanwhile, has become so commonplace that we often fail to register that this is in fact another example of wearable technology in action.
But wearables also have applications for day-to-day office-based work. Take the issue of disasters such as a major accident or attack. Today, large organisations still find it very difficult to account for employees’ whereabouts during the chaotic hours following a major disruptive event. Wearables enable organisations to know workers’ exact location and status in real-time, bringing clear benefits in the event of disaster.
Meanwhile, anyone who regularly sits through conference calls will welcome the advent of special meeting headsets. Incorporating directional audio that makes it sound like you’re in a physical meeting, these devices will provide a much more lifelike experience, helping users to distinguish between speakers and cutting down on cross-talk and interruptions. Wearables for meetings can also be augmented with functions such as facial recognition and calendar integration.
Wearables in the workplace aren’t just about meetings, though. There is a fast-growing awareness of the importance of employee health and wellbeing. Organisations are learning that providing a bowl of fruit isn’t enough to keep workers fit, active and healthy, which is why we will see more businesses embrace personal fitness and happiness applications linked to employees’ wearable devices.
Researchers have found that using wearables to monitor and improve employee health can benefit productivity by almost 10 per cent, which provides a very clear business case for investing in (or at least investigating) wearables for the workforce. They can also be used to monitor remote-working employees, for example sales reps or field service workers, to track their progress on jobs.
Of course, there are questions around employee privacy, not least over data relating to location and health. In January it was revealed that US soldiers had unwittingly exposed military information through the use of unsecured wearable devices. But security risks can be solved relatively easily with the right corporate policies, developed with the active involvement of employees themselves. Of more concern are the apps themselves. Businesses may wish to choose from apps that are already available on the market, or they may prefer to develop bespoke applications that fit in with their strategic aims.
The challenge here will be how to avoid overburdening employees with a plethora of communication, fitness and productivity apps, and instead combine as many functions into as few apps as possible. This is a task that will be beyond most businesses, pointing to a potentially highly-lucrative new market segment for those with the skills to exploit the opportunity.
It’s difficult to see how wearables won’t change our lives in some regard, once we have mastered the marriage of form and function. Rather than being a fad for the trendy early adopter, wearables could be the perfect fit for the workplace.
5G is going to unlock a number of opportunities that are just not feasible today – a couple of the areas that are likely to see major transformation are smart cities and smart transport. The potential of an ultra-fast, ultra-reliable network, capable of handling massive machine-type communications (mMTC), could radically alter the way in which the IoT is used. Smart offices and self-driving cars can revolutionise the way we live, but will be incredibly problematic if they frequently lose connectivity, or if connection speeds are too slow to respond. If 5G solves these issues, we could see rapid adoption of these technologies.
Its impact on existing network traffic will be enormous. 5G will mark a turning point as network architectures move away from the physical and into virtualised environments. 5G will require a highly flexible, scalable and modular core network architecture – meaning networks will require a much higher degree of programmability and automation than exists today. Network slicing will also be greatly improved, allowing networks to be logically separated. Each slice will be capable of providing customised connectivity, with all slices running on the same, shared infrastructure. The practical upshot of these changes is that the network will be able to cope with a far greater range of business offerings that otherwise wouldn’t have been economically or technically viable.
This is why SDN and NFV are going to be absolutely central to 5G. Today, networks are flat and everything goes back to the core – but if MNOs are to achieve the 1 millisecond of latency that 5G promises, that just isn’t a feasible path forward. Instead, 5G will need to cache content locally. This will require an SDN controller to manage the traffic and orchestrate how devices communicate; hundreds of nodes may send data to an SDN controller which will allow traffic to be shifted dynamically based on need. However, this is only possible in a virtual environment. Therefore NFV, by helping to virtualise all the various appliances in the network, is also going to be critical to developing networks that can cope with 5G.
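To make that idea concrete, here is a minimal sketch (hypothetical data and logic, not a real SDN controller API) of the kind of decision such a controller makes: steering each new traffic flow onto whichever path currently has the most spare capacity.

```python
from dataclasses import dataclass

@dataclass
class Path:
    """One candidate route through the network, as seen by the controller."""
    name: str
    capacity_gbps: float
    load_gbps: float = 0.0

    @property
    def headroom(self) -> float:
        return self.capacity_gbps - self.load_gbps

def place_flow(paths: list, demand_gbps: float) -> str:
    """Send the new flow down the path with the most spare capacity."""
    best = max(paths, key=lambda p: p.headroom)
    if best.headroom < demand_gbps:
        raise RuntimeError("no path can carry this flow")
    best.load_gbps += demand_gbps
    return best.name

# Illustrative topology: a busy core link and a lightly loaded local cache path.
paths = [Path("core", 100.0, 80.0), Path("edge-cache", 40.0, 5.0)]
print(place_flow(paths, 10.0))  # the flow is steered to the path with most headroom
```

A production controller would weigh latency, policy and failure domains as well as load, but the dynamic, software-driven placement shown here is the essence of why SDN matters for 5G.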
5G promises to provide rich new opportunities for enterprise users and beyond, says Jean-Francois Rubon, Strategy Director, Gemalto:
The fifth generation of mobile connectivity will give organisations unprecedented opportunity, flexibility and choice in the networking tools they can use to capture and store data and then extract insights from it to drive their digital operations.
5G is a platform for digital transformation and disruption that promises to automate many aspects of our lives. It will enable the cellular connectivity of 1 billion connected devices by 2023 around the world used by both businesses and consumers. This includes the mobile internet people use and the mobile Internet of Things required by connected cars, meters, health care monitors, smartwatches, sensors, manufacturing machines and more. The scope of work could also significantly change, allowing businesses to work remotely and connect to people, devices and locations they haven’t been able to previously.
The greatest opportunity available to enterprises from 5G is the insights delivered from combining analytics, machine learning and AI, with the capabilities of the 5G network. Most businesses will want to leverage 5G to capture, move and store the data from connected devices. 5G will enable enterprises that apply analytics to the data in optimal locations to drive automation in day to day operations.
However, some of the requirements that new 5G use cases impose on the storage, compute and network domains introduce sizable new risks to the confidentiality, integrity and availability of enterprise data. Almost ten years on from the launch of 4G, 5G will be the first cellular generation to launch in the era of global cybercrime – therefore mobile operators and enterprises need to take necessary precautions to ensure that only the right people and devices can access network resources. In a scenario where the bad guys manage to break in, organisations need to ensure that all the data is encrypted, therefore making it unavailable to hackers.
Observations from Andrew Palmer, Consulting Director, Telecommunications, CGI UK:
Social, economic and commercial benefits of 5G will be brought to life through increased network energy efficiency, which will be 100 times better than 4G. Peak data rates will also be enhanced up to 20Gbit/s, while spectrum efficiency will be approximately three times better.
The general consensus regarding where satellite technology can support 5G is that it will add agility to the satellite terminals. These terminals will then form part of a holistic 5G network, supporting broadband services, cellular mobility and local area networks. The main opportunities are seen as:
In the context of smart cities, the use of 5G over satellite will be a reliable mechanism to support mission critical applications, IoT/IoE use cases and connected vehicles. These all rely on guaranteed, low-latency communications, and satellite would be ideal for providing a consistent connection to a control core or application, as it is less likely to require multiple handovers between cell sites during a session.
One particular area of potential disruption, which does not require regulatory changes, is the development of neutral hosting companies. In the City of London, O2 and Vodafone have deployed a wholesale Wi-Fi network based on access points on each lamppost, with capacity sold to anybody wishing to set up their own private Wi-Fi service in the defined area. In a similar way, a local council, utility and telecommunications provider could establish a consortium to provide the access network and access points across a city or region, which could then be sold as a wholesale service.
4G will continue to be a viable technology for as long as consumers perceive the level of bandwidth available to be acceptable, but as traffic grows, so does the expectation of greater bandwidth, reduced latency and enhanced reliability. Once this growing demand can no longer be satisfied by deploying more spectrum, network densification or re-use of 2G and 3G spectrum, then the need to move to 5G will become more pressing in order to reduce mobile broadband congestion.
Nick Offin, Head of Sales, Marketing and Operations, Toshiba Northern Europe, comments:
What are the benefits of 5G to the workplace?
The arrival of 5G will play a pivotal role in business mobility, enabling mobile office workers to reach a new level of innovation. With faster speeds than previous generation networks, research from Ericsson found 5G has already started to impact industry sectors – including manufacturing, healthcare, energy, and utilities – with 78 per cent of respondents agreeing the technology will improve the development of customer offerings.
The transition to 5G is set to affect businesses in many ways by enhancing their mobile and remote working capabilities, enabling staff members to work more quickly, more efficiently and productively while working from home, a client site, or in any location. 5G will also enable organisations to benefit from IoT solutions, helping to improve data management and transfer speeds of information that’s crucial to enterprises across all industries.
Toshiba's research shows that 5G was classified by IT leaders as a factor most likely to drive uptake of smart glasses within the industrial and professional sectors in the next few years. Enabling more scope for IoT solutions to come to the fore for remote workers, this technology could revolutionise the way we work in the 5G era. For instance, in a warehouse setting, field workers can benefit from using smart glasses to scan barcodes, get product location assistance and use voice confirmations to run through checklists, reducing errors and creating a more efficient overall process.
How will 5G impact the IT/data centre environment?
5G will make it easier and faster to identify security issues at device level, helping to keep threats away from the data centre. According to Toshiba, 48 per cent of IT decision makers prioritise data security as a key investment this year. With 5G becoming more prevalent, managing and securing data generated by M2M and IoT technologies could be perceived as a big security challenge of 5G. Mobile edge computing solutions process information at the edge of the network, helping organisations with a high proportion of mobile workers to reduce data garbage and any wider strain on cloud services.
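The edge-processing idea — filter and summarise data locally so only what matters travels to the cloud — can be sketched simply. The thresholds and field names below are hypothetical, chosen just to show the pattern:

```python
# Minimal sketch of edge-side filtering: anomalous sensor readings are
# forwarded upstream, while routine ones are summarised locally,
# reducing the volume of data sent on to cloud services.

def filter_at_edge(readings, low=10.0, high=80.0):
    forward, kept_local = [], 0
    for r in readings:
        if r["value"] < low or r["value"] > high:
            forward.append(r)   # anomaly: send to the cloud
        else:
            kept_local += 1     # normal: aggregate at the edge
    summary = {"normal_count": kept_local}
    return forward, summary

readings = [{"id": 1, "value": 22.5},
            {"id": 2, "value": 95.1},
            {"id": 3, "value": 41.0}]
anomalies, summary = filter_at_edge(readings)
print(len(anomalies), summary["normal_count"])  # 1 2
```

Only one of the three readings crosses the threshold and is forwarded; the rest are condensed into a local summary, which is the essence of reducing strain on cloud services.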
The function of incident response is simple: to prepare for and respond to cybersecurity incidents. Many organisations are striving to make their incident response lifecycle faster and proactive, but there are some matters that complicate this particular area of security. Incident response is a critical process but vital information is often scattered across multiple sources, and speed is important – a delay in detection or response means greater loss.
By Karen Kiffney, Director of Product Marketing at Recorded Future .
What does an incident response process look like?
A typical incident response process includes five steps: incident detection, discovery, triage and remediation, followed by a handoff of findings to the relevant business unit for final action.
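The five-step lifecycle can be pictured as a simple state machine. This is a sketch of the process as described, with stage names chosen by us for illustration:

```python
from enum import Enum

class Stage(Enum):
    DETECTION = 1
    DISCOVERY = 2
    TRIAGE = 3
    REMEDIATION = 4
    HANDOFF = 5  # findings passed to the relevant business unit

def next_stage(stage: Stage) -> Stage:
    """Advance an incident to the next stage; HANDOFF is terminal."""
    if stage is Stage.HANDOFF:
        return stage
    return Stage(stage.value + 1)

# Walk one incident through the full lifecycle
s = Stage.DETECTION
path = [s.name]
while s is not Stage.HANDOFF:
    s = next_stage(s)
    path.append(s.name)
print(" -> ".join(path))
# DETECTION -> DISCOVERY -> TRIAGE -> REMEDIATION -> HANDOFF
```

The goal, as the article notes, is to move through these stages quickly and reliably; modelling them explicitly is also how many ticketing and SOAR tools track an incident's progress.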
The goal is to move through this process quickly and reliably. In doing so, however, today’s incident response teams face four major challenges: (1) the cybersecurity skills gap; (2) an increasing volume and sophistication of cyberattacks; (3) a slower response time due to overwhelmed analysts; and (4) the fact that the processes used often rely on disjointed technologies. Nevertheless, these challenges do not have to be insurmountable, and threat intelligence can help address each of them.
The cybersecurity skills gap
The highly publicised cybersecurity skills shortage is hardly a new issue. For incident response teams, it manifests as a difficulty in hiring or replacing skilled personnel, a constant need for in-house training, and an inability to keep up with the rate of new incidents and alerts.
But threat intelligence can help with this. An analyst’s ability to respond to threats is proportional to their knowledge and experience. Threat intelligence arms analysts at all levels with the context they need to confidently respond to incoming threats, helping junior analysts to upskill as well as enabling more experienced team members to stay up to date on the latest trends.
Increasing volume and sophistication of cyberattacks
Every year the number and complexity of incoming cyberattacks grows, placing an additional burden on incident response teams. The Central Bank of Bangladesh heist is a case in point: the attack appeared to begin simply, but the criminals then used sophisticated malware to cover their tracks, rewriting database entries and more, making the attack far more complex to unpick.
As volume and sophistication becomes greater, incident response teams are forced not only to resolve more incidents, but also to develop processes for new incident types. Threat intelligence provides incident response teams with the context and knowledge needed to respond effectively to known and unknown incidents.
Overwhelmed analysts and slower incident response
Incident response analysts are constantly overworked, not just because they are short in number and increasingly in demand, but also because of the sheer manual effort required to research and respond to each incident. The result is a progressively bigger time lapse between incident detection and response.
By providing analysts with the insights they need, precisely when they need them, threat intelligence can dramatically speed up the response process. At the same time, threat intelligence helps analysts at all levels perform beyond their experience level.
Relying on disjointed technologies
To respond quickly and effectively to arising threats, incident response analysts often spend a great deal of time aggregating data and context from a variety of security technologies (e.g., SIEM, EDR, firewall logs, etc.) and threat feeds. This dramatically slows the response process and increases the likelihood of mistakes or of missing something important. Take, for example, the recent Dixons Carphone breach: the Financial Times reported that the hack began in July 2017 but was only discovered and reported in June 2018.
Genuine threat intelligence integrates seamlessly with existing technologies, providing analysts with the insights and context they need to make fast, accurate decisions. Instead of constantly switching between technologies and web browsers, threat intelligence enables analysts to focus their attention on just the intelligence they need to make decisions and act on them.
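The integration idea — one merged view of an indicator instead of hopping between consoles — can be sketched as a simple enrichment step. The sources here are hypothetical in-memory stands-ins for a SIEM, a threat feed and an EDR tool; none of this represents a real vendor API:

```python
# Sketch of alert enrichment: pull context for an indicator from several
# (simulated) sources so the analyst sees one merged view, rather than
# querying each console separately.

SIEM_HITS = {"203.0.113.7": {"first_seen": "2018-06-01", "hits": 14}}
THREAT_FEED = {"203.0.113.7": {"reputation": "malicious", "campaign": "demo"}}
EDR_EVENTS = {"203.0.113.7": {"hosts_contacted": ["ws-042"]}}

def enrich(indicator: str) -> dict:
    """Merge whatever each source knows about the indicator."""
    context = {"indicator": indicator}
    for source_name, source in [("siem", SIEM_HITS),
                                ("feed", THREAT_FEED),
                                ("edr", EDR_EVENTS)]:
        if indicator in source:
            context[source_name] = source[indicator]
    return context

ctx = enrich("203.0.113.7")
print(ctx["feed"]["reputation"])  # malicious
```

In practice the lookups would be API calls into the existing stack, but the payoff is the same: the analyst receives one context object and can act on it immediately.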
Key threat intelligence attributes to power incident response
Threat intelligence has the potential to add huge value, and time savings, to the incident response process, particularly during the first three stages: incident detection, discovery and triage. By providing analysts with timely insights, threat intelligence helps them make decisions faster and more reliably. But for that to happen, threat intelligence must be:
A threat intelligence capability without the above characteristics is not intelligence at all, it is simply more data to add to the overwhelming quantity already available. Genuine threat intelligence, defined by the characteristics mentioned above, provides incident response teams with only the insights they need to make better decisions.
Digital transformation is increasingly becoming a top priority for CEOs around the globe, and rightly so. More and more CEOs are realising that if they don’t embrace digital transformation and incorporate technology into the core strategy of the companies they lead, they risk being left behind. Enterprises sitting on the fence need only consider the closure of UK high street heavyweights BHS and Maplin, as well as U.S.-based behemoth Toys ‘R’ Us, to see the impact of failing to embrace modernisation programmes in time.
By Sean Farrington, Senior Vice President, EMEIA, Pluralsight.
Despite these unfortunate stories, many businesses are finding new ways to embrace technology and stand out from the competition, be it through automating manual processes, as British Gas is doing by replacing traditional gas readings with smart meters, or by championing convenience, as online supermarket Ocado does. These are effective product strategies, but to be truly successful in the long term, a company’s entire business proposition must be underpinned by technology. This means that digital transformation cannot be seen solely as a management and investor initiative but must be ingrained across the entire workforce. It’s crucial that employees learn and engage with the latest technologies, always with the goal of delivering new innovations to market.
McKinsey suggests that the demand for new skills will dramatically increase by 2030, requiring a 55% rise in the supply of equipped workers. Hiring alone will not close the gap, however, as there are simply not enough developers to fill all the open jobs. Further compounding the problem is the fact that, because technology moves so fast, today’s engineers, coders and developers are constantly behind the curve, which leaves a global technology skills gap. In fact, according to the Economist Intelligence Unit, 94% of executives say there is a digital skills gap in their businesses, and 59% of IT employees worry that their current skills will become obsolete.
The challenge is massive, but meeting it head-on is achievable. To do so will require an almost complete reskilling of the workforce. However, the last time there was such a significant shift was the industrial revolution, so, as a society, we’re a little out of practice. To fully realise the potential opportunity, businesses must rethink how to develop technology skills within their organisations in order to deliver on their business objectives.
The need for continuous technology skill development
First and foremost, businesses should adopt a culture of learning to confront this new demand for skilled workers. According to a recent report from Deloitte, organisations with a strong learning culture are 56% more likely to be first to market. They will also outperform the profitability of their peers by 17%.
Aside from just being a smart business strategy, fostering a culture of ongoing technology skill development is the only way to keep pace with the speed of technological innovation and digital transformation. Decades ago, there were only a handful of programming languages. Today, there are more than 250, and they’re constantly being updated, sometimes more than eight times per year. In fact, Java had five updates in the last year and PHP had 20.
McKinsey finds that IT, programming and data analytics will become the most sought-after skills over the next three years. You can add to the list, cloud, security and mobile. And you can bet new skills will keep emerging that companies need to identify and stay ahead of to remain competitive. And so, developing the needed technology skills has never been so important to meet demand for these skills and ensure future business success.
Finding alternative routes to technology skills development
In-person classroom learning has been a staple for teaching engineers and developers the latest technologies for decades, but it is outdated and ineffective. Leaders will find it costly in both time and money, and it doesn’t scale. It’s also inflexible, as it assumes a one-size-fits-all approach without taking into consideration the individual skill levels of team members.
Employees learn best in a comfortable and supportive environment. They want to understand where their current skill level is, have a clear idea of progression, and personalisation for their development and goals. They also want to be empowered with the freedom to venture out and learn new technologies, frameworks, and tools that, while not necessarily assigned to their role, could unlock possibilities for their organisations in the future.
To take full advantage of this opportunity, employers need to look towards digital on-demand technology learning platforms to provide the type of learning enrichment employees need. These platforms combine skill assessments, course libraries, personalised learning paths and analytics to ensure that learners have access to the courses they need and want. They are easily scalable too, with courses taught by world-renowned subject matter experts and on-demand accessibility to learn anytime, anywhere, on any device.
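The combination of skill assessments and personalised learning paths boils down to a gap analysis: compare each person's assessed level against a target and recommend courses for the shortfall. The skills, levels and course titles below are invented for illustration and do not describe any particular platform:

```python
# Illustrative gap analysis: match assessed skill levels against role
# requirements to produce a personalised course list.

REQUIRED = {"cloud": 3, "security": 3, "python": 2}   # target proficiency
COURSES = {"cloud": "Cloud Fundamentals",
           "security": "Security Essentials",
           "python": "Python Basics"}

def recommend(assessed: dict) -> list:
    """Return a course for every skill below the required level."""
    return [COURSES[skill] for skill, target in REQUIRED.items()
            if assessed.get(skill, 0) < target]

# An employee strong on cloud but weak on security, unassessed on Python
print(recommend({"cloud": 3, "security": 1}))
# ['Security Essentials', 'Python Basics']
```

Aggregating these per-person gaps across a workforce is how a platform can benchmark an organisation's overall skills gap in the way the article describes.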
While there are obvious direct benefits to employees to hone their existing skills and skill up in new ones, importantly, on-demand technology learning supports company goals as well. Through personalised measurement tools, employers can understand their organisation’s skills gap and benchmark their workforce against industry standards, addressing learning needs in an efficient and targeted manner. As a result, a digital technology learning platform can help companies identify the latest technologies and then train their workforce with the right skills to keep pace with innovation and execute on their core objectives. This in turn better positions companies for profitability and competitive advantage.
We’ve entered an age where your technology strategy is your company strategy—it’s no longer a nice to have but a need to have. By adapting their business proposition to one grounded in technology, organisations can deliver innovation at scale and better meet customer needs. But in order to do so, they need technology teams that are equipped with the right technology skills to deliver against these goals.
Hiring new talent is a challenging avenue to pursue, and the well-documented skills shortage means that it’s more competitive than ever to recruit and retain technology professionals. Companies can overcome this obstacle by first embracing a model of continuous learning and then deploying technology learning platforms to ensure their existing teams have all the skills they need to thrive in their roles. Employees can be taught in bite-sized chunks, learning a new skill and putting it into practice before returning to learn more, supporting talent mobility programmes that shift and deploy team members strategically. By embracing a twenty-first-century approach to developing technology skills and empowering teams with access to on-demand technology learning platforms, CEOs can successfully navigate the skills mismatch and keep themselves at the forefront of innovation and at the heart of the customer.
Mobile is about to get faster, smoother and better with 5G. It is a more capable cellular standard that has positive implications for the Internet of Things (IoT). As the demand for data increases, 5G mobile networks are set to take on a support role by connecting elements of almost every business, allowing enterprises to offer new and better services, and shape new business models. A key advantage that IoT gives to business, which will be enhanced by 5G, is the freedom to work anywhere, breaking free of physical location. This allows enterprises to become more responsive, flexible and functional and business assets, be they objects or people, can be allocated without compromising the end result - improving outcomes and enhancing business relationships.
The new 5G platform will enable many companies, businesses and organisations to optimise their services and efficiency in a manner previously undreamed of. The current exodus from PC to mobile devices will be fuelled, and enterprises have no choice but to respond. This change is likely to be particularly visible in some sectors that are currently poised for IoT disruption, including healthcare, manufacturing, retail, and transport and logistics. In healthcare, for example, IoT is beginning to take off, but more importantly, its potential for future application is vast. From mobile devices and wearables that allow patient and physician to track health, to the provision of services like telemedicine and mobile clinics to populations in remote locations, the scope is massive and so are the potential benefits. If a patient’s wellbeing can be maintained within their community, then the dangers of cross-infection and overcrowding in clinical facilities can be reduced, leading to better health, happier patients and staff, and reduced costs.
To summarise, the direction of travel is clearly for businesses to operate through mobile first technology, and the advent of 5G will greatly enhance the quality of service and opportunities this provides. However, the 5G rollout will also present challenges, such as GDPR compliance and infrastructure upgrades. Savvy enterprises are already strategizing their use of IoT in the 5G era and acting upon those plans. Whatever you do, make sure your business doesn’t get left behind.
For the enterprise, 5G networks will add fuel to digital business transformation efforts already underway, according to Paul Griffiths, Senior Director, Advanced Technology Group, Riverbed Technology:
Crucially, they will provide companies with vital insights as well as accelerating the speed at which business learnings from Artificial Intelligence, Machine Learning, and predictive analytics are applied across the organisation.
Furthermore, the integration between 5G and wireless (WiFi) connectivity will facilitate a more agile remote workforce, augmenting fixed network connectivity to improve user experience and application performance. Companies that shift to a software-defined networking approach to take advantage of these higher speeds will realise these benefits faster.
Businesses dependent on mobile technologies will benefit from access to the higher speeds and increased bandwidth that can be offered on a 5G network. But, like most other networks, 5G operates on a shared infrastructure, so performance will not be guaranteed. However, network optimisation can provide a more predictable user experience. Speed is a huge competitive advantage, so many businesses may opt to shift more, or all, of their operations from wired to wireless hardware options to make use of 5G.
A particular use case is with enterprises that work in the logistics space, as they will have the potential to benefit from superior autonomous vehicle networks. 5G will also be essential for autonomous vehicles, which will need to be able to communicate with the world around them (i.e. traffic points, insurers, vehicle diagnostics, mobile phones, other vehicles) as quickly as possible. As such, businesses will need to adapt and potentially have their own network aligned, which would help them gain near real-time insight into fleet diagnostics, telemetry and location.
Although the broad deployment of 5G may still be 3-5 years out, when it comes, the increase in bandwidth, providing more capacity and higher speeds, will lead to more data running across networks than ever before. With more data being transported comes the need to understand where it is coming from and who is sending it.
It is the end-to-end understanding and assurance for proper operation of the ‘system’ which delivers these services that are critical to everything working smoothly, especially with mission-critical services such as driverless vehicles. Monitoring solutions that provide visibility and ensure the performance of the end-to-end communication path will become vital to delivering real business benefits.
Adam Nickerson, Principal Consultant at FarrPoint, comments:
Many people question what 5G will bring over previous generations of mobile connectivity. We expect to see higher data-rates, low latency and more reliable connections enabling more critical uses, such as roll-outs that form part of Smart Cities programmes.
There are two main types of workers and work places that will be impacted by 5G and we think this will start happening from 2020 onwards. Remote field operatives or mobile workers in, for instance, transport and logistics, construction, engineering, and health or blue light sectors will benefit massively from enhanced mobile connectivity.
Lots of these workers are already equipped with smartphones, tablets or rugged devices for connecting to applications in the cloud but 5G will enable those apps to be richer and more data-intensive, so there will be less need for workers to return to a base or office between jobs. Mobile productivity will be increased as a result.
But traditional, corporate workers will also benefit. The greater speed and capacity of 5G means those employees using 4G mobile broadband or WiFi hotspots today to work whilst travelling or from home will have an opportunity to replicate their office environment from anywhere globally. One simple example is using augmented reality (AR) or holograms for really immersive videoconferencing. 5G could support meetings driven by AR, where a worker sitting on a moving train, equipped with a smart contact lens, can interact with colleagues as an avatar, as though they were there in the same room. This would open the door to richer, more collaborative remote working practices between different sites and nomadic workers, and to the emergence of virtual offices, impacting the corporate property market.
Thinking more broadly in terms of the enterprise IT environment, 5G will form a key part of hybrid corporate networks, which will be more available, have further reach, additional capacity and greater reliability, and in some ways be more intelligent due to being agnostic of geography. Essentially the network can be pushed further out towards where the user or consumer of IT works or wishes to work, creating a distributed environment with localised compute and storage in mini versions of the cloud, ensuring performance is optimised. As a result, enterprises will have less and less reliance on traditional data centres and should therefore review their IT infrastructure, hosting and cloud approach to accommodate this.
5G technology is likely to hold the key for mobile operators to maintain a strong position in the industry, says Philip Mustain, CEO, Co-Founder, Mobolize:
Operators are experts in meshing together different spectrum frequencies for seamless connectivity across their networks, making 5G and IoT a reality. The gold dust comes when there is seamless connectivity across cellular and Wi-Fi networks, which is more important now than ever in the workplace, with remote workers and large enterprises occupying big offices that need connectivity throughout.
Employees frequently move between Wi-Fi and cellular networks and can become key champions for 5G if mobile operators maintain – and even boost – the customer experience.
These employee champions are users with a mobile device in-hand and who move to meetings throughout a city or region; business travellers in airports, train stations or riding electric scooters; employees on the job roaming out of their operator network region into another region; or those just taking a coffee break to check emails or connect with family.
The benefits of faster networks and continuous connectivity should be maintained for the workplace mobile user as well. Smart connectivity enables mobile users to have a smooth handoff between Wi-Fi and cellular networks, meaning they don’t get caught in “dead zones” where smartphones cling to a Wi-Fi signal, making cellular connections impossible. Maintaining Wi-Fi security is a growing concern and a good service can be a loyalty hook. As video increases in use and operators begin to own and promote content, smartly optimising data for efficient use becomes an advantage to both the user, who enjoys unthrottled video, and the operators, who can better manage their network.
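The handoff problem described above — devices clinging to a dying Wi-Fi signal instead of switching to cellular — can be sketched as a simple decision rule. The signal thresholds here are illustrative assumptions, not values from any real connection manager:

```python
# Toy handoff logic: switch to cellular once the Wi-Fi signal drops
# below a usable floor, instead of clinging to a dying access point.

def choose_network(wifi_dbm: float, cellular_dbm: float,
                   wifi_floor: float = -75.0) -> str:
    """Prefer Wi-Fi while its signal is usable; otherwise hand off."""
    if wifi_dbm >= wifi_floor:
        return "wifi"
    return "cellular" if cellular_dbm >= -110.0 else "none"

print(choose_network(-60.0, -90.0))  # wifi
print(choose_network(-85.0, -90.0))  # cellular
```

Real smart-connectivity software adds hysteresis and throughput probing so the device does not flap between networks, but the core idea is this kind of signal-quality comparison made continuously on the device.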
The industry’s progress to 5G in the workplace is important and exciting, but there’s a key need to support and enable the workplace mobile employee on the path to 5G, and upon its arrival. The workplace mobile user can’t become a forgotten second-class citizen as operators focus on using 5G to enable smart cities.
Thoughts from Iain Sherman, managing director of National Network Services at KCOM:
1) What are the benefits of 5G to the workplace?
5G will provide businesses with a blend of low-latency communication, enhanced mobile broadband and high-volume connectivity, so it’s clear that 5G technologies will play a big part in delivering benefits to the workplace. Examples include working in the cloud, UHD conferencing, automation, smart offices and new business concepts such as augmented reality.
2) How will 5G impact the IT and data centre environment?
One major deliverable of 5G is low latency, and it’s this that inevitably drives the requirement for cloud services to sit much closer to the edge. Along with the predicted explosion in endpoints and highly distributed throughput, the impact on the data centre environment will clearly be massive.
3) How will 5G shift the way traffic and data is managed?
5G combined with Software Defined Networks (or 5G SDN) will generate new business models and deliver new growth areas for enterprises because such an architecture will be dynamic, highly manageable, and cost-effective, making it perfect for the dynamic, high-bandwidth nature of 5G use cases.
The humanoid robot has been the go-to concept in science fiction for decades – but as the technology appears to be on the cusp of becoming a reality, have other areas of innovation already overtaken the vision? Indeed, are AI and machine learning, theoretically core solutions in the humanoid concept, actually making the humanoid robot redundant before it even comes into being? Nick Thompson, Managing Director, DCSL Software, asks: in a world of self-driving cars and intelligent Personal Assistant technology, why would we need humanoid robots?
Science Fiction Stalwart
The idea of the human-like robot continues to intrigue, from the film Blade Runner to the Channel 4 drama Humans. And in a developed world struggling to find people willing to undertake low-paid, mundane tasks, the idea of a non-sentient, yet familiar-looking, entity able to pick up the slack offers theoretical appeal. But just consider: in a world increasingly dominated by automation, by sensors and intelligent technology leveraging extraordinary innovation including computer vision and augmented reality, what tasks are these humanoids still required to undertake?
Why, for example, will you need a humanoid robot to drive you to work when self-driving cars are ubiquitous? Especially when self-driving cars will not only be far safer but also enable a complete rethink of the road network, paving the way for less concrete and a re-greening of the environment. Where is the need for an automaton to undertake basic household chores when intelligent houses will automate so many of those tasks, from the use of IoT to re-stock the fridge to leveraging the progress in computer vision to recognise people, objects and locations and undertake washing and cleaning, gardening and decorating?
Even the concept of a humanoid ‘Man Friday’ is already outdated. Google’s Intelligent PA technology can make phone calls and book appointments on your behalf – all while sounding extraordinarily human-like; why wait for a robotic humanoid when a cloud based algorithm is already doing the job?
Even the companionship aspect of the ‘Man Friday’ relationship doesn’t stack up in reality – despite the compelling potential opportunities for using humanoid robots to solve the current crisis in both caring and companionship.
If this technology were available today, it would be perfect: the current generation of elderly people, who tend to resist direct interaction with technology and automation, would be far less distressed, especially those with dementia, by a humanoid robot carer indistinguishable from an actual human.
But this degree of actuality is way off – and by the time a humanoid robot carer has the maturity to safely meet this need, everything will have changed again. From in home health monitoring and strides forward in disease prevention and management, to the essential shift in attitudes to technology, reality will be very different. The younger generation is already completely inured to the concept of interacting with technology, so why the need to dress it up with a human-like wrapper?
The truth is that automation through IoT, AI and machine learning is already addressing the roles we perceived for humanoid robots, way before the robotic technology has reached the required level of maturity.
Find the Why
So, what next for humanoid robots – are they destined to be consigned to the scrap heap before even hitting the market?
There are, however, plenty of opportunities for the extraordinary innovations within the field of robotics to be embraced. Rather than creating complete humanoid robots, the ability to replace individual limbs with robotic alternatives is extraordinary. When today’s robots already have the most amazing mobility, agility and speed, the outlook for individuals who have lost limbs to regain dexterity is compelling.
From building houses with more reactive technology – such as laundry chutes into intelligent washing machines that can wash, dry and fold or intelligent lawn mowers that can not only mow but weed and clear up after the dog, to rethinking the way we travel to reduce congestion and air pollution and transforming the lives of those with damaged bodies through robotics, our focus must be on the why, on solving problems that truly exist, not fulfilling a science fiction fantasy.
For millennia, the world economy grew incrementally and slowly, based on population growth and increasing trade across distances. Conversion of raw materials to finished goods was achieved through manual labour and processes – often through trial and error that could take centuries. After nearly 5,000 years of recorded history, the Industrial Revolution changed everything. Businesses that deployed factories and machinery, otherwise known as physical capital, achieved significant leaps forward in production. Productivity and output leapt forward, and the world got a little smaller.
By Kara Sprague, SVP and General Manager, ADC, F5 Networks.
By the 1900s, the explosion of service-based industries meant that for many businesses the measure of corporate performance shifted to people, or human capital. Today, we’re seeing another major leap forward as more and more organisations embark on a digital transformation of their business, and increasingly the value of the modern enterprise resides in its applications and data.
It’s not difficult to argue that applications are, in fact, the most important asset of the digital enterprise. Consider a couple of examples: Facebook has no material capital expenses beyond $15 billion a year in computing infrastructure and just under 30,000 employees – but has an application portfolio valued at more than half a trillion dollars. That’s larger than the GDP of all but 26 countries in the world. Netflix has no material capital expenses and roughly 5,500 employees – with an application portfolio valued at $175 billion. To put that in context, Disney – among the world’s most iconic brands, operator of massive theme parks, and owner of a vast media empire – is valued lower, at $160 billion.
Prior to F5, I spent 15 years at McKinsey preaching to clients that an organisation’s most important asset is its people. No longer. We’re in the era of Application Capital.
Mid-size organisations generally have several hundred applications in their portfolio. Some large banking customers I’ve met with have upwards of 10,000. And yet most companies I ask have only an approximate sense of the number of applications in their portfolio. Ask them who owns those applications, where they are running, and whether they are under threat, and the answers get a little fuzzy. No doubt these same companies have invested heavily in the management of their physical and human capital, but unfortunately the same cannot yet be said for their applications.
The implications of this are staggering. Security, consistent policies, compliance, performance, analytics, and monitoring (to name a few) are each complex, expensive, and competitive issues for an increasing number of companies with apps spread across a dizzying combination of data centres, co-los, and public clouds.
In our latest customer research, nearly nine in ten companies reported using multiple clouds already, with 56% saying their cloud decisions are now made on a per-application basis. If you extrapolate, you can imagine hundreds of permutations in which companies’ apps have widely varying levels of support.
The implications leave many valuable corporate assets poorly supervised at best, and vulnerable to malicious attack at worst. Given the enterprise value attributable to applications, it won’t be long, in my opinion, before more companies finally start devoting a commensurate level of energy and resources to managing and monitoring their application portfolios.
Principles for an Application World
So how do we get there? When I talk to customers, I often focus on three core areas – principles to help them maximise the value of their application capital. These principles are neither unique nor inconsistent with how businesses managed capital in both industrial and services-based economies. The challenge is applying them, in the digital age, to the development and management of our applications. How do we take the rigor and discipline that have been ingrained in us around the management of physical and human capital and apply it to this new context?
1. Focus your developers on differentiation. In the realm of physical capital, manufacturers deploy that capital to create global supply chains with a precision and efficiency that becomes an asset for their business. In the digital age, this means the right people should be doing the right work to accelerate time to market for applications and maximise investments. Developers should be empowered to focus on delivering business value, unencumbered by concerns about availability, stability, security, or compliance.
2. Choose the best infrastructure for the application. Just as different occupations feature specialised work environments – consider chefs, architects, athletes – applications too have a natural habitat. One size does not fit all – work with the vendors and partners that best meet unique needs. Vendor lock-in is a thing of the past. Open architectures, APIs, and commoditisation of infrastructure now mean that customers have the power to choose an almost infinite mix of solutions, services, and even features to build, deploy, and support their application infrastructure.
3. Use consistent application services across your portfolio. Industrial companies managed regular maintenance of machinery and ensured the physical security of their factories. Services businesses invest heavily in HR and corporate wellness programs in order to retain critical talent. Applications need services, too. However, services that support the delivery and security of applications can often add complexity and are applied inconsistently or not at all. Application services should be low-friction, easy to obtain, and efficient to manage across increasingly complex and sprawling application portfolios.
Application capital is already the primary driver of differentiation and value creation for modern enterprises. Yet few are devoting the appropriate level of energy and resources to managing and monitoring their application portfolios.
The effective management of this application capital is what will propel the next Amazon, Google, Microsoft, or Netflix. Not how many physical assets they deploy in their infrastructure, warehouses, or showrooms; nor even how many employees they amass. The real competitive differentiator will be found in their applications. Applications will drive the fastest growing revenue streams, creating significant shareholder value. Applications will drive community value as the most sustainable shared service. And most importantly, applications will attract the best talent, representing the most interesting and rewarding of work.
Part 6 discusses how 5G will impact the data centre and the wider IT environment
Michael Winterson, MD, Equinix Services, comments:
5G will require a move away from the traditional, centralized architecture model that redirects traffic from the user to a far-away corporate data centre. One of 5G’s main offerings is extremely low latency, but this is largely dependent on data centres being located near either the user or the cell towers – otherwise requests will simply be slowed down. As a result, the 5G world will be reliant on high-density deployments at the edge. This is the only way to ensure the low latency 5G promises.
Latency has always been an important consideration for data centres, but with the roll out of the new mobile network it will become an even more important one.
This means that 5G will have a massive impact on the evolution of emerging technologies, and this will be particularly true of edge computing. For instance, in order for 5G, with its increased bandwidth (up to 10Gb/s), to support applications such as AR and VR, deployments at the edge will be needed to send the data to local macro towers and small cells. This data can then be computed nearer to the user and sent back as close to real time as possible. This will significantly reduce latency and eliminate lag in such applications.
5G will open up endless possibilities for digital transformation, especially in an IoT and AI connected world. To attain the high radio density required for 5G, network operators are looking to optimize costs through the use of open-source commodity networking hardware and the virtualization of the wireless networking stack. These efforts will pave the way for Edge architecture to solve for “Cloud Radio Networks” that power several radios through pools of virtualized network software.
Equinix anticipates investments in 2019 in the revamping of existing cellular network infrastructure and the building of new Edge infrastructure, as well as innovation in disaggregated cloud-based hardware (being referred to as "Mobile Edge Compute") for performance and cost optimization.
Jon Leppard, Director, Future Facilities sees a possible reversal of the data centre consolidation trend:
In recent years, the primary trend associated with data centre construction has been towards consolidation, leading to large, hyperscale facilities in geographically remote areas. However, the development and launch of 5G could be set to reverse this.
The forthcoming mobile technology will result in much larger volumes of data being created, transferred and stored in real-time. This means facilities will need to be positioned in much closer proximity to end users, in order to avoid latency and efficiency issues for data-intensive activities, such as playing computer games or live-streaming video.
The typical latency of today’s infrastructure would cause nausea in a VR environment and likely be responsible for your drone delivery crashing to earth. And perhaps the most significant challenge this poses for data centre providers is sourcing and securing real estate for the thousands of micro-sites that will need to be created, primarily across densely packed urban and suburban locations.
As competition for any such available space is likely to increase, telecoms providers that can offer housing for edge data centres at the end of residential streets may find themselves to be the prime beneficiaries of this new trend. In fact, forward-looking data centre providers such as Vapor IO are already forming partnerships to convert mobile base stations into edge data centres. Going forward, organisations in other sectors that find themselves with a surplus of urban real estate could potentially offer it to data centre providers, leading to a “DC Airbnb” of sorts.
However, it is important to note that this increased demand for edge data centres will not necessarily render current facilities obsolete. In many cases the best solution will be a combination of the two. This will require a tailored approach as to whether compute is done at the edge or back at base. Providers will need to adapt their operating strategies accordingly, to incorporate a broader footprint of sites and ensure facilities are optimised to maximise the potential benefits that 5G can offer.
Kevin Hasley, Executive Director at IHS Markit, comments:
In the enterprise, 5G will enable faster and cheaper network upgrades when using fixed wireless access. In this case, carriers would use 5G connectivity to replace a physical connection (such as fiber). Several carriers are looking at using 5G to solve the costly management, maintenance, and upgrade challenges associated with the portion of the telecommunications service that physically reaches the enterprise's premises.
For data centers and IT, the effect of 5G will be felt much quicker than many predict. Current applications place strain on network infrastructure, but 5G will change how we build and enhance data centers in the future. In fact, IHS Markit views 5G as a catalyst that will thrust mobile technology into the exclusive realm of GPTs (General Purpose Technologies), a technology that has profoundly changed industries and economies.
Unlike its predecessors though, 5G is a technological paradigm shift, akin to the shift from typewriter to computer. And it isn’t just a network. 5G will become the underlying fabric of an entire ecosystem of fully connected intelligent sensors and devices, capable of overhauling economic and business policies, and further blurring geographical and cultural borders. It will be capable of delivering at every rung of the ecosystem’s ladder, and will provide seamless, continuous connectivity for business applications.
All industries and workers will feel the effects of the shift to 5G. In particular, automotive, health care, and the Internet of Things (IoT) are expected to bring about dramatic transformations in our daily lives. For example, think about the relationship between the smart city and an autonomous car. With a 5G connection, your car will know your ETA at work, taking the optimal route based on traffic data communicated from other cars and the roadways. While a handful of companies are working on this level of automation, the ability to deliver this type of functionality at scale will require the marriage of intelligent devices and the 5G network.
R. Ezhirpavai, Vice President for Technology at Aricent, makes the following observations:
1. What are the benefits of 5G to the work place?
The main benefits will be coverage and bandwidth for data-intensive applications. Employees should of course notice faster download speeds over their 5G handsets. In terms of connected devices (IoT) etc., there will be a range of applications with different bandwidths for connectivity. That’s because 5G offers services over (mainly) three different types of band: low band (below 1 GHz), mid band (1 GHz to 6 GHz), and high band (millimetre wave, above 24 GHz). This is important to note because 5G enterprise applications and services will be offered by various operators in all three bands – and each has different coverage implications for the workplace.
China, India and some European countries will have a mid-band service. In the US and Canada, it will be a mix of low band and high band. Low band extends a coverage area, while high band offers capacity. High band struggles to penetrate buildings, so a number of operators are looking at using small cells for enterprises, which require higher bandwidth. Operators are planning to install high-band small cells with massive MIMO (multiple input, multiple output) antennas to minimize interference. This will provide enterprise users not only with high bandwidth but also with better coverage.
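As a rough illustration, the band split described above can be captured in a few lines. The thresholds come straight from the figures in the text; the function name and labels are invented for this sketch, and the real 3GPP band plan is considerably more fine-grained:

```python
def classify_5g_band(freq_ghz: float) -> str:
    """Map a carrier frequency (in GHz) to the three 5G band types
    discussed above. Simplified: real deployments use numbered
    3GPP operating bands, not bare frequency cut-offs."""
    if freq_ghz < 1.0:
        return "low band (coverage)"
    elif freq_ghz <= 6.0:
        return "mid band (balance of coverage and capacity)"
    elif freq_ghz >= 24.0:
        return "high band / millimetre wave (capacity)"
    else:
        return "outside the three bands in this simplified model"

# e.g. 700 MHz (low), 3.5 GHz (a common European mid-band
# allocation) and 28 GHz (mmWave):
for f in (0.7, 3.5, 28.0):
    print(f, "GHz:", classify_5g_band(f))
```

The same split explains the coverage trade-off in the paragraph above: the lower the frequency, the further the signal reaches and the better it penetrates buildings.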
Currently, enterprises commonly use Wi-Fi for their broadband. Since Wi-Fi is unlicensed, there is signal degradation and even security risk. In order to provide reliable, secure and cost-effective broadband connectivity, a number of large enterprises are considering 5G delivered over the CBRS (Citizens Broadband Radio Service) band.
5G offers higher uplink data throughput than 4G. What this means for the workplace is that 5G can be used in applications that require high quality images. For example, to monitor a production line, images are captured and sent over 5G to a private network for video analytics with AI to verify the quality of the production line. 5G can also be used for surveillance cameras.
2. How will 5G impact the IT/data centre environment (ie more/different traffic etc.)?
The telecoms body, 3GPP, has defined 5G architecture as a service-based system and all core network elements can be deployed as microservices for “Network Slicing”. This makes it ideal for enterprises who need their own unique flavor of 5G for their business applications, such as connected fleets, IoT sensors etc. Multiple slices have different resource requirements in IT and data centers. Data centers play a critical role allowing slices to be scaled up and down based on capacity. Data can flow through Software Defined Networks (SDN) to optimally use the network resources. This allows easy scaling, adding new services, optimizing network resources, thus giving overall capex benefits to network operators – and ultimately, for the enterprise.
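To make the scaling idea concrete, here is a minimal sketch of per-slice resource allocation. The slice names and resource figures are invented for illustration and do not come from any operator's configuration; the point is simply that each slice carries its own resource profile and scales independently with load:

```python
# Hypothetical slices with per-slice data-centre resource baselines.
slices = {
    "enhanced-mobile-broadband": {"vcpus": 64, "gbps": 10.0},
    "massive-iot-sensors":       {"vcpus": 8,  "gbps": 0.5},
    "connected-fleet":           {"vcpus": 16, "gbps": 2.0},
}

def scale_slice(name: str, utilisation: float) -> dict:
    """Scale one slice's resources up or down with demand, as the
    article describes: capacity follows per-slice load, within
    clamped bounds so a spike cannot starve the other slices."""
    base = slices[name]
    factor = max(0.25, min(2.0, utilisation / 0.5))  # 0.5 = nominal load
    return {k: v * factor for k, v in base.items()}

# An IoT slice at nominal load keeps its baseline; a busy fleet
# slice is scaled up to its cap, all without touching the others.
print(scale_slice("massive-iot-sensors", 0.5))
print(scale_slice("connected-fleet", 1.2))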
Schneider Electric’s Data Center Science Center recently calculated that typical data center physical infrastructure energy losses have been cut by 80% over the last 10 years. This has been enabled by improvements in UPS efficiencies, cooling technologies (e.g., economization) as well as cooling practices (e.g., air containment). Data centers are cheaper now, too, on a $/watt basis. The big question is how influential Artificial Intelligence and Machine Learning will be in continuing this trend of increased performance at lower cost.
By Patrick Donovan, Sr. Research Analyst Data Center Science Center IT Division, Schneider Electric.
Artificial Intelligence (AI) and Machine Learning (ML) are two terms often used interchangeably or considered to be synonyms. Simply put, AI refers to the concept that a machine or system can be “smart” in carrying out tasks and operations based on programming and the input of data about itself or its environment. ML is fundamentally the ability of a machine or system to automatically learn and improve its operation or functions without human input. ML could therefore be thought of as the current state-of-the-art for a machine with AI.
Much of today’s Data Center Physical Infrastructure equipment incorporates some form of AI. UPSs, cooling units, etc. have programmed firmware and algorithms that dictate how the equipment operates and behaves as conditions change. For example, cooling control systems actuate valves, fans, and pumps in a coordinated, logical way to achieve user-defined set points as environmental conditions change over time.
In addition, like most IoT infrastructure, power and cooling equipment is equipped with sensors. These devices collect a large amount of useful data about the machines and their environment. This information can be used to determine machine operations and its responses to emerging conditions and events. It can also be used by smart systems like Building Management Systems (BMS), Power Monitoring Systems (PMS), and Schneider Electric’s StruxureWare for Data centers™ Data Center Infrastructure Management (DCIM) software to extract useful insights about data center status and trending such as capacity, reliability, and efficiency.
ML in data centers is an exciting new concept which is currently being researched by manufacturers including Schneider Electric. The company believes that increasing the intelligence and automation of physical infrastructure equipment and management systems, and integrating it with the IT load, will serve to make data centers more reliable and efficient both in terms of energy use and operations.
Laying the foundations for this advance, Schneider Electric has recently made some big announcements including its EcoStruxure for Data Centers™ system architecture. The system comprises three levels: connected products, edge control software, and cloud-based apps/ analytics/ services. It leverages the benefits of IoT, cloud, and Big Data analytics to deliver unprecedented insight into data center operations, maximizing the value of data to deliver improved data center security, reliability, efficiency, and sustainability.
The difference is DMaaS
An important component of Schneider Electric’s EcoStruxure offer has been the launch of Data Center Management as a Service (DMaaS). An integrated portfolio of both hardware and software solutions, DMaaS enables optimization of the IT layer by simplifying, monitoring, and servicing data center physical infrastructure from the edge to the enterprise. It utilizes cloud-based software for DCIM-like monitoring and information analysis, offering real-time operational visibility, alarming and shortened resolution times.
Although DCIM tools have previously been made available on a Software-as-a-Service (SaaS) basis, DMaaS differs from this model in a number of ways, simplifying the process of implementing monitoring software throughout the facility. Once monitoring is underway, the service aggregates and analyses large sets of anonymized data directly from data center infrastructure equipment via a secure and encrypted connection. This data can be further enhanced using big data analytics, with the primary goal of predicting and preventing data center failures, foreseeing service requirements and detecting capacity shortfalls.
This is useful for data center managers with resource limitations because, according to a report carried out by 451 Research*, DMaaS “ties remote cloud-based monitoring into maintenance and fix services, enabling a full-service business model for suppliers.” It therefore opens a doorway allowing new and additional smart eyes on the infrastructure (from a service provider’s network operations center) to support a customer’s internal team. It also opens the way to the development of new offerings from service partners, from energy management to proactive maintenance. Again, for those with resource constraints, the ability to have full insight into data center infrastructure and the IT load enables intelligent, data-driven support to be provided.
The greater breadth and depth of data that can now be captured from IoT enabled equipment increases the capability of DMaaS compared with earlier service models. The value of data is multiplied when it is aggregated and analysed at scale. By applying algorithms to large datasets drawn from diverse types of data centers operating in different environmental conditions, the goal of DMaaS will be to predict, for example, when equipment will fail, and when cooling thresholds will be breached. The larger the dataset, the smarter DMaaS becomes with every iteration.
The report from 451 Research goes on to say that having more data about the performance of specific equipment in specific environments (temperature, humidity, air pressure), will enable predictions to become more accurate over time. It predicts that in the not-too-distant future, increased data center automation will be made possible as well as full remote control as part of DMaaS-driven services, e.g., automatically switching UPS to eco-mode when utilization is low, directing IT load away from areas of potential failure, and power capping and shedding.
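As a toy illustration of this kind of prediction – assuming nothing more than a linear extrapolation over recent sensor readings, which is far cruder than the large-scale analytics the 451 Research report describes – predicting a cooling-threshold breach can be reduced to a trend estimate:

```python
def predict_threshold_breach(readings, limit, window=5):
    """Estimate how many sampling intervals remain before a sensor
    reading (e.g. inlet temperature) crosses `limit`, using the
    average rate of change over the last `window` readings.
    Returns None if there is no upward trend. Illustrative only."""
    recent = readings[-window:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    if slope <= 0:
        return None  # flat or falling: no breach predicted
    return (limit - recent[-1]) / slope

# Invented inlet temperatures (deg C) trending upwards towards a
# hypothetical 27 deg C alarm threshold:
temps = [22.0, 22.4, 22.9, 23.5, 24.2]
print(predict_threshold_breach(temps, limit=27.0))  # intervals remaining
```

A real DMaaS platform would fit such models across anonymised data from many facilities and equipment types, which is precisely why, as the report notes, predictions improve as the dataset grows.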
In other markets, the emergence of IoT technology and use of big data has also been the stimulus for the introduction of innovative business models. A potential capability of DMaaS is to enable service suppliers and manufacturers to bundle monitoring and management services into lease agreement for data center infrastructure equipment to deliver asset-as-a-service offerings. With this type of DMaaS enabled service, the supplier maintains ownership and charges for operation service. 451 Research believes that this might be especially interesting for highly distributed IT deployments and edge data center portfolios.
Right now, it’s important to say that AI is not going to solve all current data center challenges. It will not magically transform an old traditional data center into a cutting-edge site with a perfect PUE and availability record. The fundamentals and best practices of data center design and operation will still be crucial to success. However, the gains it can bring through DMaaS are a good starting point, and we can expect that, as future developments in AI and ML are applied in the data center, they will build on and provide incremental value to the major performance improvements gained over the last decade.
IT plays a significant role in the success of most businesses, and more CIOs are citing digital transformation as their number one priority. In fact, over half (56%) of organisations today operate with a cloud-first mentality. This is good news for cloud service providers (CSPs) as IT spending on public cloud is set to grow from 12% in 2017 to 18% within the next two years, according to our Truth in Cloud study.
By Peter Grimmond, Head of Technology, EMEA, Veritas.
This trend is likely to continue as more organisations plan to increase the workloads they have across multiple cloud platforms. With a growing number of apps and services available, and the benefits cloud offers – agility, ease-of-use, time-to-provision – it is easy to see why cloud adoption is on the rise. And business leaders are realising the potential of using multiple cloud platforms: nearly six in ten businesses (58%) that currently use one cloud provider plan to expand their portfolio across multiple platforms.
Using multiple cloud providers enables businesses to benefit from provider-specific capabilities, to optimise costs and to improve resilience. But organisations must pay close attention to selecting CSPs that are right for their business and their specific IT requirements.
Many organisations have moved past choosing a cloud provider based primarily on cost. Rather, the areas that are of most importance to organisations when it comes to selecting a CSP include data privacy, security and compliance (60%), workload performance (49%) and workload resilience or uptime (43%).
It’s encouraging to see organisations building true multi-cloud strategies as they strive to achieve digital transformation. But as they increasingly distribute their data across multiple clouds to advance their cloud journey, it is critical that enterprises understand exactly who has the responsibility for their data in the cloud.
Know your responsibility
Worryingly, many organisations are making incorrect assumptions about the data management capabilities offered by their cloud service provider, leaving themselves exposed in multiple areas. The vast majority of businesses (83%) believe that their CSPs take care of data protection in the cloud. More than two-thirds (69%) expect their cloud provider to take full responsibility for data privacy and regulatory compliance of the data held on their platform, with three-quarters (75%) of businesses even saying they would leave their cloud provider as a result of data privacy non-compliance.
Yet cloud service provider contracts usually place data management responsibility onto the service user. Organisations need to understand clearly their retained responsibilities in order to avoid the risks of non-compliance, which can have massive implications on their business.
With the EU General Data Protection Regulation (GDPR) now in force, businesses can’t afford to mishandle their data, since they will be the ones who face the repercussions, regardless of whether their data is stored on their own private servers or hosted on a third-party cloud platform.
Our research also reveals dangerous misconceptions around the responsibility for cloud outages, with six in ten (60%) respondents admitting they have not fully evaluated the cost of a cloud outage to their business and are therefore ill-prepared to deal with one. A similar percentage (59%) also believe that dealing with cloud service interruptions is the primary responsibility of their CSP, while most (83%) believe that their CSP is responsible for ensuring that their workloads and data in the cloud are protected against outages.
Although cloud service providers offer service-level objectives on infrastructure availability, it is ultimately the service user’s responsibility to ensure that their critical business applications remain resilient in the event of an infrastructure outage and that data is protected against loss or corruption.
The information a company holds is its most valuable asset, and businesses must have full visibility into their data and be accountable for protecting it, regardless of where it’s located. Not only will this help avoid the risks of non-compliance or loss of revenue through downtime, but it can also help organisations to glean better insights from their data to improve customer experiences, manage costs, improve research and development, and build brand loyalty.
Journey to the cloud
Despite the challenges that a multi-cloud approach can bring, the opportunities it can provide are far greater.
Using multiple clouds enables businesses to increase resiliency and minimise any risk of downtime: if any of their cloud service providers suffer an outage, businesses can simply switch to another one of their platforms. Our research found that resilience and uptime is one of the most important aspects organisations consider when selecting a CSP. The ability to move workloads between platforms also allows businesses to pick and choose the right CSPs for specific job functions or workloads.
With more companies embracing a cloud-first mentality and introducing multi-cloud approaches into their organisations, the need to navigate the complexities of a multi-cloud world is critical. As with on-premises environments, companies should consider all aspects of data management as they journey to the cloud, from data protection, regulatory compliance and workload portability to business continuity and storage optimisation.
Cloud technology clearly provides significant benefits to businesses, but without strong data management and protection capabilities, organisations will struggle to reap the rewards from multi-cloud investments.
Part 7 discusses the next generation of data centre topology
Data is already in demand and the rollout of the new 5G network will precipitate a massive change in IT infrastructure requirements, according to Ted Pulfer, enterprise consultant at critical environment specialist Keysource.
The hype when 4G first launched was driven by a huge boom in consumer demand for faster mobile networks. At the time, social media had reached the masses and using your phone as a tool for entertainment was becoming increasingly popular. The sheer volume of devices streaming data created a need for faster mobile networks and I think it’s fair to say most people could see the benefit of having faster connection speeds at the time.
The next generation, 5G, is a little different. 4G is well established and speeds of 50 Mbit/s work for most people. No doubt there will be people who ask whether a new cellular roll-out is really that important.
However, as with many leaps in technology, people don’t necessarily realise whether or not they need something until they experience it. A superfast mobile network could be a catalyst for significant change in how people access data and information – both at work and at home – through mainstream adoption of IoT, AI and VR. The revision of the topology of data centres needs to happen at the same time and we need to get serious about edge-computing before 5G outpaces the industry. To do that we need to resolve some underlying challenges the data centre sector has faced for years.
In October, Vodafone became the first British network to trial UK-wide 5G functionality after it won the largest chunk of the 5G spectrum at this year’s auction. The trial is seeing the telecoms giant stream mobile data traffic to and from the internet exclusively over 5G from a site in Salford, Greater Manchester, which is connected to Vodafone’s nationwide converged fibre network. Other providers are also in testing phases in smaller locations and, with Vodafone’s push, they won’t be far behind in their own roll-outs. By 2019, 5G-enabled handsets are expected to be on the market, after which point 4G will feel sluggish by comparison.
Vodafone has already conducted the UK’s first holographic call using 5G spectrum from its Manchester office with Steph Houghton, England and Manchester City women’s football captain. When 5G is all around us, consumers will be able to play 4K resolution games on the go with little to no lag and augmented reality will become even more seamless on mobile.
Now, of course, the things that 5G promises can’t be simply streamed from a single cloud source when you factor in how many devices will be accessing data at the same time. And I don’t just mean smartphones. Potentially dangerous latency issues for driverless cars, for example, can only be resolved through a change to the data centre landscape. In fact, it’s only through an overhaul of the cloud model that the industry will be able to carve out a route for businesses and consumers to enjoy 5G.
Edge-of-network computing, where data centres themselves are multiplied and distributed to act as a localised portal for connectivity, will be the next trend in cloud. Data storage is already shifting almost entirely away from individual devices and, given the increasing amount of data processed daily, cloud infrastructures as we know them are becoming vessels for data alone.
Edge computing is the necessary element in the middle. In simple terms, it's a series of data centres distributed across a broad geography that can bring applications physically closer to users to reduce lag. Packets of data are processed locally and only anomalies are sent to the cloud. The arrival of 5G will make this whole process quicker, especially in remote locations.
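The local-processing idea can be sketched in a few lines: an edge node compares each new reading against its own recent history and forwards only statistical outliers upstream. This is a minimal illustration, assuming a simple rolling-mean threshold; real edge platforms use far more sophisticated filtering.

```python
import statistics

ANOMALY_THRESHOLD = 3.0  # standard deviations from the rolling mean

def anomalies_to_forward(readings, window=20):
    """Return the (index, value) pairs an edge node would send to the cloud."""
    forwarded = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # avoid division by zero
        if abs(readings[i] - mean) / stdev > ANOMALY_THRESHOLD:
            forwarded.append((i, readings[i]))
    return forwarded

# A steady sensor feed with one spike: only the spike leaves the edge node.
readings = [10.0] * 25 + [50.0] + [10.0] * 5
print(anomalies_to_forward(readings))  # → [(25, 50.0)]
```

The payoff is in the data volumes: thirty-one readings stay local, one crosses the network, which is what makes thousands of small distributed sites tractable.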
Many people accept that this boom in edge estates is coming. Yet questions over delivery have so far hindered edge deployments.
The competitive edge
The difficulty in deploying an edge-of-network infrastructure is that it’s an entirely different beast to traditional cloud structures.
One of the major factors to be considered is security. Cyber-attacks are increasing in both scale and frequency, while problems originating from physical infrastructure have also been blamed for significant outages. According to the European Commission, cyber-crime is costing €265 billion a year, and some experts have predicted that edge computing potentially represents a weak point for cyber security. These concerns will rightly mean that clients will expect data centre operators to invest heavily in security and disaster recovery processes, as well as in the physical security and maintenance of these localised data centres.
The availability of utility services will also be a pressure point. As data centre requirements grow around the world, there is a very real concern that energy demands will soon outgrow available supply. With distributed computing we're talking about thousands of servers spread over a huge geographic footprint. Having to deploy multiple smaller data centres in this way will undoubtedly put further strain on current power networks.
As a result, edge deployment will need to be designed with power in mind and include waste heat or localised renewable power generation.
The major question that clients will want answers to is how these data centres will be serviced with the above in mind. It's a good question, and one that many data centre consultants and FM providers are eager to evade. The reality is that this is the major change that the entire sector will need to get to grips with. The industry has a skills shortage that's already causing challenges across the sector, and the emerging data centre model will need greater on-the-ground maintenance over a larger geographic area. We're talking about work volumes that aren't currently achievable.
A long-term solution is to encourage new blood to enter the industry. Keysource, like others, is drawing in new talent through our graduate and apprenticeship initiatives. But this will take time, so smarter solutions are required. Unmanned, or "dark", facilities are one potential avenue. This is where AI and machine learning support the management and maintenance of data centre estates, reducing the skill level required for local maintenance and reducing costs for clients too – helping them stay competitive.
5G is rapidly approaching and distributed computing landscapes have been widely accepted as the best data centre format to support it. We need to get beyond just accepting this and start planning now for how it is going to be delivered. It's an exciting development with huge opportunities, but without preparing our approach to power, security and service now, we risk delivering edge too quickly and too expensively. That is why edge delivery is something Keysource is actively pursuing – modelling various delivery options to help clients embark on the process sooner rather than later.
With a kilometre of exhibition space, presentation halls and meeting rooms under one roof, The Convention Centre Dublin (The CCD) is an enormous space to monitor. The venue can host events ranging from 5 to 5,500 delegates, and keeping visitors connected during an event is no easy task. Thanks to Paessler's unified monitoring solution, PRTG Network Monitor, The CCD is able to ensure not only that those visiting an event have the best possible experience, but also that its network equipment, IoT devices and laptops are in perfect working order, providing event organisers with a predictive analysis of their connectivity needs and giving visitors the best possible conferencing experience.
Giving visitors a complete brand experience
Visitors to The CCD rely on the venue’s information screens for event details. A failure of any of the 84 screen panels or media players can cause confusion, as well as appear highly unprofessional to visitors. Monitoring the entire floor space in person would be costly and impractical, but their PRTG system allows the team at The CCD to easily monitor the performance of the information displays, and other IoT devices, centrally and in real time.
Optimising employee expertise
Walking the total space of The CCD is the equivalent of crossing ten football pitches, so checking in on each of the venue's 22 rooms to make sure laptops, video monitors and other equipment are all operating correctly is not an effective use of time. PRTG provides the conference centre with a complete monitoring solution, alerting support staff to critical hardware and software issues before they become failures.
All the conference centre’s essential systems are monitored 24/7, including multiple VMware hosts and a NetApp storage area network (SAN). Every PRTG license includes pre-defined sensors for all these devices, and many more, without needing to buy any add-ons or plugins.
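As a rough illustration of how checks like these can be automated, PRTG exposes a JSON table API that scripts can poll. The sketch below only builds the query URL rather than performing the request; the server name and credentials are placeholders, and the status code used for "Down" follows PRTG's documented values.

```python
from urllib.parse import urlencode

def down_sensors_url(server: str, username: str, passhash: str) -> str:
    """Build a PRTG table-API query listing all sensors in the Down state."""
    params = urlencode({
        "content": "sensors",
        "columns": "objid,sensor,status",
        "filter_status": 5,  # 5 = Down in PRTG's sensor status codes
        "username": username,
        "passhash": passhash,
    })
    return f"https://{server}/api/table.json?{params}"

# A monitoring script would fetch this URL (e.g. with urllib.request)
# and raise an alert whenever the returned table contains any rows.
url = down_sensors_url("prtg.example.com", "monitor", "0000")
```

In practice a scheduled job polling an endpoint like this can feed dashboards or ticketing systems, which is the kind of glue work PRTG's API is designed for.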
Embracing IoT allows The CCD to provide an unmatched offering
The CCD recently upgraded their PRTG licence to add more sensors, allowing the monitoring of every other "Thing" in the building. The ideal scenario is that every IoT device added to the network can be accurately monitored, including security cameras, smart lighting in the exhibition halls, laptop performance and even the ink levels in the business centre printers. As a result, The CCD's teams are warned well in advance of any potential issues, ensuring they provide seamless service to conference organisers at all times.
Uptime is vital to the visitor experience and monitoring is the only way to ensure any potential issues can be properly managed and corrected before a client becomes aware of a problem.
Predictive usage data provides a competitive edge
Event organisers often complain that conference venues provide their visitors with unsatisfactory or unstable Wi-Fi connections during an event. This is usually due to venues being underprepared for the amount of network traffic generated during a conference or exhibition. They often also lack the data to back up their bandwidth requirement recommendations to event organisers.
The CCD uses PRTG to provide accurate usage statistics, giving clients a clear, upfront picture of exactly what their Wi-Fi requirements will look like. Drawing on anonymised client usage from PRTG's historical data, they can match any new client's profile with predictive data and tailor quotes specifically to the type of visitor each event will attract. This includes likely download and upload usage patterns, as well as on-the-day, real-time analysis of usage, allowing limits to be adjusted when necessary.
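One way to picture this matching process: find the most similar past event by size and type, then scale its per-delegate peak bandwidth to the new booking. The fields, numbers and similarity measure below are illustrative assumptions, not a description of PRTG's features or The CCD's actual model.

```python
def estimate_peak_mbps(new_event, history):
    """Scale the closest historical event's per-delegate usage to a new booking."""
    def distance(past):
        size_gap = abs(past["delegates"] - new_event["delegates"])
        type_penalty = 0 if past["type"] == new_event["type"] else 10_000
        return size_gap + type_penalty

    best = min(history, key=distance)
    per_delegate = best["peak_mbps"] / best["delegates"]
    return round(per_delegate * new_event["delegates"], 1)

# Hypothetical historical profiles drawn from past monitoring data.
history = [
    {"type": "tech",    "delegates": 2000, "peak_mbps": 900.0},
    {"type": "medical", "delegates": 1500, "peak_mbps": 300.0},
]
quote = estimate_peak_mbps({"type": "tech", "delegates": 3000}, history)
print(quote)  # → 1350.0
```

The point of grounding a quote in measured history rather than a flat per-head figure is exactly the one the article makes: different event types generate very different traffic per delegate.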
Since connectivity is aligned to a convention centre's revenue generation, it is vital that the venue gets this right. Providing too low an estimate could result in poor Wi-Fi performance for visitors; conversely, providing expensive connectivity quotes without accurate data usage analysis could immediately scare away conference organisers.
Having the data to back up the connection experience The CCD offers provides complete transparency to clients before, during and after an event, resulting in stronger client relationships.
Safeguarding hardware and eliminating replacement costs
The CCD’s 10 communications rooms are closely monitored at all times. Using a combination of PRTG’s Cisco-specific and generic SNMP sensors, the team can be assured that all their switches, routers, firewalls and telephony systems are always available and performing optimally. PRTG is also used to monitor the underlying infrastructure on which the IT systems depend. In one instance, the cooling fan in one of the communications rooms failed – a failure that might have taken several days to notice had it not been for PRTG’s early warning system.
Growing Dublin’s reputation as the leading technology capital of Europe
Maintaining a world-class conference venue is no easy feat and, with many European destinations vying for the attention of global conference organisers, maintaining a competitive edge is vital. Add to this the fact that Dublin is fast becoming a world-renowned technology hub, and it is essential that the city has a convention centre to match.
Paessler’s monitoring solution is key to ensuring that The CCD’s reputation and success as a leading conference centre is upheld.
“Our greatest consideration at The CCD is up-time. I have experience with PRTG for close on 10 years now. This is the third place I've installed it and it’s always the first new software I install. It’s simple to understand and easy to use. Like all monitoring software it needs to be tuned on an ongoing basis, but it can be tuned so well. And because it's programmable, whatever box you have, whatever information you have, you can tell it to take that information and create a complete picture.” – Craig Colley, Head of ICT at The Convention Centre Dublin
From small businesses to global conglomerates, digital transformation is taking place across all sectors and sizes of organisation. It is one of the key decisions that business decision makers find themselves faced with. Studies have found that 96% of companies consider it important or critical to their development, whilst the MIT Centre for Business discovered that digital transformation can have an enormously positive effect, with businesses investing in innovation being 26% more profitable than their average industry competitors.
By Ilijana Vavan, Kaspersky Lab’s managing director for Europe.
But in a bid to embrace and elevate IT systems and services, security considerations are at risk of being downgraded.
For example, despite a huge media and regulatory focus on the security of personal information and company-held data, the security of data travelling across and stored in the cloud can be an afterthought. But with digital transformation projects often relying on the use of cloud-enabled infrastructure and services, the risks of not securing data often outweigh the rewards.
In fact, cloud-related IT security incidents are not uncommon and are among the costliest for businesses to recover from. Kaspersky Lab research[i] shows that incidents affecting IT infrastructure hosted by a third party cost enterprises £1.2 million and SMBs £90,000. Instead of benefitting companies, digital transformation strategies could in fact be leaving them exposed and vulnerable. According to Forrester, one-in-three (31%) IT decision-makers are already worried about the security aspects of digital transformation.
A data breach or IT security incident could impact transformation strategies and, in turn, business innovation and growth. To ensure a secure and successful approach, businesses need to put cybersecurity front of mind when looking at the areas of planning, processes and people.
Although digital transformation strategies vary according to business plans, Aberdeen Group has identified the three key digitalisation technologies that have the greatest potential to impact operations: IoT, due to its ability to provide operational intelligence; the cloud, for its scalability; and big data analytics, which can transform data into predictive and actionable insights.
Digital transformation often involves the need to operate with growing IT infrastructure. Cloud environments provide the necessary scalability and embracing IoT involves connecting new devices across production lines, factory floors or workspaces every day, then analysing the data they produce.
But businesses can lack visibility and accountability of their data when taking this approach. This puts information at risk of compromise or even encryption, from threats such as the Zepto ransomware, which spreads via cloud storage apps. Planning to avoid these issues is key to digital transformation security.
Embracing digital transformation strategies involves facilitating the movement and sharing of data, meaning that cybersecurity needs to be built into any data processes from the start, if data is to be secured.
Nine-in-ten businesses are now using cloud computing in some shape or form to improve cost efficiencies and grow their infrastructure according to demand. While this means that businesses are becoming more agile, it is also impacting the transparency of data exposure. Data ‘on the go’ (including data that’s held and processed in cloud environments or in third-party IT infrastructure) is presenting businesses with new security issues and, as a result, new costs.
According to our research, the most expensive cybersecurity incidents over the past year have been related to cloud environments and data protection. For SMBs, two-in-three of the most expensive cybersecurity incidents are related to the cloud.
Processing data ‘on the go’ is an inevitable part of digital transformation. However, the high costs of associated security incidents could pose a threat to future digital transformation strategies. The key is to build security into data processing at every step; whether that’s through service level agreements (SLAs) with third-party providers, or by selecting cloud services with suitable encryption and data recovery mechanisms.
With cybersecurity incidents being far-reaching and costly, the boardroom is increasingly taking part in the cybersecurity-provisioning debate, as part of a wider discussion about digital transformation.
Our research suggests that C-level executives now feel they have a personal and professional stake in the changes being made. Enterprises are now spending almost a third of their IT budget (£6.8m) on cybersecurity, demonstrating the importance top management is now willing to attribute to security.
In addition, IT security budgets are expected to rise over the next three years across all segments: very small businesses and enterprises alike predict they will spend up to 15% more on cybersecurity in this time, while SMBs predict a similar 14% increase.
CEOs are recognised as the driving force behind digital transformation change. They are ramping up their playbooks, changing their mindsets, and recruiting for roles such as Chief Digital Officer, with the authority and budget to make things happen.
What’s abundantly clear is that the best chance of digital transformation success is when data processes are secure and vulnerabilities protected against. But for that, companies need a reliable partner that can provide the best technical solutions that are also flexible enough to adapt according to each and every business need.
[i] IT Security Economics Report 2018: The Financial Impact of Digital Transformation: Elevating IT Security to the Boardroom, Kaspersky Lab (2018)