‘Tis the season of predictions and forecasts – reasons aplenty to be jolly as seemingly everyone has a go at crystal ball gazing for 2019 and beyond. In fairness, I did invite vendors of all shapes and sizes to send through their thoughts to me for this issue of Digitalisation World as my original choice of subject – the edge – was met with a surprising, almost total silence. Everyone seems to be talking about edge, but, apparently, no one wants to commit these words to computer screen. As for the predictions and forecasts, well, I’ve received so many that it looks as if we’re going to have a Part 1 (in this issue), a Part 2 in the January issue and even a Part 3 in the February issue, if that’s not too late!
Of course, it would be quite wrong of me to spoil the excitement by revealing what everyone is talking about in terms of likely trends and technology developments for the coming year, but let’s just say that there are no real surprises as to the favourite topics: security, Cloud, AI and plenty of other intelligent automation suggestions dominate. However, it must be said that the sheer breadth and depth of the responses to my request for information are mighty impressive and I’d be surprised if readers don’t learn a thing or two, or at least are given pause for a thought or two.
As for myself, I’m always learning new things about data centres and the wider IT environment. Most recently, I had cause to question my (and everyone else’s?) unswerving belief in the edge as a major ongoing trend. Let me explain.
I do not mean that the edge will not be important, but the definition of the edge, indeed the very word ‘edge’, seems to me a tad misleading. When most people talk about edge they do so in terms of either moving IT infrastructure closer to where data is generated or closer to where data is consumed. Now, this can be in remote, edge locations but, equally, this can be in cities and towns, which have a high concentration of, say, retail businesses, all of which want to use real-time applications to interact with their customers, and hence need a local data centre and not one some distance away. So, rather than edge data centres, we should be talking about local data centres, but that’s not much use for the marketing folks – then again somebody came up with the Internet of Things – does Local Infrastructure Deployment (LID) sound any worse?!
So, that’s my, not entirely original, thought for you ahead of the festive period – with the not very original slogan: Keep a LID on IT!
Merry Christmas and a Happy New Year to you all.
Worldwide spending on the technologies and services that enable the digital transformation (DX) of business practices, products, and organizations is forecast to reach $1.97 trillion in 2022, according to a new update to the International Data Corporation (IDC) Worldwide Semiannual Digital Transformation Spending Guide. DX spending is expected to steadily expand throughout the 2017-2022 forecast period, achieving a five-year compound annual growth rate of 16.7%.
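For readers who want to check the arithmetic, a compound annual growth rate (CAGR) simply relates the two endpoints of a forecast period. The short Python sketch below works backwards from IDC's 2022 figure and the 16.7% CAGR to the implied 2017 base; that base figure is derived here for illustration, not quoted from IDC.

```python
# Sketch: how a compound annual growth rate (CAGR) relates two endpoints.
# The 2022 figure and the 16.7% rate come from the IDC forecast above;
# the 2017 base is derived, not quoted.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

END_2022 = 1.97    # trillion USD, forecast DX spending in 2022
CAGR_RATE = 0.167  # 16.7% five-year CAGR (2017-2022)

# Working backwards, the implied 2017 base:
base_2017 = END_2022 / (1 + CAGR_RATE) ** 5
print(f"Implied 2017 DX spending: ${base_2017:.2f} trillion")

# Sanity check: growing that base at 16.7% for five years
# recovers the 2022 figure.
assert abs(cagr(base_2017, END_2022, 5) - CAGR_RATE) < 1e-9
```

Working the forecast backwards like this puts the 2017 baseline at a little over $0.9 trillion, consistent with the "steadily expanding" spending IDC describes.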
"IDC predicts that, by 2020, 30% of G2000 companies will have allocated capital budget equal to at least 10% of revenue to fuel their digital strategies," said Shawn Fitzgerald, research director, Worldwide Digital Transformation Strategies. "This shift toward capital funding is an important one as business executives come to recognize digital transformation as a long-term investment. This commitment to funding DX will continue to drive spending well into the next decade."
Four industries will be responsible for nearly half of the $1.25 trillion in worldwide DX spending in 2019: discrete manufacturing ($220 billion), process manufacturing ($135 billion), transportation ($116 billion), and retail ($98 billion). For the discrete and process manufacturing industries, the top DX spending priority is smart manufacturing. IDC expects the two industries to invest more than $167 billion in smart manufacturing next year along with significant investments in digital innovation ($46 billion) and digital supply chain optimization ($29 billion). In the transportation industry, the leading strategic priority is digital supply chain optimization, which translates to nearly $65 billion in spending for freight management and intelligent scheduling. Meanwhile, the top priority for the retail industry is omni-channel commerce, which will drive investments of more than $27 billion in omni-channel commerce platforms, augmented virtual experience, in-store contextualized marketing, and next-generation payments.
The DX use cases – discretely funded efforts that support a program objective – that will see the largest investment across all industries in 2019 will be freight management ($60 billion), autonomic operations ($54 billion), robotic manufacturing ($46 billion), and intelligent and predictive grid management for electricity, gas, and water ($45 billion). Other use cases that will see investments in excess of $20 billion in 2019 include root cause, self-healing assets and automated maintenance, and quality and compliance.
"Industry spending on DX technologies is being driven by core innovation accelerator technologies with IoT and cognitive computing leading the race in terms of overall spend," said Eileen Smith, program director with IDC's Customer Insights and Analysis Group. "The introduction of IoT sensors and communications capabilities is rapidly transforming manufacturing processes as well as asset and inventory management across a wide range of industries. Similarly, artificial intelligence and machine learning are dramatically changing the way businesses interact with data and enabling fundamental changes in business processes."
"The unprecedented speed at which technologies are coming to market supporting DX strategies can only be described as frantic," said Craig Simpson, research manager with IDC's Customer Insights and Analysis Group. "Areas regarded as pilot projects just a year ago have already become mature operations in some industries."
From a technology perspective, hardware and services spending will account for more than 75% of all DX spending in 2019. Services spending will be led by IT services ($152 billion) and connectivity services ($147 billion) while business services will experience the fastest growth (29.0% CAGR) over the five-year forecast period. Hardware spending will be spread across a number of categories, including enterprise hardware, personal devices, and IaaS infrastructure. DX-related software spending will total $288 billion in 2019 and will be the fastest growing technology category with a CAGR of 18.8%.
The United States and China will be the two largest geographic markets for DX spending, delivering more than half the worldwide total in 2019. In the U.S., the leading industries will be discrete manufacturing ($63 billion), transportation ($40 billion), and professional services ($37 billion) with DX spending focused on IT services, applications, and connectivity services. In China, the industries spending the most on DX will be discrete manufacturing ($60 billion), process manufacturing ($35 billion), and utilities ($27 billion). Connectivity services and enterprise hardware will be the largest technology categories in China.
Global spending on robotic process automation (RPA) software is estimated to reach $680 million in 2018, an increase of 57 percent year over year, according to the latest research from Gartner, Inc. RPA software spending is on pace to total $2.4 billion in 2022.
“End-user organizations adopt RPA technology as a quick and easy fix to automate manual tasks,” said Cathy Tornbohm, vice president at Gartner. “Some employees will continue to execute mundane tasks that require them to cut, paste and change data manually. But when RPA tools perform those activities, the error margin shrinks and data quality increases.”
The biggest adopters of RPA to date include banks, insurance companies, utilities and telecommunications companies. “Typically, these organizations struggle to knit together the different elements of their accounting and HR systems, and are turning to RPA solutions to automate an existing manual task or process, or automate the functionality of legacy systems,” said Ms. Tornbohm.
RPA tools mimic the “manual” path a human worker would take to complete a task, using a combination of user interface (UI) interaction and descriptor technologies. The market provides a broad range of solutions, with tools operating either on individual desktops or on enterprise servers.
Gartner estimates that 60 percent of organizations with a revenue of more than $1 billion will have deployed RPA tools by the end of the year. By the end of 2022, 85 percent of large and very large organizations will have deployed some form of RPA. “The growth in adoption will be driven by average RPA prices decreasing by approximately 10 percent to 15 percent by 2019, but also because organizations expect to achieve better business outcomes with the technology, such as reduced costs, increased accuracy and improved compliance,” added Ms. Tornbohm.
However, RPA is not a one-size-fits-all technology and there are cases where alternative automation solutions achieve better results. RPA solutions perform best when an organization needs structured data to automate existing tasks or processes, add automated functionality to legacy systems and link to external systems that can’t be connected through other IT options.
RPA Is on Its Way to Mainstream
RPA tools currently reside at the Peak of Inflated Expectations in the Gartner Hype Cycle for Artificial Intelligence, 2018, as organizations look for ways to cut costs, link legacy applications and achieve a high ROI. However, the potential to achieve a strong ROI fully depends on whether RPA fits the individual organization’s needs. “In the near-term future, we expect to see an expanding set of RPA vendors as well as a growing interest from software vendors, which include software testing vendors and business process management vendors that are looking to gain revenue from this set of functionality,” said Ms. Tornbohm.
In addition, another market movement is emerging — the integration of artificial intelligence (AI) functionalities into the product suite. This is happening because RPA providers add or integrate machine learning and AI technology to deliver more types of automation.
Evaluate First Before Any RPA Deployment Project
In order to make an RPA project a success, leaders must first evaluate the possible use cases for RPA in their organization and also focus on revenue-generating activities. “Do not just focus on RPA to reduce labor costs,” Ms. Tornbohm said. “Set clear expectations of what the tools can do and how your organization can use them to support digital transformation as part of an automation strategy.”
The next step is to identify quick wins for RPA. These can be tasks that require people to solely move data between systems or involve structured, digitalized data processed by predefined rules. While those are the use cases where RPA delivers a high ROI, it is important to consider alternative existing tools and services, which already provide a significant proportion of the required functionalities at a suitable price point. Those alternatives can be used in parallel with RPA, or as a hybrid solution. When choosing a vendor, also ask for future AI-based options.
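The kind of quick win described above, moving structured data between systems under predefined rules, can be sketched in a few lines. Everything in this snippet (the field names, the rule table) is hypothetical; real RPA tools drive the source and target applications through their user interfaces, so this illustrates only the rule-driven mapping logic underneath.

```python
# Minimal sketch of rule-driven data transfer between two systems.
# All field names and rules are hypothetical; a real RPA tool would
# operate at the UI layer of the applications involved.

FIELD_RULES = {
    # target field      (source field, transformation)
    "customer_name":  ("name",  str.title),
    "customer_email": ("email", str.lower),
    "account_id":     ("acct",  str.strip),
}

def transfer_record(source_record: dict) -> dict:
    """Apply the predefined mapping rules to one source record."""
    target = {}
    for target_field, (source_field, transform) in FIELD_RULES.items():
        target[target_field] = transform(source_record[source_field])
    return target

legacy_row = {"name": "jane doe", "email": "Jane.Doe@EXAMPLE.com", "acct": " A-1042 "}
print(transfer_record(legacy_row))
# {'customer_name': 'Jane Doe', 'customer_email': 'jane.doe@example.com', 'account_id': 'A-1042'}
```

Because every step is a predefined rule over structured data, the task is a good RPA candidate in the sense Gartner describes: deterministic, repetitive, and easy to audit.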
IT spending in EMEA is projected to total $973 billion in 2019, an increase of 2 percent from the estimated spending of $954 billion in 2018, according to the latest forecast by Gartner, Inc.
“2018 is not a good year for IT spending in EMEA,” said John Lovelock, research vice president at Gartner. “The 5.8 percent growth witnessed in 2018 includes a 4 percent currency tailwind driven by the Euro’s increase in value against the U.S. Dollar.”
“IT spending in EMEA has been stuck and will remain stuck until the unknowns surrounding Brexit are resolved,” Mr. Lovelock added. Until then, the slow growth of the overall EMEA market is masking the divergent growth rates across the segments in the region (see Table 1).
Table 1. EMEA IT Spending Forecast (Millions of U.S. Dollars). The table's segment-level figures, covering 2018 and 2019 growth rates (%) for segments including Data Center Systems, are not reproduced here.
Source: Gartner (November 2018)
Spending on devices (PCs, tablets and mobile phones) in EMEA is set to decline in 2019. Consumer PC spending declined 9.1 percent in 2018, and demand for business Windows 10 PCs will pass its peak in 2019, with business PC unit growth at 1 percent. Similarly, mobile phone unit growth, especially in Western Europe, will decline from 4.7 percent in 2018 to -1.1 percent in 2019 as replacement cycles peak and then fall.
After achieving growth in 2018, spending on data center systems is set to be flat or decline in 2019 and beyond. The brief uptick in spending caused by a bump in upgrade spending and early replacements as a precaution against CPU security issues has abated.
The largest single market — communications services — has become commoditized and is set to show flat growth in 2019. The enterprise software market continues to have a positive effect on the overall spending growth in EMEA. This is largely due to the increasing availability and acceptance of cloud software.
In 2019, Gartner expects cloud, security and the move to digital business to bolster growth in EMEA. End-user spending on public cloud services in EMEA will grow 15 percent in 2019 to total $38.5 billion. In terms of security, with the GDPR in place, penalties for data violations can be as high as 4 percent of annual global turnover.
“The enforcement of GDPR has moved security to a board-level priority. Organizations that are not protecting their customers’ privacy are not protecting their brand,” said Mr. Lovelock. “Global spending on IT security will surpass $133 billion in 2019 and in EMEA it will reach $40 billion in 2019, up 7.8 percent from 2018.”
Brexit Is Slowing Down IT Spending Growth in EMEA
With expected growth of 2 percent in 2019, EMEA ranks as the third slowest growing region for IT spending, ahead of only Eurasia (+0.5 percent) and Latin America (+1.7 percent). Brexit is having a dampening effect on IT spending across the region. IT spending in the U.K. is set to total $204 billion in 2019, a 1.9 percent decline from 2018. “The U.K. is not expected to exhibit growth above 2 percent until 2020, which is having a downward effect on the EMEA IT spending average throughout the forecast period,” said Mr. Lovelock.
Israel and Saudi Arabia are leading overall IT spending growth in EMEA in 2019, with the two countries set to achieve increases of 5.3 percent and 4.2 percent, respectively. Both countries are investing in building a robust IT sector and making the journey to digital business. Israel’s growth is fueled by software spending and the increased use of software as a service. Saudi Arabia’s growth is driven by spending on IT services, including cloud computing and storage.
Secureworks has released the findings of its State of Cybercrime Report 2018 to illuminate the cybercrime trends and events that shaped the year.
From July 2017 through June 2018, Secureworks Counter Threat Unit® (CTU®) researchers analysed incident response outcomes and conducted original research to gain insight into threat activity and behaviour across 4,400 companies.
Among their findings was evidence that a small subset of professional criminal actors is responsible for the bulk of cybercrime-related damage, employing tools and techniques as sophisticated, targeted and insidious as most nation-state actors. These sophisticated and capable criminal gangs operate largely outside of the dark web, although they may leverage low-level criminal tools occasionally when it serves their purposes.
At the same time, there has been no lull in the overall volume of threats, and low-level cybercriminal activity remains a robust market economy, often taking place in view of security researchers and law enforcement on the dark web. While relatively simple in their approach, these activities can still deal widespread damage.
“Cybercrime is a lucrative industry, and it’s not surprising it’s become the arm of powerful, organised groups,” says Don Smith, Senior Director, Cyber Intelligence Cell, Secureworks Counter Threat Unit. “To understand the complete picture of the cybercriminal world, we developed insights based on a combination of dark web monitoring and client brand surveillance with automated technical tracking of cybercriminal toolsets.”
Among the CTU researchers’ key findings were the following:
The boundary between nation-state and cybercriminal actors continues to blur.
Ransomware continues to be a serious threat.
Sophisticated criminal gangs are earning millions of dollars of revenue through stolen payment card data.
The dark web is not the darkest depth of the cybercriminal world.
“The observations of CTU researchers over the last 12 months show that the threat from cybercrime is adaptive and constantly evolving,” the report concludes. “To stay ahead of it, it is imperative that organisations develop a holistic understanding of the landscape and how it relates to them, and tailor their security controls to address both opportunistic and more highly targeted cybercriminal threats.”
This growth reflects a relatively sanguine economic outlook during the first half of this year with accelerated digital transformation and, in some pockets, new digital services offsetting the cannibalization of traditional services.
During 1H18, it was a mixed picture for tier-one global outsourcers/integrators (companies with full service offerings and more than $10 billion in services revenue) headquartered in developed countries: most remained flat or declined slightly (organic growth in constant currency). But this was partially offset by stronger performances by two large global vendors, who returned to double-digit growth in the teens.
While most Indian services providers still outpace their U.S. and European counterparts, their growth (organic, in constant currency) scaled back slightly from a year ago, continuing their 2H17 deceleration. Growth paths continued to widen between vendors: while most large Indian vendors continued to grow at rates in the low single digits to high teens, it was offset by a few vendors' sharp slowdown. This is partially attributed to restructuring leadership teams and divesting business units to improve margins. It should also be noted that foreign exchange fluctuations in 2018 have complicated the constant currency calculations somewhat.
Looking at the different services markets, project-oriented revenues grew by 5.2% in 1H18 to $191 billion, followed by 3.6% growth for managed services and 2.7% for support services. The above-the-market growth in project-oriented markets was mostly led by the business consulting and application development markets, with growth rates of 7.5% and 6.5%, respectively. Most major management consulting firms still posted strong earnings in 2018, although growth rates cooled slightly: business consultants still extract more value in digital transformation. But the market movement masks enterprise buyers going from "thinking digital" to "doing digital." For example, the heavy lifting of digital is ultimately reflected in application projects, and the application development market showed faster growth in 2018 than in both 1H17 and 2H17. As services vendors make agile and cloud the central themes of their app businesses, they have been able to shorten sales cycles and ramp up new app work.
In outsourcing, revenues grew 3.6% to $238 billion in 1H18. Application-related managed services revenues (hosted and on-premise application management) outpaced infrastructure and business process outsourcing. Organizations rely largely on outsourcers to supply new app skills at scale. Large outsourcing contracts also serve as the best vehicle to standardize and modernize existing application assets. Therefore, IDC expects application-related managed services markets to continue outgrowing other outsourcing markets in the coming years.
On the infrastructure side, while hosting infrastructure services revenue accelerated to 7.2% growth in 1H18, mostly due to cloud adoption, IT Outsourcing (ITO) – still almost twice as large a market and mostly big buyers and vendors – declined by 1.5%, largely chipped away by cloud cannibalization across all regions.
On a geographic basis, the United States, the largest services market, grew by 4.3%, slightly higher than the market rate, while Western Europe, the second largest market, grew only by 2.6%. In the United States, overall economic conditions and corporate spending remained robust. The effect of the trade war will also not be felt until the end of this year or in 2019; therefore, it had no negative impact on services spending in 1H18. In Western Europe, most major vendors are showing some softness in the region: US/European headquartered multinationals' recovery in Western Europe weakened in 2018. The newcomers' (namely Indian services providers) expansion into Europe also cooled slightly, with vendors posting mixed results in the region depending on their customer industry mix there. IDC expects Western European services revenues to be stable but structurally weaker than North America. IDC forecasts the region to grow below 3% annually in the coming years.
In emerging markets, Latin America, Asia/Pacific (excluding Japan) (APeJ), and Central & Eastern Europe led in growth. In Latin America, most major economies are turning the corner despite problems in Argentina and Venezuela. While big political/policy risks remain, namely the newly elected presidents in Mexico and Brazil, their effects will realistically not be felt until 2019. Additionally, only a handful of regulatory issues, such as tax and government procurement, will affect services outsourcers directly. On the other hand, Mexico, Canada, and the U.S. have resolved their NAFTA issues and Brazil has wrapped up its scandals. Therefore, IDC expects healthier IT spending in the region, especially with a strong deal pipeline in the public sector.
In APeJ, the second largest IT services market, Australia saw its growth scale back slightly to 3.8% in 1H18, from 4.3% in 1H17. The largest market, China, trimmed its growth rate to just 7.2%, down from the 8% to 9% of the last two to three years, due to slower GDP growth, debt curbs, and a pullback in infrastructure spending, among other factors. Given the ongoing trade war with the U.S. and currency depreciation, we expect China's market growth to continue to flag, albeit gradually, in the coming years, which will inevitably have a spillover effect on Australia.
So far in 2018, the weaker growth in China and Australia was partially offset by faster growth from other emerging markets in APeJ (i.e. India, the Philippines, Indonesia, Vietnam, etc.). We expect this trend to continue: governments will continue to fund large digital initiatives and a better investment outlook will also drive IT spending (geopolitical and economic tension between China and the U.S. and its allies may help other emerging markets in the region attract foreign investments).
Global Regional Services 1H18 Revenue and Year-Over-Year Growth (revenues in $US billions)
Source: IDC Worldwide Semiannual Services Tracker 1H 2018
"Steady growth in the services market is being driven by continued demand for digital solutions across the regions," said Lisa Nagamine, research manager with IDC's Worldwide Semiannual Services Tracker. "But during 2018, as well as most of 2017, it is really the Americas and cloud-related services that are having the largest impact on revenue worldwide."
"For IT services, 2018 has so far been more stabilizing than it seems," said Xiao-Fei Zhang, program director, Global Services Markets and Trends. "Corporate America has been able to shake off geopolitical risks and trade tensions and continue to invest in new tools to reduce cost and add new capabilities."
DigiCert’s 2018 State of IoT survey reveals security as the top concern as IoT takes centre stage, with 91 percent of companies saying it will be extremely important to them in the next two years.
A study from DigiCert reveals that enterprises have begun sustaining significant monetary losses stemming from the lack of good practices as they move forward with incorporating the Internet of Things (IoT) into their business models. In fact, among companies surveyed that are struggling the most with IoT security, 25 percent reported IoT security-related losses of nearly £257,333 in the last two years.
These findings come amid a ramping up of IoT focus within the typical organisation. Seventy-one percent of respondents indicated that IoT is extremely important to them currently, while 91 percent said they anticipate IoT to be extremely important to their respective organisations within two years.
The survey was conducted by ReRez Research in September 2018, with 700 enterprise organisations in the US, UK, Germany, France and Japan from across critical infrastructure industries.
Security and privacy topped the list of concerns for IoT projects, with 82 percent of respondents stating they were somewhat to extremely concerned about security challenges.
“Enterprises today fully grasp the reality that the Internet of Things is upon us and will continue to revolutionise the way we live, work and recreate,” said Mike Nelson, vice president of IoT Security at DigiCert. “Securing IoT devices is still a top priority that many enterprises are struggling to manage; however, integrating security at the beginning, and all the way through IoT implementations, is vital to mitigating rising attacks, which can be expected to continue. Due diligence when it comes to authentication, encryption and integrity of IoT devices and systems can help enterprises reliably and safely embrace IoT.”
Top vs. bottom performers
To give visibility to the specific challenges enterprises are encountering with IoT implementations, respondents were asked a series of questions using a wide variety of terminology. Using standard survey methodology, respondents’ answers were then scored and divided into three tiers:
IoT security missteps
Respondents were asked about IoT-related security incidents their organisations experienced within the past two years. The difference between the top- and bottom-tiers was unmistakable. Companies struggling the most with IoT implementation are much more likely to get hit with IoT-related security incidents. Every single bottom-tier enterprise experienced an IoT-related security incident in that time span, versus just 23 percent of the top-tier. The bottom-tier was also more likely to report problems in these specific areas:
These security incidents were not trivial. Among companies surveyed that are struggling the most with IoT security, 25 percent reported IoT security-related losses of nearly £257,333 in the last two years.
The top five areas for costs incurred within the past two years were:
Meanwhile, although the top-tier enterprises experienced some security missteps, an overwhelming majority reported no costs associated with those missteps. Top-tier enterprises attributed their security successes to these practices:
“When it comes to accelerating implementations of IoT, it’s vital for companies to strike a balance between gaining efficiencies and maintaining security and privacy,” Nelson said. “This study shows that enterprises that are implementing security best practices have less exposure to the risks and resulting damages from attacks on connected devices. Meanwhile, it appears these IoT security best practices, such as authentication and identity, encryption and integrity, are on the rise and companies are beginning to realise what’s at stake.”
The survey points to five best practices to help companies pursuing IoT realise the same success as the top-tier performing enterprises:
1. Review risk: Perform penetration testing to assess the risk of connected devices. Evaluate the risk and build a priority list for addressing primary security concerns, such as authentication and encryption. A strong risk assessment will help ensure you do not leave any gaps in your connected security landscape.
2. Encrypt everything: As you evaluate use cases for your connected devices, make sure that all data is encrypted at rest and in transit. Make end-to-end encryption a product requirement to ensure this key security feature is implemented in all of your IoT projects.
3. Authenticate always: Review all of the connections being made to your device, including devices and users, to ensure authentication schemes only allow trusted connections to your IoT device. Using digital certificates helps to provide seamless authentication with bound identities that are tied to cryptographic protocols.
4. Instill integrity: Account for the basics of device and data integrity, including secure boot every time the device starts up, secure over-the-air updates, and the use of code signing to ensure the integrity of any code being run on the device.
5. Strategise for scale: Make sure that you have a scalable security framework and architecture ready to support your IoT deployments. Plan accordingly and work with third parties that have the scale and focus to help you reach your goals so that you can focus on your company’s core competency.
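As a concrete illustration of points 3 and 4, the sketch below shows one small piece of device integrity: checking a firmware image against a trusted SHA-256 digest before installing it. This is a simplification, and every name in it is hypothetical; production code signing verifies an asymmetric signature against a public key rather than comparing bare digests, but the shape of the check is the same.

```python
# Simplified firmware-integrity check. Real code signing uses asymmetric
# signatures (the trust anchor is a public key, not a shared digest);
# this sketch only illustrates the verify-before-install pattern.
import hashlib
import hmac

def firmware_digest(image: bytes) -> str:
    """SHA-256 digest of a firmware image, hex-encoded."""
    return hashlib.sha256(image).hexdigest()

def verify_firmware(image: bytes, trusted_digest: str) -> bool:
    """Compare the image's digest against the trusted one in constant time."""
    return hmac.compare_digest(firmware_digest(image), trusted_digest)

image = b"\x7fELF...hypothetical firmware bytes..."
trusted = firmware_digest(image)  # in practice, published by the build pipeline

assert verify_firmware(image, trusted)                 # untampered image passes
assert not verify_firmware(image + b"\x00", trusted)   # any modification fails
```

The constant-time comparison (`hmac.compare_digest`) matters because a naive string comparison can leak how many leading characters matched, which is exactly the kind of small gap a risk review from point 1 should catch.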
A new report from International Data Corporation (IDC) presents IDC's inaugural forecast for the worldwide 5G network infrastructure market for the period 2018–2022. It follows the release of IDC's initial forecasts for Telecom Virtual Network Functions (VNF) and Network Functions Virtualization Infrastructure (NFVI) in September and August 2018, respectively.
With the first instances of 5G services rolling out in the fourth quarter of 2018, 2019 is set to be a seminal year in the mobile industry. 5G handsets will begin to hit the market and end-users will be able to experience 5G technology firsthand.
From an infrastructure standpoint, the mobile industry continues to trial innovative solutions that leverage new spectrum, network virtualization, and machine learning and artificial intelligence (ML/AI) to create new value from existing network services. While these and other enhancements will play a critical role, 5G NR represents a key milestone in the next mobile generation, enabling faster speeds and enhanced capacity at lower cost per bit. Even as select cities begin to experience 5G NR today, the full breadth of 5G's potential will take several years to arrive, which will require additional standards work and trials, particularly related to a 5G NG core.
In addition to 5G NR and 5G NG core, procurement patterns indicate communications service providers (SPs) will need to invest in adjacent domains, including backhaul and NFVI, to support the continued push to cloud-native, software-led architectures.
Combined, IDC expects the total 5G and 5G-related network infrastructure market (5G RAN, 5G NG core, NFVI, routing and optical backhaul) to grow from approximately $528 million in 2018 to $26 billion in 2022 at a compound annual growth rate (CAGR) of 118%. IDC expects 5G RAN to be the largest market sub-segment through the forecast period, in line with prior mobile generations.
"Early 5G adopters are laying the groundwork for long-term success by investing in 5G RAN, NFVI, optical underlays, and next-generation routers and switches. Many are also in the process of experimenting with the 5G NG core. The long-term benefit of making these investments now will be when the standards-compliant SA 5G core is combined with a fully virtualized, cloud-ready RAN in the early 2020s. This development will enable many communications SPs to expand their value proposition and offer customized services across a diverse set of enterprise verticals through the use of network slicing," says Patrick Filkins, senior research analyst, IoT and Mobile Network Infrastructure.
Businesses are still a long way from realizing the benefits of much-hyped digital transformation initiatives, according to new research.
Organizations have high expectations for digital transformation, but new research from SnapLogic has uncovered that 40% of enterprises are either behind schedule with their digital transformation projects or haven’t started them yet.
Delayed projects aren’t the only factor keeping ITDMs up at night. The new research, conducted by Vanson Bourne, also revealed that 69% have had to reevaluate their digital transformation strategy entirely, and as a result, 59% would do it differently if given another chance. In fact, only 13% of ITDMs are completely confident they are on course to achieving their digital transformation goals.
When asked why they are struggling with digital transformation, 58% admitted that there is confusion within their organization about what they’re trying to achieve – an alarming result.
Digital Transformation Snags
Internal politics (34%), a lack of centralized ownership (22%), and a lack of senior management buy-in (17%) were identified as common roadblocks to digital transformation. In addition, 55% of ITDMs noted that a reliance on legacy technologies and/or a lack of the right technologies within their organization was holding them back, while 33% were stalled by a lack of the right skilled talent, and 31% reported that data silos were causing problems.
Additionally, 20% of organizations didn’t test or pilot their digital transformation projects before deploying them company-wide, and shockingly, 21% of ITDMs continued to roll out a company-wide digital transformation despite unsuccessful pilot programs in one part of the business.
Gaurav Dhillon, CEO at SnapLogic, commented on the findings: “Despite all the noise around digital transformation in recent years, it’s clear from our research that there’s still much work to be done to help organizations be successful. While some companies may take solace in knowing they are not the only ones struggling, for those of us in the technology industry this is a stark wake up call that we must do a better job advising, partnering with, and supporting customers in their digital transformation journey if we are to ever see the reality of a digital-first economy.”
Delivering on the Promise
Make no mistake: The promise of digital transformation is huge. For those about to undertake an enterprise-wide digital transformation, ITDMs expect to see the following results upon completion, on average: increased revenue of 13%, increased market share of 13%, reduced operating costs of 14%, increased business speed and agility of 16%, improved customer satisfaction of 18%, and reduced product development time of 15%.
But businesses are never going to achieve these gains unless they’re able to overcome key hurdles and turn the corner. For those who’ve completed a digital transformation, successful or otherwise, ITDMs identified the three most critical steps to success as investing in the right technologies and tools, involving all departments in strategy development, and investing in staff training.
The promise of new technologies in particular is seen as having a significant impact on digital transformation success. Case in point: 68% consider artificial intelligence and machine learning as vital to accelerating their digital transformation projects.
Dhillon concluded: “Digital transformation doesn’t happen overnight, and there’s no silver bullet for success. To succeed with digital transformation, organizations must first take the time to get the right strategy and plans in place, appoint senior-level leadership and ensure the whole of the organization is on-board and understands their respective roles, and embrace smart technology. In particular, enterprises must identify where they can put new AI or machine learning technologies to work; if done right, this will be a powerful accelerant to their digital transformation success.”
Bitglass has released its 2018 BYOD Security Report. The analysis is based on a survey of nearly 400 enterprise IT experts who revealed the state of BYOD and mobile device security in their organisations.
According to the study, 85 percent of organisations are embracing bring your own device (BYOD). Interestingly, many organisations are even allowing contractors, partners, customers, and suppliers to access corporate data on their personal devices. Amidst this BYOD frenzy, over half of the survey’s respondents believe that the volume of threats to mobile devices has increased over the past twelve months.
“While most companies believe mobile devices are being targeted more than ever, our findings indicate that many still lack the basic tools needed to secure data in BYOD environments,” said Rich Campagna, CMO of Bitglass. “Enterprises should feel empowered to take advantage of BYOD’s myriad benefits, but must employ comprehensive, real-time security if they want to do so safely and successfully.”
“The IoT will continue to deliver new opportunities for digital business innovation for the next decade, many of which will be enabled by new or improved technologies,” said Nick Jones, research vice president at Gartner. “CIOs who master innovative IoT trends have the opportunity to lead digital innovation in their business.”
In addition, CIOs should ensure they have the necessary skills and partners to support key emerging IoT trends and technologies, as, by 2023, the average CIO will be responsible for more than three times as many endpoints as this year.
Gartner has shortlisted the 10 most strategic IoT technologies and trends that will enable new revenue streams and business models, as well as new experiences and relationships:
Trend No. 1: Artificial Intelligence (AI)
Gartner forecasts that 14.2 billion connected things will be in use in 2019, and that the total will reach 25 billion by 2021, producing an immense volume of data. “Data is the fuel that powers the IoT and the organization’s ability to derive meaning from it will define their long term success,” said Mr. Jones. “AI will be applied to a wide range of IoT information, including video, still images, speech, network traffic activity and sensor data.”
The technology landscape for AI is complex and will remain so through 2023, with many IT vendors investing heavily in AI, variants of AI coexisting, and new AI-based tools and services emerging. Despite this complexity, it will be possible to achieve good results with AI in a wide range of IoT situations. As a result, CIOs must build an organization with the tools and skills to exploit AI in their IoT strategy.
Trend No. 2: Social, Legal and Ethical IoT
As the IoT matures and becomes more widely deployed, a wide range of social, legal and ethical issues will grow in importance. These include ownership of data and the deductions made from it; algorithmic bias; privacy; and compliance with regulations such as the General Data Protection Regulation.
“Successful deployment of an IoT solution demands that it’s not just technically effective but also socially acceptable,” said Mr. Jones. “CIOs must, therefore, educate themselves and their staff in this area, and consider forming groups, such as ethics councils, to review corporate strategy. CIOs should also consider having key algorithms and AI systems reviewed by external consultancies to identify potential bias.”
Trend No. 3: Infonomics and Data Broking
Last year’s Gartner survey of IoT projects showed 35 percent of respondents were selling or planning to sell data collected by their products and services. The theory of infonomics takes this monetization of data further by seeing it as a strategic business asset to be recorded in the company accounts. By 2023, the buying and selling of IoT data will become an essential part of many IoT systems. CIOs must educate their organizations on the risks and opportunities related to data broking in order to set the IT policies required in this area and to advise other parts of the organization.
Trend No. 4: The Shift from Intelligent Edge to Intelligent Mesh
The shift from centralized and cloud to edge architectures is well under way in the IoT space. However, this is not the end point, because the neat set of layers associated with edge architecture will evolve to a more unstructured architecture comprising a wide range of “things” and services connected in a dynamic mesh. These mesh architectures will enable more flexible, intelligent and responsive IoT systems — although often at the cost of additional complexities. CIOs must prepare for mesh architectures’ impact on IT infrastructure, skills and sourcing.
Trend No. 5: IoT Governance
As the IoT continues to expand, the need for a governance framework that ensures appropriate behavior in the creation, storage, use and deletion of information related to IoT projects will become increasingly important. Governance ranges from simple technical tasks such as device audits and firmware updates to more complex issues such as the control of devices and the usage of the information they generate. CIOs must take on the role of educating their organizations on governance issues and in some cases invest in staff and technologies to tackle governance.
Trend No. 6: Sensor Innovation
The sensor market will evolve continuously through 2023. New sensors will enable a wider range of situations and events to be detected, current sensors will fall in price to become more affordable or will be packaged in new ways to support new applications, and new algorithms will emerge to deduce more information from current sensor technologies. CIOs should ensure their teams are monitoring sensor innovations to identify those that might assist new opportunities and business innovation.
Trend No. 7: Trusted Hardware and Operating System
Gartner surveys invariably show that security is the most significant area of technical concern for organizations deploying IoT systems. This is because organizations often don’t have control over the source and nature of the software and hardware being utilised in IoT initiatives. “However, by 2023, we expect to see the deployment of hardware and software combinations that together create more trustworthy and secure IoT systems,” said Mr. Jones. “We advise CIOs to collaborate with chief information security officers to ensure the right staff are involved in reviewing any decisions that involve purchasing IoT devices and embedded operating systems.”
Trend No. 8: Novel IoT User Experiences
The IoT user experience (UX) covers a wide range of technologies and design techniques. It will be driven by four factors: new sensors, new algorithms, new experience architectures and context, and socially aware experiences. With an increasing number of interactions occurring with things that don’t have screens and keyboards, organizations’ UX designers will be required to use new technologies and adopt new perspectives if they want to create a superior UX that reduces friction, locks in users, and encourages usage and retention.
Trend No. 9: Silicon Chip Innovation
“Currently, most IoT endpoint devices use conventional processor chips, with low-power ARM architectures being particularly popular. However, traditional instruction sets and memory architectures aren’t well-suited to all the tasks that endpoints need to perform,” said Mr. Jones. “For example, the performance of deep neural networks (DNNs) is often limited by memory bandwidth, rather than processing power.”
By 2023, it’s expected that new special-purpose chips will reduce the power consumption required to run a DNN, enabling new edge architectures and embedded DNN functions in low-power IoT endpoints. This will support new capabilities such as data analytics integrated with sensors, and speech recognition included in low cost battery-powered devices. CIOs are advised to take note of this trend as silicon chips enabling functions such as embedded AI will in turn enable organizations to create highly innovative products and services.
Trend No. 10: New Wireless Networking Technologies for IoT
IoT networking involves balancing a set of competing requirements, such as endpoint cost, power consumption, bandwidth, latency, connection density, operating cost, quality of service, and range. No single networking technology optimizes all of these, and new IoT networking technologies will provide CIOs with additional choice and flexibility. In particular, they should explore 5G, the forthcoming generation of low earth orbit satellites, and backscatter networks.
The human touch and AI
CIOs need to prepare workers for a future in which people do more creative and impactful work because they no longer have to perform many routine and repetitive tasks, according to Gartner, Inc. People and machines are entering a new era of learning in which artificial intelligence (AI) augments ordinary intelligence and helps people realize their full potential.
“Pairing AI with a human creates a new decision-making model in which AI offers new facts and options, but the head remains human, as does the heart,” said Svetlana Sicular, research vice president at Gartner. “Let people identify the most suitable answers among the different ones that AI offers, and use the time freed by AI to create new things.”
Survey Finds AI Eliminates Fewer Jobs Than Expected — and Can Create Them
AI will change the workforce, but its impact will not be detrimental to all workers. According to a Gartner survey conducted in the first quarter of 2018 among 4,019 consumers in the U.K. and U.S., 77 percent of employees whose employers have yet to launch an AI initiative expect AI to eliminate jobs. But the same survey found that only 16 percent of employees whose employers have actually adopted AI technologies saw job reductions, and 26 percent of respondents reported job increases after adopting AI.
People will learn how to do less routine work. They will be trained in new tasks, while old tasks that have become routine will be done by machines. “The human is the strongest element in AI,” explained Ms. Sicular. “Newborns need an adult human to survive. AI is no different. The strongest component of AI is the human touch. AI serves human purposes and is developed by people.”
There are, for example, cases in which consulting an AI system has saved money, time and distress. Not consulting an AI system may actually become unethical in the future — in the field of medicine, for example. “The future lies not in a battle between people and machines, but in a synthesis of their capabilities, of humans and AI working together,” said Helen Poitevin, research senior director at Gartner.
People and Machines: A New Era of Learning
Although AI will give employees the time to do more, organizations will need to train and retrain their employees in anticipation of AI investments.
“We are entering an era of continuous learning,” said Ms. Sicular. “People will have to learn how to live with AI-enabled machines. When machines take away routine tasks, people will have the time to do more new tasks, so they will need to constantly learn.” People will also need to learn how to train AI systems to be useful, clear and trustworthy, in order to work alongside them cooperatively.
“It’s about trust and engagement. People need to trust machines — this is the ultimate condition of AI adoption and success,” said Ms. Sicular.
Choose Your AI Teacher Well
CIOs are likely to be the leaders or instigators of AI initiatives in their organizations. While machine learning focuses on creating new algorithms and improving the accuracy of “learners”, the machine teaching discipline focuses on the efficacy of “teachers”.
The role of teacher is to transfer knowledge to the learning machine, so that it can generate a useful model that can approximate a concept. Machine learning models and AI applications derive intelligence from the available data in ways that people direct them to do. “These technologies learn from teachers, so it is essential to choose your teachers well,” said Ms. Sicular. “While some may think AI is hard for people, we can also ask ourselves ‘Are people easy for AI?’”
“As a CIO, you will shape the future of work by how you invest in technology and people,” said Ms. Poitevin. “Today, the majority of the CIOs we speak to find themselves amid a proliferation of “bots” (physical robots and software virtual assistants). Adoption of bots in the workplace is rising as workers become increasingly comfortable working with machines and grow more supportive of them.”
CIOs believe bots will be integral to our daily lives, wherever we would ‘rather have a bot do it’. “By 2020, AI technologies will pervade almost every new software product and service,” said Ms. Poitevin. “Well-designed robots and virtual assistants will be embraced and seen as helping employees focus on their best work by relieving them of a mountain of mundane work.”
IDC predicts direct digital transformation spending of $5.9 trillion over the years 2018 to 2021.
As the top digital transformation (DX) market research firm in the world, International Data Corporation (IDC) today unveiled IDC FutureScape: Worldwide Digital Transformation 2019 Predictions. In this year's DX predictions, IDC has identified two DX groups based on specific trends and attributes. Leaders in transformation (the digitally determined) are those organizations that have aligned the necessary elements of people, process, and technology for success. In contrast, laggards (the digitally distressed) have not developed the enterprise strategy necessary to align the organization effectively for transformation to date. IDC's market-leading analytical understanding and insights are the result of extensive market research and survey data from over 3,000 companies worldwide.
IDC analysts Bob Parker and Shawn Fitzgerald recently discussed the ten industry predictions that will impact digital transformation efforts of CIOs and IT professionals over the next one to five years and offered guidance for managing the implications these predictions harbor for their IT investment priorities and implementation strategies.
The predictions from the IDC FutureScape for Worldwide Digital Transformation are:
"With direct digital transformation (DX) investment spending of $5.5 trillion over the years 2018 to 2021, this topic continues to be a central area of business leadership thinking," said Shawn Fitzgerald, research director, Worldwide Digital Transformation Strategies. "IDC's 2019 DX predictions represent our perspective on the major transformation trends we expect to see over the next five years. With almost 800 business use cases spanning 16 industries and eight functional areas, our DX spending guides illustrate where industry is both prioritizing digital investments and where we expect to see the largest growth in 3rd Platform and innovation accelerator technologies."
2018 saw numerous changes being made within the Data Centre Trade Association. The launch of a new website makes it easier for data centre owners and operators to find the information they require, and the new members portal has been designed to give a much better look and feel to the dedicated members pages.
The DCA - 2019
Having listened to our members and the wider data centre community over the past 12 months, our plans for 2019 are to continue to build on our identified areas of focus, provide independent advice and guidance to data centre owners, and deliver more opportunities for members to engage, collaborate and gain value from their trade association.
The workshops planned for 2019 will continue to educate the sector on the latest best practices and to provide updates on new developments. DCA events provide great networking and collaboration opportunities. Our social media channels and monthly publications continue to deliver and expand, providing ‘thought leadership’ and knowledge sharing content opportunities for our members – again all designed to assist data centre owners and operators.
Providing members with independent representation remains a key deliverable for The DCA in 2019. The DCA will continue to provide members with both a voice and a seat at the table at national, EU and international levels, ensuring key issues are flagged and that the sector's voice is heard. Keeping one step ahead of policy is vital when it comes to steering the strategic direction for businesses. The DCA will continue to work closely with the EU and lobbying groups such as Policy Connect, via APPGs, to ensure that members' concerns are heard.
An increased reliance on digital services in both our business and daily lives brings increased scrutiny. The word ‘regulation’ seems to be cropping up in conversations more and more. I am an advocate of scrutiny; however, I still prefer self-regulation to imposed regulation.
The DCA will continue to encourage its members to lead by example through the endorsement and adoption of best practices, the EU Code of Conduct and independent validation through approved certification bodies.
The DCA will continue to support active members who are involved in Certification and Special Interest Groups (SIGs). These working groups are run for and by DCA members under Chatham House Rules. Intrinsically linked to many of these SIGs, The DCA will continue to support EN/EU and ISO standards committees to ensure new and existing best practices, codes and standards remain fit for purpose, accessible and affordable. Newly formed SIGs include a Workforce Development Group, Liquid Cooling Alliance, Open Compute Project (OCP) Working Group and Life Cycle Group.
The challenge of how to keep up with demand remains a daunting prospect for many businesses. Jobs need to be filled right across the sector, especially when it comes to practical engineers, technicians, installers and operational/maintenance staff. Data centres themselves may not employ a large workforce, but the suppliers who support the sector do, and the shortfall in available candidates to fill these roles is growing fast. The DCA recognises that to fill this gap we need to engage and attract a new generation of workforce at a much earlier stage in the educational cycle. In 2019 we will ramp up the work we are doing with STEM Learning to promote the DC sector as a career of choice. Our aim is to recruit and support STEM Ambassadors from right across our membership to help in this effort.
In summary, there is a great deal to look forward to in 2019, and as the year unfolds the Data Centre Trade Association will continue to be here to support you.
For more information and to contact The DCA (Data Centre Alliance) Trade Association
By Luca Rozzoni, Senior Product Manager for Europe, Chatsworth Products, Inc.
Computing at the edge is blurring the boundaries of the traditional data centre. The growth of the Internet of Things (IoT) means more data processing is being done on an ever-increasing range of smart devices in non-traditional locations such as manufacturing floors, warehouses and outdoors. As the network spreads, such critical factors as power management, cooling and physical security are taking on expanded roles – and even greater importance – in network operations. As we look to 2019 and beyond, we forecast a more holistic approach to data centre operations, in which infrastructure, hardware and software are addressed as a unified system.
In Europe, we are seeing fast adoption and rapid growth of IoT across markets. Keeping pace with the increasing demands of edge and digital building initiatives next year will require support and protection of critical equipment, regardless of where it is located. Like the proverbial chain that is only as strong as its weakest link, any limitations in network design or performance will result in unacceptable quality of service. Playing a key role in maintaining network operations will be advanced rack and cable management solutions that can sustain the rigors of today’s technology demands.
Intelligent Power May Get More Traction
While rack densities continue to rise in Europe, they are still low compared with some other markets. An average data centre deploys 2-4 kilowatts (kW) per rack (in some isolated cases we see that being pushed to 10-15 kW). For that reason, a large portion of power deployments are served by basic metered power distribution units (PDUs). Adoption of the new generation of intelligent PDUs (monitored/switched) has been slow, with price being the main stumbling block to greater market entry.
This situation should continue to evolve in 2019, because higher-density computing will require ever-more complex power distribution, monitoring and reporting. And even if rack densities do not increase significantly, compute power is increasing. As chip manufacturers add cores to processors (CPUs), Moore’s law will drive continued increases in computing per watt. Further, the size of the CPU package continues to increase, so the heat flux due to the CPU is decreasing. Basically, servers are more power efficient and support higher utilisation.
So rack densities will probably not climb significantly, but the amount of compute power (utilisation) per rack will. Further into the future, we will have to keep an eye on developments in artificial intelligence (AI). Typically, the PCI cards required to drive AI run at 100-percent power when models are being trained. When AI takes off, we expect extremely large and sustained loads on the data centre that will increase the average workload.
The latest examples of products that can help reduce the complexity of delivering power to equipment includes three-phase PDUs equipped with power monitoring that stretches across the enterprise. Also accessible through an IP connection, they enable an IT team to monitor anything from anywhere—all the way down to the device level. For all these reasons, we expect to see a growing interest in intelligent PDUs in 2019 as customers prepare for the next technology upgrades.
Growing Focus on Airflow Management
As equipment densities continue to increase, airflow management will become an even more vital practice for optimising energy efficiency and maintaining enterprise uptime. Partial containment is still widely deployed, but methods exist to maximise thermal efficiencies (free cooling) by following best thermal management practices. Within the rack, for example, good airflow management requires the use of snap-in filler panels to block open rack-mount spaces as well as air dams to block airflow around the sides and top of equipment. Additionally, passive cooling solutions and vertical exhaust ducts (chimney) help isolate hot exhaust air from cooled air, reducing cooling demands at rack and room levels. With the added benefit of lower energy costs for the end user, we can expect to see greater deployment of this approach to providing equipment cooling performance throughout the data centre.
For this to happen, solution vendors need to provide more education to users on thermal management practices in 2019. The No. 1 thermal management question that vendors hear is “Can I manage higher density without added cooling capacity?” This indicates that many do not yet grasp the benefits of passive cooling in reducing energy consumption and lowering construction and operational costs. There is a common perception that, for high-density environments, it is safer to deploy active cooling devices such as in-row cooling. The majority of data centres continue to oversupply cold air in order to overcome inefficiencies.
We know that customers would like to support higher rack densities within acceptable operational temperatures. At these higher rack densities, however, adding more air conditioning to the room isn’t an effective option. Solutions like Passive Cooling® from Chatsworth Products provide ideal airflow to cool each rack even if the room design limits the amount of airflow volume. They meet the needs of these applications within the architectural limitations of the facility, completely segregating hot and cold air, and can be applied at the cabinet or aisle level, providing increased equipment cooling performance in all elements of the data centre mechanical plant.
Protecting Data and Privacy is a Growing Concern
Finally, security will remain a major area of focus in 2019, as data breaches have become a growing concern among data centre managers, CIOs and end users. The main concern appears to be linked to security from a physical and cyber standpoint. We see a considerable number of enquiries related to electronic access control with two layers of authentication (mostly biometric) and monitoring devices to better control and manage all the devices deployed in the data centre.
In terms of market growth in 2019, we anticipate some positive movement in a number of areas, including cloud applications, colocation, enterprise data centres, and more development at the edge of the network. Cloud and colocation data centres should see the fastest growth and expansion, as large corporate users seek ways to reduce costs related to their IT network deployments and management.
EURECA was a three-year project (March 2015 - February 2018) funded by the European Commission’s Horizon 2020 Research and Innovation programme, with partners from the United Kingdom, Ireland, the Netherlands and Germany. It aimed to help address the data centre energy efficiency challenge in the European public sector by supporting public authorities in adopting a modern, innovative procurement approach.
EURECA reinforced the consolidation of newly built and retrofit data centre projects in various European countries, with a focus on Public Procurement for innovation (PPI). Additionally, EURECA supported the development of European standards, best practices and policies related to energy efficiency in data centres and green public procurement. This was done by providing scientific evidence and data.
For further information, please visit the EURECA project website: www.dceureca.eu
The EURECA team designed various innovative engagement methodologies that used state-of-the-art models and tools. The project supported consolidation, new-build and retrofit data centre projects in member states. This resulted in savings of over 131 GWh/year of primary energy (52.5 GWh/year of end-use energy) from pilots supported within the project lifetime in Ireland, the Netherlands and the United Kingdom (plus various ongoing ones in other EU member states). This equated to more than 27,830 tCO2/year in savings, with annual electricity bill savings of €7.159 million. This was achieved from working on pilots involving 337 data centres.
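The savings figures hang together: the conversion factors implied by the article's numbers can be back-computed, and they land on plausible European grid values. A minimal sketch (the derived factors below are my inference, not stated in the source):

```python
# Back-compute the conversion factors implied by the EURECA savings figures.
# (These factors are inferred from the article's numbers, not stated in it.)
end_use_gwh = 52.5   # end-use electricity saved per year
primary_gwh = 131    # primary energy saved per year
co2_kt      = 27.83  # thousand tCO2 saved per year
bill_eur_m  = 7.159  # annual electricity bill savings, EUR millions

primary_factor    = primary_gwh / end_use_gwh             # primary-to-end-use ratio
carbon_kg_per_kwh = co2_kt * 1e6 / (end_use_gwh * 1e6)    # kgCO2 per kWh
price_eur_per_kwh = bill_eur_m * 1e6 / (end_use_gwh * 1e6)

print(f"primary energy factor ~ {primary_factor:.2f}")              # ~2.50
print(f"carbon intensity      ~ {carbon_kg_per_kwh:.2f} kgCO2/kWh") # ~0.53
print(f"electricity price     ~ {price_eur_per_kwh:.3f} EUR/kWh")   # ~0.136
```

A primary energy factor of about 2.5 and a carbon intensity of about 0.53 kgCO2/kWh are consistent with common European grid conversion assumptions of the period.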
EURECA influenced various initiatives related to policy, such as:
EURECA contributed to several standards, including the EN50600 series on data centres. EURECA team members also play an active role in developing the EU Code of Conduct for data centre energy efficiency.
Finally, EURECA trained over 815 stakeholders through 10 face-to-face training events, held across Europe. To know more about EURECA training, please read this article.
The feedback received from the European Commission on the overall evaluation of the project stated that:
"The project has delivered exceptional results with significant immediate or potential impact”.
Data Centre Market Directory
As part of the EURECA project, a vendor-neutral open market directory was established for the European data centre market. This directory currently lists over 250 data centre products and services available to the European market. It is hosted by the EC Lab, the EURECA project coordinator.
So, if your business provides data centre related products and/or services to the European market (irrespective of company size), you are welcome to list your offerings here (DCdirectory.eu) for FREE.
Scientific research on hardware refresh rates was conducted under the EURECA project. It was referenced by the Amsterdam Economic Board, Netherlands, in June 2018, in a report that provided policy guidance in the field based on the findings of this work.
In September 2018, European member states voted to implement regulation (EU) No 617/2013 under Directive 2009/125/EC, focusing on the Ecodesign requirements for servers and online data storage products. Computer Weekly interviewed the EURECA coordinator, who was a key player in supporting the legislation and who shared some of the research findings that provided supporting evidence to the policymakers.
EURECA in the news
In February 2018, Computer Weekly magazine published an interview with the project coordinator on the energy consumption of public sector data centres. The article discussed the EURECA project and revealed for the first time some of the project findings, such as the size and running cost of European public sector data centres.
In October 2018, the BBC Reality Check team interviewed Dr Rabih Bashroush about the energy consumption of streaming media and the overall trends in data centre energy consumption. Based on this, they published an article titled: “Climate change: Is your Netflix habit bad for the environment?”
The rich body of knowledge produced by the EURECA project, along with the impact already created, ensures a lasting legacy well beyond the project lifetime.
The EURECA team plans to:
References
"Circulaire Dataservers" (in Dutch), Amsterdam Economic Board, Netherlands, June 2018.
"EU-backed bid to cap idle energy use by datacentre servers moves closer", Computer Weekly, 19 September 2018. https://www.computerweekly.com/news/252448914/EU-backed-bid-to-cap-idle-energy-use-by-datacentre-servers-moves-closer
"The EURECA moment: Counting the cost of running the UK's public sector datacentres", Computer Weekly, 20 February 2018. http://www.computerweekly.com/feature/The-EURECA-moment-Counting-the-cost-of-running-the-UKs-public-sector-datacentres
Dr Umaima Haider is a Research Fellow at the University of East London within the EC Lab (https://www.eclab.uel.ac.uk). She held a visiting research scholar position at Clemson University, South Carolina, USA. Her main research interest is in the design and implementation of energy efficient systems, especially in a data centre environment.
Tool Kit to help Data Centres assess their environmental impact
By Vasiliki Georgiadou, Project Manager, Green IT Amsterdam
The European CATALYST innovation project has launched a tool kit that helps data centres self-assess their environmental impact. The tool kit makes use of existing and well-known standards like EN 50600 and the EU Code of Conduct. It helps owners and operators prepare for a future in which data centres are subject to new and ever more stringent rules and regulations. But perhaps most importantly, it can help them reduce costs and develop new revenue streams by becoming an integral part of smart energy grids.
The data centre industry is changing rapidly. We obviously still need data centres to host cloud services, enterprise applications or simply our holiday pictures. At the same time we see a trend where mounting societal pressure requires data centres to become greener. In other words: to minimize their usage of natural resources like energy and water. How can a data centre owner or operator lower the environmental impact of their facilities if we do not have a well-defined and structured method in place to help them assess how good (or bad) their environmental performance is?
This is why the European CATALYST innovation project has developed a tool kit that helps data centres perform a self-assessment of their environmental impact. The assessment will give a data centre operator a useful classification of how their facility performs relative to a number of well-known standards and a set of new services that will be introduced by the CATALYST project in the near future.
The tool kit in this first edition looks at four themes: renewable energy, heat reuse, energy efficiency and resource management (energy, water and more). Its aim is by no means to develop a new standard. Its purpose is to help the data centre industry, engineering firms, (local) governments and others assess how a data centre is performing relative to existing standards. At the heart of the tool kit lies the so-called Value Added Plan that - based on an initial assessment - helps a data centre facility to improve upon its environmental impact.
The tool kit consists of a number of building blocks and workflows. The workflows guide the user through a process that helps assess the performance of a facility based on standards like EN 50600. The tool kit also uses metrics that have been developed by The Green Grid and a number of research and innovation projects like All4Green, CoolEmAll, GreenDataNet, RenewIT, GENIC, DOLFIN, DC4CITIES and GEYSER.
The assessment provides a data centre with a simple method of keeping score on its performance through its grades: Bronze, Silver and Gold. The tool kit produces a separate grade per theme. In that way, data centre owners or operators are able to set their own priorities and decide for themselves which themes are most relevant to their facilities or business model. The aforementioned Value Added Plan makes it possible for a facility to put in place the measures that help it achieve a higher grade for a chosen theme. Although the assessment is not meant to compare data centres - it is not a benchmark - it does give individual facilities a very good insight into their environmental performance and impact.
Classifying their environmental impact will also help data centres to better understand and communicate the role they would like to play in the energy transition Europe will be going through over the coming years. More and more, we see a trend where data centres become part of smart grids and heat networks. Many data centres are very well suited to becoming energy hubs that help store and supply energy and also help stabilize the increasingly complex grids of many European countries, with the financial benefits that might come with such a role. By understanding and improving their environmental impact in terms of energy usage, heat reuse and other variables, they might even be able to develop new revenue streams.
By Leo Craig, General Manager of Riello UPS
Making any prediction opens you up to the possibility of being proved wrong, although it's safe to say that 2019 will see demand for data centre capacity continue to grow.
According to Equinix, the volume of data created by businesses across Europe increases by 48% every year. And with the number of connected devices in the UK alone set to top 600 million by 2023, our personal and professional lives depend more on the storage and processing powers of our data centres than ever.
Indeed, there's currently more than 60,000 m² of new data centre space under construction, which will take total UK capacity above 900,000 m² once completed.
This expansion doesn't come without cost. Even though the industry goes to great lengths to improve efficiency, data centres are power-hungry beasts. They'll consume a fifth of all the world's energy by 2025.
Here in the UK, we face the ongoing challenge of balancing rising demand with limited electrical supply. Our energy mix is changing forever. Nearly 23 GW of thermal capacity has gone offline over the last decade, with a further 24 GW of coal and nuclear set to go by 2025.
Renewables are picking up some of that slack, with sources such as solar and wind supplying a record 31.7% of the country's electricity in Q2 2018.
But the only truly sustainable solution is a smart grid that better matches supply with demand in real-time. A decentralised National Grid, with a diverse range of power sources all interconnected and able to turn off demand or increase generation on a second-by-second basis to deliver secure, stable, and reliable power 365 days a year.
Battery storage isn't a new concept, but its potential is undeniable. If just 5% of peak demand is met by demand side response (DSR), it would be equivalent to the electricity generated by a new nuclear power station.
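That equivalence can be sanity-checked with some rough arithmetic. The figures below (UK peak demand of roughly 60 GW, a large new nuclear plant at roughly 3.2 GW) are illustrative assumptions for the sake of the calculation, not numbers taken from this article:

```python
# Rough sanity check of the "5% of peak demand is comparable to one nuclear
# power station" claim. All figures are illustrative assumptions.
uk_peak_demand_gw = 60.0     # assumed approximate UK winter peak demand
dsr_share = 0.05             # 5% of peak met by demand side response
nuclear_station_gw = 3.2     # assumed output of a large new nuclear plant

dsr_capacity_gw = uk_peak_demand_gw * dsr_share
print(f"DSR capacity at 5% of peak: {dsr_capacity_gw:.1f} GW")
print(f"Within 1 GW of one station: {abs(dsr_capacity_gw - nuclear_station_gw) < 1.0}")
```

Under those assumptions, 5% of peak works out at 3 GW, which is indeed in the same ballpark as a single large nuclear station.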
Data centres have been slow on the uptake though, with the need for 100% uptime a far bigger priority than the potential rewards for taking part. But is 2019 when that tide turns?
Research earlier this year revealed 83% of mission-critical sites would participate in DSR if it didn’t negatively impact on core activity. Reliability will always come first, but what if there’s a way to take advantage of battery storage and improve system resilience at the same time?
Every data centre has uninterruptible power supplies (UPS) installed as an invaluable backup that keeps their IT equipment running whenever there’s a power problem. But in reality, how often is that backup actually required?
While it's essential to have the safety net a UPS provides, the fact is it's an expensive and often underutilised asset. Using a UPS's batteries for energy storage transforms a piece of reactive infrastructure into something that's working for your data centre 24/7.
There are a couple of caveats to keep in mind. Firstly, using a UPS to feed energy back to the grid is far easier if the system uses lithium-ion (Li-ion) batteries rather than the more traditional sealed lead-acid (SLA). This does result in a more expensive initial investment. However, the cost of Li-ion has fallen by 79% since 2010 and is predicted to drop further to roughly £50 per kWh by 2030, making it an increasingly viable option.
Li-ion cells deliver the same power density in less than half the space and weight, recharge much faster, and have 50 times the cycle life of SLA. In addition, they can operate at temperatures of up to 40°C – significantly reducing air conditioning costs and potentially eliminating the need for a separate battery room.
Lithium-ion batteries also last for 10-15 years, during which time an SLA would need replacing up to three times, meaning the total cost of ownership over the course of a decade could be anything from 10-40% less.
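The shape of that 10-40% saving can be illustrated with a simple total-cost-of-ownership sketch. Every figure below (purchase prices, replacement counts, cooling overheads) is a hypothetical assumption chosen only to show how the comparison works, not a quote from this article or from any vendor:

```python
# Illustrative 10-year total cost of ownership (TCO) comparison between Li-ion
# and sealed lead-acid (SLA) UPS batteries. All prices are hypothetical.
def tco(initial_cost, replacements, replacement_cost, annual_cooling_cost, years=10):
    """Total cost over `years`: initial purchase + replacements + cooling overhead."""
    return initial_cost + replacements * replacement_cost + annual_cooling_cost * years

# Assumed figures for a mid-sized UPS battery string:
sla_tco = tco(initial_cost=20_000, replacements=3, replacement_cost=20_000,
              annual_cooling_cost=1_500)            # SLA replaced up to 3 times
liion_tco = tco(initial_cost=60_000, replacements=0, replacement_cost=0,
                annual_cooling_cost=500)            # higher temperature tolerance cuts cooling

print(f"SLA 10-year TCO:    £{sla_tco:,}")
print(f"Li-ion 10-year TCO: £{liion_tco:,}")
print(f"Li-ion saving: {1 - liion_tco / sla_tco:.0%}")
```

With these assumed inputs, the pricier Li-ion string still comes out around a third cheaper over the decade, consistent with the 10-40% range cited above; the real figure depends entirely on local prices and duty cycles.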
Secondly, energy storage isn’t for everyone. To be commercially viable, an organisation needs to be a substantial energy user with sizeable annual electricity bills.
The biggest UK data centres consume 30 GWh a year, equating to an annual bill of around £3 million. Of course, that's the highest end of the spectrum, but there are plenty of facilities with electricity costs in the £500,000-£1 million range suitable to take part.
Exploring The Practicalities
So how can data centres tap into their UPS batteries for energy storage? There are three main incentives the National Grid offers to help balance the network.
There’s the Capacity Market, which is run as an auction in which successful applicants receive long-term revenues to either invest in additional generation or ensure existing power is always available. This is a serious commitment though, as you must be able to deliver the required energy throughout the entire contract.
A more realistic route for data centres could be providing Reserve Services, which basically covers unexpected increases in demand or a lack of generation.
The most common mechanism is the Short Term Operating Reserve (STOR), where you either reduce demand or increase generation with around 10 minutes’ notice. Payments are guaranteed for two years but you must have the capability to deliver at least three times a week.
Other Reserve Services include Demand Turn-Up, where businesses receive a fee to use energy at times where there’s a surplus – mainly overnight, so it’s only an option if you don’t have set requirements for when you need power – and Fast Reserve. This balances out demand surges, such as those massively popular TV programmes where everyone goes to turn the kettle on during the ad break.
Perhaps the most feasible – and currently most rewarding – entry point for data centres is Frequency Response, which helps keep grid frequency at a constant 50Hz, within a small margin either side.
Firm Frequency Response (FFR) is a challenge, as you must be capable of feeding the grid or reducing demand by 10 MW within 30 seconds of an event such as a power station tripping out. But Li-ion batteries are ideally suited thanks to their rapid response, fast ramp times, and capacity to continually generate and absorb power.
On average the grid needs 800 MW of FFR capacity, so there’s an ongoing, consistent demand that offers a strong return on the investment of deploying battery storage.
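At its core, an FFR-enabled UPS controller watches the grid frequency and dispatches the battery when it drifts outside a deadband. The sketch below is hypothetical logic to show the idea; the deadband value and function names are assumptions, not any real National Grid interface:

```python
# Hypothetical sketch of frequency response dispatch logic for a battery-backed
# UPS. The deadband is an illustrative assumption; a real FFR participant must
# complete its response within the contracted window (around 30 seconds).
NOMINAL_HZ = 50.0
DEADBAND_HZ = 0.2    # assumed no-action band around nominal frequency

def ffr_action(measured_hz):
    """Return the battery action for a given measured grid frequency."""
    deviation = measured_hz - NOMINAL_HZ
    if deviation < -DEADBAND_HZ:
        return "discharge"   # frequency low: feed stored energy to the grid
    if deviation > DEADBAND_HZ:
        return "charge"      # frequency high: absorb surplus energy
    return "idle"            # within deadband: no action needed

print(ffr_action(49.7))  # discharge
print(ffr_action(50.1))  # idle
print(ffr_action(50.4))  # charge
```

The ability of Li-ion batteries to switch between charging and discharging in well under a second is exactly why they suit this kind of symmetric, fast-acting service.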
Participating in DSR offers data centres a wide range of benefits. Obviously, there are the incentives on offer from the various mechanisms. Energy bills are lower too because mains supply isn’t required during more expensive peak periods.
The advantages aren’t simply financial though. Think back to the main perceived barrier, the fear that using batteries for anything other than emergency backup would compromise reliability.
But one of the main reasons a UPS doesn't kick in when required is battery failure. That's because SLA cells are notoriously difficult to monitor – if they haven't been called upon for a while, are you 100% sure they'll be up to the job when you actually need them?
On the other hand, Li-ion blocks must have advanced battery monitoring. Indeed, each individual cell is monitored to maintain balanced states of charge. So even though the batteries are ‘working harder’ by taking part in DSR, their overall resilience is enhanced thanks to the insight offered by the ongoing monitoring.
Playing Our Part
As we head into the New Year, data centre operators should ask themselves whether they want their UPS system to offer more than just an insurance policy. Wouldn’t they prefer something that generates value too?
A November report by trade association RenewableUK reveals the number of planning applications for battery storage has increased by 1,653% in the last three years, with nearly 7,000 bids in 2018 alone. Joint research by the Renewable Energy Association and the All-Party Parliamentary Group on Energy Storage predicts UK battery storage will top 8 GW by 2021.
The emerging pattern is clear. 2019 must be the year data centres stop standing on the sidelines and start taking a more active role in proactively powering our future energy needs.
By Steven Carlini, Vice President Innovation and Data Centre IT Division, CTO Office Schneider Electric
The last twelve months have presented some seriously interesting developments in the industry. One trend has gained much traction, as the majority of computing and storage continues to be funneled into the largest hyperscale and centralized data centres.
At Schneider Electric, we're also seeing a targeted move by the same Internet giants to occupy colocation facilities as tenants, deploying infrastructure and applications closer to their customers: the consumers of the data. This is driven somewhat by an insatiable demand for cloud computing, accompanied by the need to reduce latency and transmission costs, which has, in turn, resulted in the emergence of 'regional' edge computing – something that could also be described as smaller, or localized, versions of 'cloud stacks'.
We don’t believe this will stop at the regional edge. As the trend continues to drive momentum, it’s probable that we will begin to see smaller and more localized versions of these ‘cloud stacks’ popping up in the most unlikely of places.
A good example might be to think about the thought process behind Amazon's choice to purchase Whole Foods. The main driver for this was to enable a move into the grocery retail business – but what about the choice to use its supply rooms to house local edge data centres for AWS Greengrass or other AWS software? 2019 is the year when we will really see the edge creep closer to the users.
Against this backdrop, I’ve been thinking about what we can expect for 2019 and have come up with a few other predictions.
Need for speed in building Hyperscale data centres
The demand for cloud computing will neither subside nor slow down. 2019 will see it accelerate, which means the Internet Giants will continue to build more compute capacity in the form of hyperscale data centres. Market forces will demand they build these facilities increasingly quickly, meaning 10MW to 100MW projects will have to be designed, built and become operational - from start to finish - in less than a year.
One key to accomplishing such aggressive timeframes is the use of prefabricated, modular power skids. Power skids combine MW-sized UPS, switchgear (very large breakers) and management software in one package that is built in a factory. These systems are pre-tested and placed on a lowboy trailer, ready for reliable, "plug and play" deployment once they reach the final data centre site.
Since the lead time for this type of power equipment can, in some regions, be up to 12 months, having the solution built and ready to deploy eliminates delays during the critical path of the design and construction phase.
You might also consider that within such a facility's data halls, compute capacity will also become more modular, simply being rolled into place. Schneider Electric has created a rack-ready deployment model, already in use by many colocation data centres around the world.
In this solution, freestanding infrastructure backbones are assembled within the data halls while, at the same time, IT equipment is 'racked and stacked' inside scalable IT racks. These pre-populated IT racks can then be quickly and easily rolled into place, greatly reducing both the time and complexity for customers.
Worlds of IT and Telco data centres colliding
In order for 5G to deliver on its promise of 'sub-1 ms latency', it needs a distributed cloud computing environment that is scalable, resilient and fault-tolerant. This distributed cloud architecture will become virtualized in a new way called cloud-based radio access networks (cRAN).
cRAN moves processing from base stations at cell sites to a group of virtualized servers running in an edge data centre. From that perspective, I believe significant buildouts will occur worldwide in metro core clouds throughout 2019 and well into 2020.
You might think of these facilities as 'regional data centres', ranging from 500 kW to 2 MW in size, combining telco functionality (data routing and flow management) with IT functionality (data caching, processing and delivery).
While they will enable performance improvements, it's unlikely that they will be able to deliver on the promise of sub-1 ms latency due to their physical location. It's far more likely that we will begin to see sub-1 ms latencies when the massive edge core cloud deployment happens in 2021 and beyond. Here, localized micro data centres will provide the vehicle for ultra-low latency, with high levels of connectivity and availability.
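The physics behind that location argument is easy to sketch: even before any processing or queuing, propagation delay over fibre caps how far a server can sit from the user if the round trip must fit inside 1 ms. The two-thirds-of-c fibre speed factor below is a standard rule of thumb, not a figure from this article:

```python
# Why sub-1 ms latency pushes compute to the edge: round-trip propagation delay
# over optical fibre as a function of distance. Processing, queuing and
# radio-access delays would only add to these figures.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBRE_FACTOR = 0.67        # light travels at roughly two-thirds of c in fibre

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds over fibre."""
    return 2 * distance_km / (C_KM_PER_S * FIBRE_FACTOR) * 1000

for km in (5, 50, 200):
    print(f"{km:>4} km -> {round_trip_ms(km):.2f} ms round trip")
```

A 1 ms round-trip budget is exhausted by propagation alone at roughly 100 km of fibre, which is why a regional facility hundreds of kilometres away cannot meet the target while a micro data centre a few kilometres from the user comfortably can.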
Liquid cooling is coming
Artificial intelligence (AI) has begun to hit its stride, springing from research labs into real business and consumer applications, and those applications are placing massive demands on processing in data centres worldwide.
These applications are so compute-heavy that many IT hardware architects are using GPUs for core processing, or as supplemental processing. The heat profile of many GPU-based servers is double that of more traditional servers, with a TDP (thermal design power) of 300W vs 150W, which is one of the main drivers behind the renaissance of liquid cooling.
Liquid cooling has of course been in use within high performance computing (HPC) and niche applications for a while, but the new core applications for AI are placing demands on it in a much bigger and more intensive way.
Data Centre management moves to the cloud
DCIM systems were originally designed to gather information from infrastructure equipment in a single data centre, deployed as an on-premises software solution.
While newer cloud-based management solutions quite obviously live within the cloud, they enable the user to collect larger volumes of data from a broader range of IoT-enabled equipment. What's more, the software can be used across any number of large or small data centres, whether in one location or thousands.
These new systems, often described as DMaaS or Data Centre Management as a Service, use Big Data analytics to enable the user to make more informed, data driven decisions that mitigate unplanned events or downtime - far more quickly than traditional DCIM solutions. Being cloud-based, the software leverages “data lakes”, which store the collected information for future trend analysis, helping to plan operations at a more strategic level.
Cloud-based systems also simplify the task of deploying new equipment and upgrading existing installations with software updates across a number of locations. Managing upgrades on a site-by-site basis, especially at the edge, with only on-premises management software leaves the user in a challenging, resource-intensive and time-consuming position.
Digitalisation World's Philip Alsop interviews Kevin Brown, Senior Vice President of Innovation and Chief Technology Officer for the €3.7 billion IT Division at Schneider Electric, about how the edge data centre market is developing, with particular emphasis on the importance of the local edge – the interface between an enterprise and its customers.
DW talks to Jason Yates – Technical Services Manager, Riello, looking at the practicalities of the Smart Grid and how data centres have a role to play – just as soon as data centre managers become comfortable with the idea!
The recent SVC Awards dinner was a great opportunity to recognise success within the IT industry, with a special focus on both storage networking and the Cloud. A drinks reception, kindly sponsored by Touchdown Public Relations, was followed by an excellent three-course dinner, before celebrated comedian, Zoe Lyons (currently on her Entry Level Human tour) entertained the 200+ guests with some excellently-researched IT jokes, alongside a torrent of observational humour that was the perfect foil for the awards presentations which followed. Here we feature the awards winners.
Full details of winning entries can be found on the SVC Awards website: www.svcawards.com
Award presented by: Peter Davies, Sales Manager DW portfolio at Angel Business Communications
Runner up: Curvature / Carl Christensen
WINNER: Scale Computing /Genting Casinos
Award presented by: Phil Alsop, Editor of the DW stable of products at Angel Business Communications
Runner up: Altaro / Beeks Financial Cloud
WINNER: Excelero / teuto.net
Sponsored by: NAVISITE
Award presented by: Aaron Aldrich, Director Client Success
Runner up: RISK IQ / Standard Bank Group
Award presented by: Peter Davies
Runner up: Cleverbridge / SmartBear
WINNER: NODE 4 / Forest Holidays
SPONSORED BY: VOLTA DATA CENTRES
Award presented by: Steve Youell, Account Director
Runner up: Qudini supporting NatWest
WINNER: Six Degrees supporting AdvantiGas
Award presented by: Phil Alsop
Runner up: Continuity Software
WINNER: ANUTA NETWORKS
Sponsored by: NGD Systems
Award presented by: Jos Keulers – EMEA Sales Director
Runner up: CloudBerry
WINNER: ZERTO
Sponsored by: Touchdown PR
Award presented by: James Carter, CEO
Runner up: METCloud
WINNER: Barracuda
Award presented by: Peter Davies
Runner up: Exagrid
WINNER: Schneider Electric
Sponsored by: TechData
Award presented by: ADAM WILCOX, UK Cloud Business Manager
Runner up: CloudBerry
Sponsored by: TOUCHDOWN PR
Award presented by: JAMES CARTER, CEO
Runner up: Pure Storage
WINNER: NGD Systems
Sponsored by: CTERA
Award presented by: Terry Schoen, Strategic Account Manager, UK&I
Runner up: Open E
WINNER: VIRTUAL INSTRUMENTS
Award presented by: Phil Alsop
Runner up: RA Information Systems
WINNER: QUDINI
Award presented by: Peter Davies
Runner up: Premium Soft Cybertech
WINNER: 6Point6 Cloud Gateway
Sponsored by: Sovereign Business Systems
Award presented by: Joanna Sedley-Burke, MD
Runner up: Mobliciti
WINNER: PARK PLACE TECHNOLOGIES
Sponsored by: TechData
Award presented by: Adam Wilcox, UK Cloud Business Manager
Runner up: Pragma
WINNER: PURE STORAGE
Award presented by: Phil Alsop
Runner up: Impartner
WINNER: EACS
Award presented by: Peter Davies
Runner up: Kaseya
WINNER: OPEN GEAR
Sponsored by: Spreckley
Award presented by: Richard Merrin, Managing Director
Runner up: Claranet
WINNER: HYVE MANAGED HOSTING
Award presented by: Phil Alsop
Runner up: Volta Data Centres
WINNER: Sovereign Business Integration Group
Sponsored by Spreckley
Award presented by: Richard Merrin, MD
Runner up: NetApp
Award presented by: Phil Alsop
Runner up: Bitdefender
Sponsored by: Schneider Electric
Award presented by: Paul Lowbridge, Business Development Manager
Runner up: Cloudian
Award Presented by: Pete Davies
Angel Business Communications has announced the categories and entry criteria for the 2019 Datacentre Solutions Awards (DCS Awards). The DCS Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena, and are updated each year to reflect this fast-moving industry. The Awards recognise the achievements of vendors and their business partners alike, and this year encompass a wider range of project, facilities and information technology award categories, together with two individual categories, designed to address all the main areas of the datacentre market in Europe.
The DCS Awards categories provide a comprehensive range of options for organisations involved in the IT industry to participate, so you are encouraged to get your nominations made as soon as possible for the categories where you think you have achieved something outstanding or where you have a product that stands out from the rest, to be in with a chance to win one of the coveted crystal trophies.
The editorial staff at Angel Business Communications will validate entries and announce the final short list to be forwarded for voting by the readership of the Digitalisation World stable of publications during April. The winners will be announced at a gala evening on 16th May at London’s Grange St Paul’s Hotel.
The 2019 DCS Awards feature 26 categories across four groups. The Project Awards categories are open to end use implementations and services that have been available before 31st December 2018. The Innovation Awards categories are open to products and solutions that have been available and shipping in EMEA between 1st January and 31st December 2018. The Company nominees must have been present in the EMEA market prior to 1st June 2018. Individuals must have been employed in the EMEA region prior to 31st December 2018.
Nomination is free of charge and all entrants can submit up to four supporting documents to enhance the submission. The deadline for entries is 1st March 2019.
Please visit www.dcsawards.com for rules and entry criteria for each of the following categories:
DCS PROJECT AWARDS
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Cloud Project of the Year
Managed Services Project of the Year
GDPR compliance Project of the Year
DCS INNOVATION AWARDS
Data Centre Facilities Innovation Awards
Data Centre Power Innovation of the Year
Data Centre PDU Innovation of the Year
Data Centre Cooling Innovation of the Year
Data Centre Intelligent Automation and Management Innovation of the Year
Data Centre Safety, Security & Fire Suppression Innovation of the Year
Data Centre Physical Connectivity Innovation of the Year
Data Centre ICT Innovation Awards
Data Centre ICT Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre ICT Networking Product of the Year
Data Centre ICT Automation Innovation of the Year
Open Source Innovation of the Year
Data Centre Managed Services Innovation of the Year
DCS Company Awards
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Data Centre ICT Systems Vendor of the Year
Excellence in Data Centre Services Award
DCS Individual Awards
Data Centre Manager of the Year
Data Centre Engineer of the Year
Nomination Deadline : 1st March 2019
As we look forward to another year of potentially great change and uncertainty, the managed services industry can take some comfort from its apparent resilience as a model. All the current thinking and forecasting around 2019 in the IT industry suggests a continuing pressure on customers who know they need to become more productive, but are not sure where to spend their IT investment and when. Managed services is uniquely placed to deliver IT solutions in an effective way.
Yet, if anything, the pressures on customers are rising. In a rising market, many stay profitable enough not to have to think about changing infrastructure, and IT mistakes and wrong decisions matter less; a year of great change, however, may eventually find out those poor deals and false savings. Hence the hesitant IT customer confidence, especially in areas which have less history behind them, such as AI and IoT. The thinking is that it is better to hold on to existing technologies which are at least better understood through use.
At the same time, however, senior management and those with a strategic view are asking IT to do more with less; squeezed resources will not deliver change, and IT departments themselves may have become more siloed in the last year.
Working with scarce resources should be something that the managed services providers well understand; their industry is all about scale and efficient delivery; they should be able to talk to senior management in customers about the inherent effectiveness of their IT model and how this can be shared with the customer and the customer’s customer.
Yet the promise of technological delivery of riches has not always lived up to the expectations, and research has shown that a half-hearted or lightweight approach to innovation does not yield such good results. The recent Capgemini Research Institute’s “Upskilling your people for the age of the machine” study found that automation has not improved productivity because it has not been a part of the full digital transformation of business processes.
Companies do need to do their homework in terms of change management, in many cases. The need to adopt technology that links to and supports the implementation of a strategy should be a no-brainer. But this is not about new technology, it’s about the basics of change management and communication from leaders, which have been discussed for decades.
The managed services industry may need to encourage new thinking along these lines even more in 2019. It should be pushing at an open door: Equinix research shows that almost three quarters (71%) of organisations are likely to move more of their business functions to the cloud in coming years. They may still need answers that address their security fears, however, and they need help managing that change process in their businesses.
It will be those managed services companies with expertise in particular vertical markets or with a particularly strong customer relationship who do best in this. They are able to talk in meaningful ways about real solutions and engage with realistic customer expectations. They will also hopefully not be afraid to say when a transformation is not going far enough and what real gains could be made by further moves.
For this to work properly they need a clear understanding of the nature of change management in customer organisations, coupled with the required changes in working practices that deliver a safe and secure environment.
The agenda for the 2019 European Managed Services and Hosting Summit, in Amsterdam on 23 May, aims to reflect these new pressures and build the skills of the managed services industry in addressing the wider issues of engagement with customers at a strategic level. Experts from all parts of the industry, plus thought leaders with ideas from other businesses and organisations will share experiences and help identify the trends in a rapidly-changing market.
DW talks to Chris Ashworth, CIO of parcel delivery company, Hermes.
1. Does Hermes have an overall digitalisation and/or business transformation strategy?
Yes, our transformation strategy is continually evolving. As a business we never stand still and there is lots to do. Much in the same way that we are required to constantly grow our fleet of vehicles and expand the capacity of our distribution hubs, we must also take advantage of new and exciting innovations to meet the demands of the increasingly tech-savvy consumer. In many ways the development of new technology acts as a catalyst for change, dictates our priorities and alters how we label ourselves – are we still regarded as a parcel delivery firm or a technology solutions business? I think the latter.
2. If so, what are the main objectives?
To support the growth of the UK operation and to satisfy increased retailer and customer demand for richer products, delivery information in real time and a wider range of services and solutions for both retailers and consumers.
3. And what are the company's main objectives around modernising IT usage?
Historically, Hermes lacked an over-arching IT strategy to support the growth of the UK operation. This year, we launched our Hermes Technology Transformation programme aimed at re-platforming the UK business on a scalable, real-time infrastructure. Our tech investment was designed to improve our services by migrating IT systems to strategic hosting platforms. The transformation provided an opportunity for us to rationalise the technology and reduce our footprint in the data centre. Using alternative cloud hosting providers to the data centre was our preferred option.
4. In more detail, please can you tell us any plans you might have around AI and intelligent automation in general?
This is definitely an important part of our roadmap but it’s about using it in the right places, rather than just getting excited about technology. As an industry we are extremely good at delivering parcels despite the low margins, but things get complicated when things go wrong. Good customer service is a strong USP for us and automation can help with the simple stuff. For example, we are trialling a ‘self-service’ approach where the first six questions of any query can be automated. This saves us three minutes per query, reducing cost while delivering a much faster service.
We were also the first parcel company in the UK to fully integrate our end-to-end tracking solution with Amazon’s Echo smart speakers. Consumers can use voice commands via Alexa-enabled devices, such as Amazon Echo or Echo Dot, to hear updates on where their parcel is. This unique functionality is part of our ongoing commitment to providing retailers with an increasingly innovative portfolio of services and solutions, plus enhancing the overall delivery experience for the UK’s growing number of online shoppers. Other features that will be delivered soon include a ParcelShop finder, which determines the location of the nearest Hermes ParcelShop; the ability to set a designated safe place or choose to have your parcel delivered to a neighbour when not at home; and an enhanced returns process. Consumers asking Hermes to return their order via the Alexa Skill will be emailed a QR code and PIN that allows them to print a returns label at their nearest ParcelShop.
The integration was engineered at our Innovation Lab in the centre of Leeds, which is specifically tasked with developing a range of progressive products that we hope will change the delivery landscape.
Exploring new and exciting tech is an important part of our remit and as digital assistants like Siri, Cortana and Alexa have been developed to drive increased convenience, our research tells us that there will be a future shift away from screen interactions in favour of conversational interface technology.
5. And how does Hermes view (and use) the Cloud and/or managed services?
Last year, we successfully re-platformed and tuned those under-performing elements of the IT infrastructure and migrated from the traditionally architected data centre to a new data centre in the cloud via AWS (Amazon Web Services). All infrastructure was transformed into a Cloud hosting platform. We also automated scaling of production workloads to ensure appropriate resources are provisioned to suit the demand. We introduced consistent workflows for our IT Service to manage incidents, requests, problems and changes and introduced IT Service KPIs, Service Level Agreements and a self-service knowledgebase.
This has resulted in a stable cloud infrastructure hosting platform to support existing and new UK IT solutions during times of intense activity, such as Black Friday, as well as a cost-effective outsourced IT hosting framework and managed service. We have also introduced a self-service portal for employees to view and log incidents, which also enables Service Management to efficiently manage and measure problems. This improves response and closure times through the devolution of control to the cloud.
6. Presumably, mobile IT is a major focus for Hermes?
Yes, our customer service strategy is omni-channel because it’s really all about convenience for the end customer. We now have a Hermes mobile app that provides real time parcel tracking and SMS messaging, plus the option to divert your parcel to another address or a safe place. Customers can also choose to save delivery preferences, receive enhanced notifications and view safe place photos.
7. Autonomous vehicles and drones - how do you view these in terms of their potential into the future?
We have been part of a testing programme for the use of self-driving delivery robots in London, in partnership with Starship Technologies. It follows on from a project which has seen Hermes Germany and Starship Technologies test parcel delivery by robot in the Ottensen, Volksdorf and Grindel suburbs of Hamburg. We used the testing period in the UK to better understand how the robots could enhance our ability to offer an increased range of on-demand solutions in future. We believe that self-driving robots could offer Hermes greater scheduling and tracking capabilities, and that they’re a viable alternative to drones, which are difficult to deploy in built-up areas due to strict aviation laws and can only carry small, light packages.
Cloud Gateway’s platform sits between the telcos and the Cloud Service Providers to help end users leverage the speed and agility of digital transformation and, ultimately, true digital business – something that was highlighted by the company’s recent win at the SVC Awards.
DW talks to Interxion’s Marseille Managing Director Fabrice Coquio.
Please can you give us a bit of background as to the reasons behind the decision to open a data centre in Marseilles?
Several of our most important customers wishing to distribute content to the emerging markets of Africa, Middle East and Asia have decided to settle in Marseille, and it did not take us long to measure the city’s potential as a global content and distribution hub.
More specifically, can you tell us something about the importance of Marseille in terms of its geographical location?
Over the last twenty years, Marseille has become France’s second digital hub, as well as the Mediterranean’s main interconnection point, on which all data exchange routes converge: subsea cables linking Europe to Africa, the Middle East and Asia – through which 90% of telecom and IP traffic transits – as well as underground cables and satellites. Today, this strategic position allows all operators to access 4.5 billion users efficiently.
Presumably the data centre has potential as an international peering point?
Interxion’s campus participates in the growth of Marseille as a global-scale, internationally oriented hub: by providing the right IT infrastructure to support their expansion, it allows organisations that want to reach all the markets connected to Marseille to do so. In Marseille, you have direct access not only to France-IX but also to foreign Internet Exchanges: NetIX, DE-CIX, NL-ix, etc. We forecast an increase in the number of peering players, and traffic capacity will grow as well. We will support this growth, which is helping to develop a content hub that already hosts major CDNs.
In terms of the data centre building itself, it makes use of a WWII U-boat base?!
Amazing, isn’t it? MRS1 and MRS2 will soon be joined by MRS3, a brand-new data centre built in a former WWII submarine base. Destroying or rehabilitating the building would have cost far too much money for any organization, but a modular data centre can take full advantage of its structure and its 5.5-meter-thick concrete roof. In total, MRS3 will offer 7,500 square meters of equipable space.
And how would you characterise the data centre facilities in terms of the infrastructure available – power, connectivity, high density etc.?
The campus brings together a unique mix of both national and international customers from the cloud, telecommunications and digital media sectors, as well as companies enjoying an interconnection with local fiber optic networks, Internet exchanges, CDNs and content platforms. In Marseille, we can cater to the most demanding high-density power needs without compromising efficiency (99.999% availability SLA, 2N or N+1 configurations for all critical systems, etc.) and sustainability: we commit to 100% sustainable energy and have exceptional PUE measures. Moreover, for our MRS2 data centre, we are working on leveraging ground water cooling from an underground river.
And how does this location and facility infrastructure compare to alternative data centres that customers might want to be leveraging to address the Middle East and Africa market?
If you want a very differentiating criterion, it is simple: 13 submarine cable systems currently delivering 152Tbps of potential capacity into Marseille, and soon more, with at least 4 new projects with RFS dates prior to 2020, that will see this potential capacity double to over 300Tbps. Combined with over 30 backhaul providers connecting Marseille to key European cities, Marseille is the only city in the Mediterranean to propose such an interconnection hub.
You anticipate that the Marseille location is poised for massive growth in terms of the content creation potential, with the explosion in Middle East and African mobile content demands?
Our role in the IT industry allows us to observe market developments, and we can predict that global demand for content will continue to grow in emerging markets with the massive use of smartphones. In Africa, mobile video consumption doubled between 2015 and 2016. In the Middle East, 70% of people consume video through their phones at least once a week, and Asia is seeing strong demand for gaming and sports-related content. Marseille is well located to serve these markets, delivering data centre infrastructure needed to support content distribution, along with the local public authorities that aim to support the development of Marseille as a digital hub. From Marseille, you can directly connect to 43 countries and approximately 4.5 billion people in EMEA and APAC.
Similarly, distribution has significant potential?
Distribution is key, and Marseille’s strategic position is a very important asset, as it allows all operators to spread their content between the European capitals of the digital sector – Frankfurt, London, Amsterdam and Paris – and the continents that are most in demand for it: Africa, the Middle East and Asia.
Over the last 20 years, when the first cables were installed, Marseille has transformed itself. Originally a transit city, when only a few racks were sufficient to ensure the successful liaison between subsea and underground cables, it has become a content city and concentrates today on much bigger platforms. In Marseille, the current deployment challenges directly concern the GAFA and other major players of the global economy.
This paradigm shift is excellent news for the city, as it implies that companies’ needs for data hosting, exchange and distribution will increase significantly in the years to come. It is precisely to address these new needs that players such as Interxion are investing to boost the city’s data centre capabilities. The objective: to provide the city with the layer of infrastructure needed to support the multiplication of data exchange at a global level, and more specifically towards Africa, the Middle East and Asia.
So, the industries Interxion is looking to attract would include broadcasting, gaming, adtech and social media?
That is absolutely correct! And actually, it is already the case. After the arrival of connectivity providers and major cloud platforms in Marseille, those industries quickly measured the potential of the city.
Any other industries or applications well-suited to the Marseille facility?
Basically, all players in the digital economy wishing to either deliver content or reach in any way to the emerging markets of Africa, the Middle East and Asia will need Marseille to grow! Our clients come from all sectors.
More generally, can you tell us how the Marseille data centre fits into Interxion’s overall data centre portfolio?
Marseille is one of two locations we have in France. Its strategic importance derives from the submarine cables that connect directly into our campus giving direct access to extensive European and international markets. Marseille also sits along a major connectivity route to the FLAP (Frankfurt, London, Amsterdam and Paris) cities, the traditional data powerhouse locations in Europe, in which Interxion has data centres. Interxion operates 50 facilities in 13 European countries, but what makes Marseille unique is its location at the crossroads of Europe, Africa, the Middle East and Asia.
And, without giving any secrets away (!), can you share any of Interxion’s plans for further expansion and/or data centre facility developments (technology refreshes, for example)?
Our innovation effort is constant! This ranges from security technologies to the continuous improvement of the energy efficiency of our facilities. Without revealing any secrets, I can tell you that submarine cable projects linking Marseille to other hubs of the Mediterranean and beyond, to all continents (including America), are in progress. We will of course support this development. Indeed, the Port of Marseille Fos is working on a solution which allows subsea cable operators and consortium members to benefit from a pre-equipped and pre-authorised landing station in the Marseille harbour area.
Moving back to the Marseille data centre, what has been the reception to date from potential customers?
There, for once, I would reveal secrets if I gave you too precise information. I can tell you one thing: the occupancy rate is very satisfying, and the demand is going up, so much that we are going to build the last two phases of MRS2 in one go, the first one of which was inaugurated last May.
Indeed, are you able to tell us about any customers who have already committed and/or are about to commit to the facility?
I'm sure many of your fellow journalists have been interested in this and have gleaned some information, but for my part I cannot reveal our customers’ names. We host more than 130 connectivity providers, 4 Internet exchange points, 8 local fiber optic networks, 11 CDNs and content platforms and the main cloud platforms.
DW talks to Travis Irons, Director of Engineering, at Server Technology, about the company’s most recent PDU development.
1) How big an issue is the constant adding/changing configurations of hardware devices in data centre racks in terms of overall data centre management?
The typical lifespan of IT equipment is three to five years, whereas the power infrastructure lasts much longer. Since the power requirements of the next cycle of IT equipment are unknown, it is difficult to plan the power systems properly.
2) Presumably, with the increasing demand for flexible, scalable and dynamic IT workloads, this problem is only going to become worse over time?
There appears to be no shortage of innovation in the hardware and software of the compute, storage and network gear coming on the market. This will exacerbate the challenge of providing the right power systems to run the equipment.
3) And a constantly changing IT infrastructure demands similar flexibility from the data centre facilities – not least the PDUs?
The dynamic evolution of IT equipment certainly drives the need for a flexible power infrastructure from end to end, and certainly at the last link in the chain, the rack PDU.
4) Up until now, how has the data centre manager been able to best cope with the matching of IT and power/PDU requirements – just buying more PDUs?!
We see our customers taking multiple approaches to adapt the PDU to the changing landscape in the rack. They often just add additional PDUs over time, but this requires more feeds or drops from the upstream power panel and increases the number of PDU SKUs they have to manage. Another approach is to over-provision the PDU in the hope that it will always stay ahead of the equipment, but this requires a larger initial capital outlay and a larger PDU, which occupies valuable space in the rack.
5) And allowing for C13 and C19 outlets and C14 and C20 plus variations is an expensive headache?
Predicting the mix of C13 and C19 outlets that will be required in future rack configurations is an ongoing challenge for the DC manager. A common approach is to buy a PDU with a large number of C19s and use adapter cords to ‘step down’ from the C19 to a C13. The downside of this approach is that the adapter cords are heavy gauge, expensive, difficult to route, and can block airflow. Often those cords are banned from use by internal or local agency policies.
6) Server Technology has launched the HDOT Cx – how does this help to address the problems previously discussed?
HDOT Cx is a C13 and C19 in one combination outlet. No adapters are required to connect a C14 or C20 cord. This allows the DC manager to have confidence that the PDU will have the right mix of outlets available as the equipment changes. The manager can keep one SKU in stock that will function across the compute, storage and network racks, as well as provide unmatched versatility in the lab environment.
7) Put simply, why has no one thought of doing this before now?!
Good question! Our customers have been asking for it as long as I have been in the industry, which is going on 12 years. Although simple in concept, employing a robust design that meets all US and International safety certifications was not a trivial undertaking.
8) Advantages of this new PDU development include lengthening the PDU lifespan?
From a standpoint of obsolescence due to not having the correct mix of outlet types, the Cx definitely increases the lifespan of the PDU.
9) And reducing the number of PDU SKUs required?
Reducing the number of PDU SKUs is one of the top requests we get. It reduces the number of spares that need to be kept in stock and the training required for internal personnel, and simplifies re-ordering.
10) And being able to support any hardware configuration – giving peace of mind?
Certainly knowing you’ve got a flexible PDU in your stock takes away a major concern.
11) And how has the market received the HDOT Cx to date – what has been the reaction?
The response has been overwhelmingly positive. It’s really an obvious advantage to have a combination outlet. Significant improvements that are mechanical in nature don’t come along too often. The Cx outlet represents one of those large strides.
12) Can you give us a little bit of background as to the culture within Server Technology that seems to foster regular PDU innovations?
Server Technology was founded by an Electrical Engineer, and has historically fostered an engineering emphasis. We have an aggressive patent filing stance, and encourage and reward innovation throughout the organization.
13) And do you think that PDUs are sufficiently valued within the data centre environment, or still somewhat neglected in favour of other priorities?
There are both camps in the industry, and everything in between. Many data centres simply want power in, power out, while others have made the PDU a central access point for capacity planning, environmental monitoring, and security.
14) What else can we expect from Server Technology over the next year or so in terms of PDU and/or wider data centre innovations?
Additional products will be released next year with HDOT Cx outlets. We also have a number of significant innovations in the works. Stay tuned!
Server Technology's multiple award-winning High Density Outlet Technology (HDOT) has been improved with the Cx outlet. The HDOT Cx PDU welcomes change as data centre equipment is replaced. The Cx outlet is a UL-tested hybrid of the C13 and C19 outlets, accommodating both C14 and C20 plugs. This innovative design reduces the complexity of the selection process while lowering end costs.
New ways of working powered by technology mean that your organisation must respond quickly to customer expectations to compete and remain relevant. Likewise, your service desk must be nimble, to support shifting organisational priorities, capitalise on new opportunities, and satisfy growing end user demands.
By Matt Klassen, Vice President, Product Marketing, Cherwell Software.
This “need for speed” runs counter to traditional approaches to ITSM that emphasise risk mitigation and control over speed and agility. However, you can strike the right balance between going faster and maintaining control.
1) Apply Agile & Lean Practices to ITIL
Look at ITIL as a set of guidelines, not hard rules. Integrate agile and lean principles into your approach and apply those aspects of ITIL that best serve your enterprise. For example, get your team thinking not in terms of process, but in terms of outcomes and ensure your KPIs emphasise these outcomes. Similarly, minimise work in progress such as open tickets, change and service requests, and so on. If queue volume is too great, it’s time to identify root causes and adjust processes accordingly. To aid this you can begin with out-of-the-box (OOTB) ITIL processes to speed implementations or seek a “low-code” ITSM platform that enables you to easily customise these processes to suit your unique needs.
Agility needn’t be overwhelming: start simple with a short list of the most impactful ITIL processes to optimise and standardise. Then rinse and repeat, taking on just a couple of processes at a time.
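To make the "minimise work in progress" advice concrete, here is a minimal sketch in Python of a WIP check that flags queues needing a root-cause review. The ticket structure and the 50-ticket threshold are invented for illustration; a real service desk would pull these figures from its ITSM platform's reporting API.

```python
from collections import Counter

def wip_report(tickets, max_open=50):
    """Summarise work in progress and flag queues that need a root-cause review.

    `tickets` is a list of dicts with 'queue' and 'status' keys; both the
    structure and the max_open threshold are illustrative assumptions.
    """
    open_by_queue = Counter(
        t["queue"] for t in tickets if t["status"] == "open"
    )
    flagged = [q for q, n in open_by_queue.items() if n > max_open]
    return {"open_by_queue": dict(open_by_queue), "review_needed": flagged}

# A sample month: 60 open incidents, 10 open change requests, 5 closed.
sample = (
    [{"queue": "incidents", "status": "open"}] * 60
    + [{"queue": "changes", "status": "open"}] * 10
    + [{"queue": "incidents", "status": "closed"}] * 5
)
report = wip_report(sample)
print(report["review_needed"])  # ['incidents']
```

The point is the outcome-oriented KPI: the report surfaces where queue volume signals a broken process, rather than just counting tickets.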
2) Automate Routine, Simple Tasks and Processes
Develop an automation roadmap, but use discretion. Cherry pick simple, but frequently used processes first, learn from your mistakes, and get more sophisticated as you go. Remember that automation will not fix a process that’s broken to begin with; ensure your processes are bulletproof before you automate them. Enlist the help of your service desk staff to seek out manual, routine tasks that take up a lot of time and can be automated with minimal effort.
By establishing performance metrics, you can ensure automations are, in fact, saving time and improving outcomes. Remember to make use of your ITSM solution’s pre-built automations and ensure your ITSM platform enables service desk staff to quickly create and configure custom automations. Similarly take advantage of OOTB integrations with endpoints like Active Directory for a password reset, or AWS or Azure for a cloud VM to save time and use RESTful APIs to extend and integrate workflows beyond the service desk.
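As a sketch of the kind of pre-built automation described above, the snippet below assembles the request body an automated password-reset ticket might POST to an ITSM platform's REST API. The endpoint-free payload schema and field names here are invented for illustration; real platforms each define their own API, documented by the vendor.

```python
import json

def build_password_reset_ticket(user_id: str, requested_by: str) -> dict:
    """Assemble the JSON body for an automated password-reset request.

    The field names below are hypothetical; consult your ITSM
    platform's API reference for its actual schema.
    """
    return {
        "type": "service_request",
        "category": "account/password_reset",
        "subject": f"Automated password reset for {user_id}",
        "requested_for": user_id,
        "requested_by": requested_by,
        "automation": True,  # flag so KPIs can separate automated tickets
    }

# In production this payload would be POSTed to the platform's REST
# endpoint (e.g. via urllib.request or the vendor's SDK).
payload = build_password_reset_ticket("jsmith", "selfservice-portal")
print(json.dumps(payload, indent=2))
```

Flagging automated tickets at creation time is what makes the performance metrics mentioned above possible: you can then compare resolution times for automated versus manual requests.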
3) Implement a Low-Code ITSM Platform
If your current ITSM platform requires a team of developers to support basic requirements, it may be time to look for an alternative. Your service management solution should be built on a low-code platform that enables service desk administrators to quickly configure portals, dashboards, forms, workflows, and tasks as well as quickly add new features, extend capabilities, and integrate with third-party products. Minimal training is required for service desk admins to be effective with a low-code service management solution. Also, make sure to engage with your vendor’s extended user community to learn and share best practices.
It's important to estimate the costs, in both time and money, of the development resources needed to administer, configure, customise, and perform ongoing maintenance on your current ITSM tool. Where configuration is needed, leverage a low-code service management platform that provides a WYSIWYG (What You See Is What You Get) editor and generates configuration metadata separate from the code base. Alongside this, tap into the vendor’s library of pre-built integrations and extensions, as well as RESTful APIs, to enable rapid integration with other products.
4) Establish a Self-Service Portal Employees Want to Use
Give your employees the self-sufficiency they want by taking a “shift left” approach to the service desk. Establish a best-in-class self-service portal that enables users to quickly resolve their own problems and check the status of tickets and service requests. UX experts either from within or outside your organisation can optimise the user experience and ensure the portal is as user-friendly as possible. Alongside, provide plenty of end user training. It’s also important to continue to offer email, phone, chat, walk-up, and desk-side service. As portal utilisation increases, reliance on these channels should naturally diminish.
This can be complemented by a comprehensive knowledge base with intelligent search and curated knowledge. Take it a step further by allowing users to comment, vote on the usefulness of entries, and suggest new ones. You can also offer a peer-to-peer discussion board which service desk personnel can moderate and contribute to. Finally, it’s important to establish meaningful KPIs such as reduction in call volume, ticket volume, and customer satisfaction to measure the effectiveness of the portal and identify areas for improvement.
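One of the portal KPIs suggested above, reduction in assisted-channel volume, is often tracked as a deflection rate. A minimal sketch follows; note there is no single standard definition of this metric, so the formula here is one common, illustrative choice.

```python
def deflection_rate(portal_resolutions: int, total_contacts: int) -> float:
    """Share of issues resolved via self-service rather than assisted channels.

    total_contacts counts portal resolutions plus phone/email/chat/walk-up
    tickets for the same period; the definition is illustrative.
    """
    if total_contacts == 0:
        return 0.0
    return portal_resolutions / total_contacts

# Example month: 300 issues resolved in the portal out of 1,000 total contacts.
rate = deflection_rate(300, 1000)
print(f"{rate:.0%}")  # 30%
```

Tracking this rate month over month shows whether reliance on phone and email channels is actually diminishing as portal utilisation grows.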
5) Extend ITSM practices and systems beyond IT
In situations where line-of-business (LOB) processes are immature, leverage your ITSM platform, along with its companion HR Case Management, Facilities, or Security solutions. Centralising IT and non-IT request handling and workflow automation on a single platform enables a better user experience by presenting services in a common portal and facilitates improved service delivery on the back end by creating a single system of record.
Approach and collaborate with stakeholders in other departments to identify processes and tasks that can be automated with your ITSM platform. Form a multidisciplinary task force to collaborate on complex LOB workflows, especially those spanning multiple departments. Again, start with simple non-IT requests and workflows to prove the viability of utilising ITSM systems and principles outside of IT and establish KPIs within each department to measure efficiency and effectiveness of service delivery.
Break free and gain speed and agility
IT teams are being slowed down by rigid processes, a lack of automation, and tools that are difficult to customise and extend. You must break free from what’s no longer working to gain the speed and agility needed to lead your business through its digital transformation journey. Perhaps the single most important thing you can do to accelerate service delivery is to ensure your service management platform is propelling you forward, not holding you back. When you’re supported by a lightweight, flexible service management platform, you’re more efficient, more agile, and able to go faster than you’ve ever gone before.
The potential for AI to improve business performance and competitiveness demands a different approach to managing data lifecycle.
By Kurt Kuckein, director of marketing at DDN.
Analytics, AI and Machine Learning continue to make extensive inroads into data-oriented industries presenting significant opportunities for Enterprises and research organisations. However, the potential for AI to improve business performance and competitiveness demands a different approach to managing the data lifecycle.
Companies that plan to fully embrace these innovative technologies should consider how the data at the heart of their competitive differentiation will require extensive scaling. While current workloads may fit easily into in-node storage, companies that do not consider what happens when Deep Learning is ready to be applied to larger data sets will be left behind.
Here are five key areas to consider carefully when creating and developing an AI data platform that ensures better answers, faster time to value, and the capability to scale rapidly.
Saturate Your AI Platform
Given organisations’ heavy investment in GPU-based compute systems, the data platform must be capable of keeping Machine Learning systems saturated across throughput, IOPS, and latency, to eliminate the risk of this resource being underutilised.
Saturation level I/O means cutting out application wait times. As far as the storage system is concerned, this requires different, appropriate responses depending upon the application behavior: GPU-enabled in-memory databases will have lower start-up times when quickly populated from the data warehousing area. GPU-accelerated analytics demand large thread counts, each with low-latency access to small pieces of data. Image-based deep learning for classification, object detection and segmentation benefit from high streaming bandwidth, random access, and fast memory mapped calls. In a similar vein, recurrent networks for text/speech analysis also benefit from high performance random small file access.
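The "fast memory mapped calls" mentioned above can be sketched with Python's standard mmap module. This toy example (the sample file and offsets are invented) shows random small reads into a file without streaming it wholesale, which is the access pattern image-based deep learning workloads lean on:

```python
import mmap
import os
import tempfile

# Create a small sample file to stand in for a training data set.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(bytes(range(256)) * 16)  # 4 KiB of sample bytes

# Memory-map the file: the OS pages data in on demand, so random
# small reads avoid reading the whole file sequentially.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        chunk = mm[1000:1010]  # random access straight to byte offset 1000
        print(len(chunk))

os.remove(path)
```

At data-centre scale the same pattern is served by the storage platform rather than a local file, but the latency-sensitive, small-random-read shape of the I/O is the same.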
Build Massive Ingest Capability
Ingest for storage systems means write performance: coping with large concurrent streams from distributed sources at huge scale. Successful AI implementations extract more value from data, but they also gather ever more data as a reflection of that success. Systems should deliver balanced I/O, performing writes just as fast as reads, along with advanced parallel data placement and protection. That way, data sources developed to augment and improve acquisition can be satisfied at any level while concurrently serving Machine Learning compute platforms.
Flexible and Fast Access to Data
Flexibility for AI means addressing data manoeuvrability. As AI-enabled data centres move from initial prototyping and testing towards production and scale, a flexible data platform should provide the means to scale independently in multiple areas: performance, capacity, ingest capability, flash-to-HDD ratio and responsiveness for data scientists. Such flexibility also implies expansion of a namespace without disruption, eliminating data copies and complexity during growth phases. Flexibility for organisations entering AI also suggests good performance regardless of the choice of data formats.
Scale Simply and Economically
Scalability is measurable in terms of not only performance, but also manageability and economics. A successful AI implementation can start with a few terabytes of data and ramp to petabytes. While flash should always be the media for live AI training data, it can become economically unfeasible to hold hundreds of terabytes or petabytes of data all on flash. Alternate hybrid models can suffer limitations around data management and data movement. Loosely coupled architectures that combine all-flash arrays with separate HDD-based data lakes present complicated environments for managing hot data efficiently.
Integration and data movement techniques are key here. Start small with a flash deployment and then choose your scaling strategy according to demand: either scale with flash only, or combine it with deeply integrated HDD pools, ensuring data moves transparently and natively at scale.
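One way to picture the scaling strategy above is as a tiering policy: keep hot training data on flash and demote colder data to an HDD pool. A minimal sketch follows; the seven-day threshold, tier names, and data set fields are illustrative assumptions, not any vendor's actual policy (real platforms move data transparently within a single namespace, whereas this only shows the placement decision itself):

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    days_since_access: int
    size_tb: float

def assign_tier(ds: DataSet, hot_days: int = 7) -> str:
    """Place recently accessed data on flash; demote the rest to HDD."""
    return "flash" if ds.days_since_access <= hot_days else "hdd"

datasets = [
    DataSet("training-current", 1, 40.0),
    DataSet("training-archive", 90, 400.0),
]
placement = {ds.name: assign_tier(ds) for ds in datasets}
print(placement)  # {'training-current': 'flash', 'training-archive': 'hdd'}
```

The economics follow directly: only the small, hot working set pays flash prices, while the petabyte-scale archive sits on HDD.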
Understanding the Whole Environment
Since delivering performance to the application is what matters, not just how fast the storage can push out data, integration and support services must span the whole environment, delivering faster results. This underscores the importance of partnering with a provider that really understands every aspect of the environment—from containers, networks, and applications all the way to file systems and flash. Expert platform tuning to your workflow and growth direction is paramount to removing barriers in your path to value from AI, and enabling the extraction of more insights from data.
The new AI data centre must be optimised to extract maximum value from data; that is, ingesting, storing, and transforming data and then feeding that data through hyper-intensive analytics workflows. This requires a data platform that isn’t constrained by protocol or file system limitations, or a solution that ends up being excessively costly at scale. Any AI data platform provider chosen to help accelerate analytics and Machine Learning must have deep domain expertise in dealing with data sets and I/O that well exceed the capabilities of standard solutions, and have the tools readily at hand to create tightly integrated solutions at scale.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 1.
Expert advice: invest in tech now for a smoother 2019
As the start of another year rolls around again, IT teams in companies of every size are busy identifying the trends 2019 will bring that will ultimately influence their buying decisions. All of this will be geared towards supplying their customers with the latest products and services. But as each tries to get ahead of the competition, it is vital to establish what the key technologies are going to be for the coming year, and the priorities that business leaders should focus on. With this in mind, nine industry experts provide their opinions on what they predict the biggest developments will be in 2019.
Bob Davis, CMO at Plutora, explains how automation will be a game changer for the coming year, as companies work to become more efficient. He says: “In the coming year, the quality of software will be as much about what machine learning and AI can accomplish as anything else. In the past, delivery processes have been designed to be lean and reduce or eliminate waste but to me, that’s an outdated, glass-half-empty way of viewing the process. This year, if we want to fully leverage these two technologies, we need to understand that the opposite of waste is value and take the glass-half-full view that becoming more efficient means increasing value, rather than reducing waste.
“As we further integrate and take advantage of machine learning and AI, however, we’ll realise that improving value requires predictive analytics. Predictive analytics allow simulations of the delivery pipeline based on parameters and options available so you don’t have to thrash the organisation to find the path to improvement. You’ll be able to improve virtually, learn lessons through simulations and, when ready, implement new releases that you can be convinced will work.”
"Business leaders often talk of the ‘time to value’ of investment in new projects,” comments Neil Barton, CTO at WhereScape, who also believes automation will be a big factor next year. “One of the trends we’ve seen through 2018 is how this has spilled over into how organisations are approaching how they leverage data. 2018 was the year that this spotlight was on automation and its associated efficiency benefits for IT teams.
“Further eliminating the manual, repetitive elements within the development process will be even more of a priority in 2019. As the speed of business continues to increase, organisations must shorten the time it takes to unlock the value of data. Automation does just that, and additionally enables companies to redeploy valuable developer resources away from routine data infrastructure management processes and onto value-add tasks, such as delivering new solutions and services that will better guide the business.”
In the HR world, the importance of implementing a more digital experience will be vital, reveals Liam Butler, Area Vice President at SumTotal, a Skillsoft company. He comments that: “Next year, HR departments will pay more attention to their candidates’ digital application experience. This will include making their application processes mobile friendly, in response to the increasing number of candidates who job hunt via this medium. To facilitate these changes, talent management systems will continue to grow in popularity, as HR departments find that they make managing the entire digital recruitment process much easier. This focus will also move past the recruitment stage to encompass an employee’s entire career, with training, retention, benefits and appraisals all seeing a digital shakeup.”
Meanwhile in the wider technology industry, Gijsbert Janssen van Doorn, Technology Evangelist at Zerto, predicts that convergence will be the latest trend. He explains: “2019 will see a new meaning come to the word ‘convergence’. In 2018 we saw hardware vendors trying to converge the software layer into their product offering. However, all they’ve really created is a new era of vendor lock-in – a hyper-lock-in in many ways. In 2019 organisations will rethink what converged solutions mean. As IT professionals increasingly look for out-of-the-box ready solutions to simplify operations, we’ll see technology vendors work together to bring more vendor-agnostic, comprehensive converged systems to market.”
“More enterprises will adopt a multi-cloud strategy to avoid vendor lock-in and enhance their business flexibility,” states Jon Toor, CMO at Cloudian. “However, a multi-cloud approach raises new management challenges that users will need to address to ensure a positive experience.”
Naaman Hart, Managed Services Security Engineer at Digital Guardian, discusses the lucrative business of email compromise. He says: “Companies will traditionally target their employees with security awareness training about not opening suspicious emails or links but how many train their staff to refuse a direct command from senior staff? The art of ‘Whaling’ aims to compromise a senior staff member’s email and then use that to instruct junior staff to make payments to bank accounts of fraudsters.
“It’s time that businesses thought about applying security to their business practices as IT security tools are not infallible against human behaviour. Malicious individuals are abusing the fact that junior staff implicitly trust their seniors and that they fear for their jobs if they do not act quickly as instructed. You must put in place processes and beliefs that when out-of-the-ordinary requests come through, they should be questioned.”
Lindsay Notwell, Cradlepoint’s Senior Vice President of 5G Strategy and Global Carrier Operations, explains how companies are likely to start migrating to wireless 4G to make 5G integration seamless. Lindsay believes: “As a prelude to 5G, just about every major carrier is busy upgrading their current LTE infrastructure to prepare for the more widespread rollout of 5G and – in the process – are providing gigabit-class LTE services. With more urban 5G services deploying in 2019 and gigabit-class LTE available on a nationwide level, I’m predicting that 2019 will be a breakout year when enterprise and public sector customers will start to ‘Cut the Cord’ and migrate their WANs to wireless 4G LTE connections that deliver game-changing levels of performance and integrate seamlessly with 5G when and where it’s available.”
Garry McCracken, VP Technology at WinMagic, states: “I predict that 2019 will be the year when we see the first serious hypervisor attack. Hypervisors and other cloud service provider-controlled infrastructure need to be hardened to give security-conscious enterprises the confidence that they remain in control of their data. One technical problem for Full Drive Encryption is that when running on a virtual machine with keys in virtual memory, it’s possible that a hypervisor could take a snapshot of the virtual machine’s memory and make a copy of the disk encryption keys. The solution is to use hardware-based memory encryption that not even a compromised hypervisor could access in plain text.”
For legacy technology, Mat Clothier, CEO and Founder at Cloudhouse, reveals how there are more tools available to solve the problem of migrating applications. He says, “As the Q4 2018 Forrester New Wave for Container Platform Software Suites reveals, external integration and application life-cycle management tools are becoming increasingly key to help build, connect, scale and operate container-based apps in public cloud environments, and this only looks set to continue into the new year. As more and more enterprises move away from legacy systems and towards a cloud-based future, they will realise that migrating traditional apps is challenging; there is a growing need for the tools that offer portability that may not be possible otherwise. 2019 will inevitably see more enterprise workloads move to Azure, AWS and Citrix, but what remains to be seen is how many businesses will realise the importance of tools that manage the delivery of these applications across a global network of data centres.”
It is apparent that there are many technology trends that we expect to see in the next year with challenges from cybersecurity, to maintaining customer satisfaction, becoming bigger and more widespread than we’ve seen before. But if companies make the right investments early in the New Year, they will have the best solutions and support in place to tackle whatever 2019 throws at them.
IP House has used Schneider Electric’s EcoStruxure™ for Data Centers Architecture, including its industry-leading modular UPS systems and Cloud-based Data Centre Infrastructure Management software (DCIM), to provide customers with services of the utmost reliability at competitive prices.
IP House is a specialist supplier of high-performance colocation data centre services. Located at the edge of London’s financial district, next to the Docklands, it is an independent and privately owned company, founded in 2016, whose main facility was previously owned by a telecommunications provider.
Since its foundation, IP House has refurbished the facility as a Cloud and carrier-neutral data centre with superfast Internet speeds ranging from 10Gb/sec to 100Gb/sec, making it the perfect high-performance solution for customers dependent on connectivity to business-critical applications.
As a colocation provider, whose target customers are in the finance, gaming and managed-services industries, IP House has to adhere to the highest standards of uptime, security and resiliency. It has designed and built the communications, power and cooling infrastructure at its London facility to provide a competitive and customer-focused colocation service.
“Uptime, security and service availability are our main priorities,” explained Vinny Vaghani, Operations & Commercial Manager, IP House. “With today’s businesses dependent on instant connectivity to critical applications hosted in the facility and a demand for technology that’s always available, our clients cannot tolerate any downtime or disruption.”
To offer high-end services while remaining price-competitive, the company follows a “pay as you grow” deployment strategy, scaling up to add infrastructure to satisfy customer demand. The 15,000 sq ft facility has two technical Data Suites, the first of which is now operational, with plans for the second suite to be brought online in the future.
Understanding IP House’s need for a scalable, resilient and secure data centre solution, APC’s Elite Partner Comtec Power worked with the company to deliver a retrofit and upgrade project at their London facility.
Working with Comtec enabled IP House to design and build the data centre to meet the requirements for Tier III classification. “Their insight and experience played a critical part in the company’s decision making,” said Vinny Vaghani, “ensuring that our future clients would continue to benefit from the most innovative technology solutions available in the market.”
“Comtec’s expertise in design and build was unparalleled and invaluable,” he continued. “Their familiarity with data centre technology and proactive approach to customer support during the initial stages were one of the key reasons they were selected to work with us.”
“As the newest colocation market entrant we wanted to acquire and leverage both the experience and expertise of the partners supplying our critical infrastructure. We’ve always had a great relationship and single point of contact via Ian Gregg, Data Centre Specialist at Comtec Power,” Vinny continued. “This meant that at every stage of the project we were able to communicate our requirements with someone who understood our objectives from the customer’s perspective. That level of service was outstanding and is something they continue to provide us to this day.”
The data centre contains industry-leading infrastructure solutions including Schneider Electric’s Symmetra UPS, deployed in an N+1 configuration with 4x 500kVA to deliver rapidly scalable and resilient power options. In addition, IP House selected Schneider Electric’s EcoStruxure IT DMaaS platform for 24/7 advanced monitoring, management and data-driven insights.
“From the design stages through to the deployment of the first pod, our focus has always been built around three core principles,” said Vinny Vaghani. “The first was to partner with industry-leading vendors, which ensures we deploy the most reliable and innovative technology solutions. The second was to gain accreditations that would reflect our commitments to uptime, security and resiliency. The third was to develop a reputation for customer service excellence.”
The company has deployed industry-leading hardware and software products from Schneider Electric’s EcoStruxure™ for Data Centers architecture, which comprises three levels: connected products, edge control software, and cloud-based apps/analytics/services.
IP House’s infrastructure deployment strategy is based around modular growth, utilising components of Schneider Electric’s EcoStruxure™ for Data Centers Architecture to scale up and add capacity as customer requirements demand it.
Suite A comprises 192 racks, all of which are Schneider Electric’s 48U NetShelter SX enclosures. These are larger than the standard industry size and allow the company to offer higher power densities and more scalability or space for IT equipment to customers. High-density racks maximise the use of available space in the data centre and also facilitate the use of containment solutions for efficient cooling operations, both of which result in energy efficiency and ultimately, reduced operating costs.
“Our market research revealed that demand among our target customers was for higher rack densities,” said Vinny Vaghani. “IP House can provide peak power densities of 6kW/rack without making any modifications to our core infrastructure. We can maintain an average power density of 4kW per rack and by making changes to the local circuit breakers we can offer even higher power densities of up to 14kW/rack.”
When planning to upgrade the existing data centre, one of IP House’s main priorities was to prepare the facility for scalable, modular growth. “We needed a resilient power distribution system that would give us the flexibility to add capacity to Suite A and Suite B in two phases,” Vaghani continued. “As part of the retrofit, we deployed two Schneider Electric Symmetra PX UPS systems, which provide us with the perfect mix of modularity and scalability, while protecting our customers from downtime.”
Symmetra PX allows backup power to be added in smaller increments of 25kW, up to the maximum 2MW capacity that is envisaged for Data Suite A. Once operational, Suite B will have its own independent plant room with a separate utility feed and an additional 320 racks, bringing the total to 512.
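As a rough back-of-the-envelope check of the figures quoted above (192 racks averaging 4kW, 25kW UPS increments, a 2MW ceiling for Suite A), a short sketch; the calculation is illustrative only and ignores cooling and other overheads:

```python
RACKS_SUITE_A = 192
AVG_KW_PER_RACK = 4
UPS_INCREMENT_KW = 25
MAX_UPS_KW = 2000  # 2MW ceiling envisaged for Data Suite A

# Average IT load for a fully populated Suite A
avg_load_kw = RACKS_SUITE_A * AVG_KW_PER_RACK  # 768 kW

# 25kW modules needed to cover that load (ceiling division)
increments_for_avg = -(-avg_load_kw // UPS_INCREMENT_KW)  # 31 modules

# Modules at full 2MW build-out
max_increments = MAX_UPS_KW // UPS_INCREMENT_KW  # 80 modules

print(avg_load_kw, increments_for_avg, max_increments)  # 768 31 80
```

The gap between 768kW of average load and the 2MW ceiling is what leaves headroom for the higher peak densities mentioned above and for "pay as you grow" expansion.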
The power distribution system in Data Suite A also comprises Connected Products including transformers, rack PDUs (power distribution units) and switchgear. The PDUs were chosen in part because they provide intelligent monitoring, outlet switching, tool-free mounting and the ability to integrate with NetBotz sensors.
“The original facility had only a single power distribution system,” continued Vinny Vaghani. “As part of the retrofit, we deployed a truly diverse power feed via the addition of a secondary switchgear that mirrors the first. The new switchgear was supplied by Schneider Electric and installed with the expert partnership and guidance of its Elite Partner Comtec Power.”
Edge Control ensures advanced management
Another key aspect of the facility is Schneider Electric’s StruxureWare for Data Centers™, Data Centre Infrastructure Management (DCIM) software, which enables the infrastructure equipment to be monitored and managed from a “single pane of glass”, while providing seamless integration with the company’s building-management software.
IP House makes use of several Schneider Electric software modules, including Data Center Expert, which monitors alarm notifications and manages the physical infrastructure, reporting on availability, temperature, humidity and airflow to provide instant notification of any faults.
In addition, the company uses the StruxureWare for Data Centers, Data Center Operations Colocation Module to map the facility’s physical infrastructure, plan future capacity, manage the network and map airflows to ensure sufficient cooling. Other benefits include the ability to model the effects of changes in cooling equipment on the data centre environment and anticipate their potential impact on customers.
The final component of IP House’s software solution is Schneider Electric’s EcoStruxure IT, the industry’s first vendor-neutral Data Centre Management as a Service (DMaaS) architecture, purpose-built for hybrid IT and data centre environments. It provides global visibility from anywhere, at any time, on any device, and delivers data-driven insights into critical IT assets, which helps customers mitigate risk and reduce downtime.
Thanks to its deployment of industry-leading power and UPS (uninterruptible power supply) systems, high-density racks and DCIM (data centre infrastructure management) software from Schneider Electric, IP House’s new data centre is a highly resilient facility that meets the requirements of a Tier III data centre, as defined by the Uptime Institute. It is therefore ideally equipped to meet the high-availability service demands of the company’s key customers.
“When choosing the software platform we went through a thorough selection process”, said Vinny Vaghani. “We particularly liked the vendor neutrality aspect that Schneider Electric’s EcoStruxure IT system offers. We can integrate it with any equipment that communicates over standard TCP/IP protocols and manage the entire data centre using the same solution. Previously, the monitoring systems were completely separated, integrating them all into one platform gives us enhanced reporting and the ability to make data-driven decisions.”
In addition, the EcoStruxure IT Mobile Insights application allows updates to be sent to mobile devices at remote locations, allowing the facility to be monitored 24/7 by both staff and field service engineers. Comtec Power remains linked to the IP House facility via the application and has the ability to monitor the same alerts in real time, providing expert support in the face of downtime or an unplanned event.
IP House maintains that the reputation of Schneider Electric’s EcoStruxure architecture, coupled with the control and management features of the EcoStruxure IT platform provides a vital advantage when addressing the high-availability market that is its principal target.
“Our customers are always interested to know how we’re using Big Data analytics and advanced software to control and monitor the environment”, said Vinny Vaghani. “They know that our infrastructure is not only of the highest quality, but is proactively monitored and response is immediate in the event of an unplanned alert. None of which would be possible without utilising Schneider Electric’s EcoStruxure for Data Centers architecture. It’s been an important factor in our decision making and one that we believe will benefit both our customers and business, long-term.”
It’s all about the application, as never before.
By Kara Sprague, SVP and General Manager, ADC, F5 Networks.
For millennia, the world economy grew incrementally and slowly based on population growth and increasing trade across distances. Conversion of raw materials to finished goods was achieved through manual labour and processes – often through trial and error that could take centuries. After nearly 5,000 years of recorded history, the Industrial Revolution changed everything. Businesses that deployed factories and machinery, otherwise known as physical capital, achieved significant leaps in productivity and output, and the world got a little smaller.
By the 1900s, the explosion of service-based industries meant that for many businesses the measure of corporate performance shifted to people, or human capital. Today, we’re seeing another major leap forward as more and more organisations embark on a digital transformation of their business, and increasingly the value of the modern enterprise resides in its applications and data.
It’s not difficult to argue that applications are, in fact, the most important asset of the digital enterprise. Consider a couple of examples: Facebook has no material capital expenses beyond $15 billion a year in computing infrastructure and just under 30,000 employees – but has an application portfolio valued at more than half a trillion dollars. That’s larger than the GDP of all but 26 countries in the world. Netflix has no material capital expenses and roughly 5,500 employees – with an application portfolio valued at $175 billion. To put that in context, Disney, among the world’s most iconic brands, operator of massive theme parks, and owner of a vast media empire, is valued lower, at $160 billion.
Prior to F5, I spent 15 years at McKinsey preaching to clients that an organisation’s most important asset is its people. No longer. We’re in the era of Application Capital.
Mid-size organisations generally have several hundred applications in their portfolio. Some large banking customers I’ve met with have upwards of 10,000. And yet most companies I ask have only an approximate sense of the number of applications in their portfolio. Ask them who owns those applications, where they are running, and whether they are under threat, and the answers get a little fuzzy. No doubt these same companies have invested heavily in the management of their physical and human capital, but unfortunately the same cannot yet be said for their applications.
The implications of this are staggering. Security, consistent policies, compliance, performance, analytics, and monitoring (to name a few) are each complex, expensive, and competitive issues for an increasing number of companies with apps spread across a dizzying combination of data centres, co-los, and public clouds.
In our latest customer research, nearly nine in ten companies reported using multiple clouds already, with 56% saying their cloud decisions are now made on a per-application basis. If you extrapolate, you can imagine hundreds of permutations in which companies’ apps have widely varying levels of support.
The implications leave many valuable corporate assets poorly supervised at best, and vulnerable to malicious attack at worst. Given the enterprise value attributable to applications, it won’t be long, in my opinion, before more companies finally start devoting a commensurate level of energy and resources to managing and monitoring their application portfolios.
Principles for an Application World
So how do we get there? When I talk to customers, I often focus on three core areas – principles to help them maximise the value of their application capital. These principles are neither unique nor inconsistent with how businesses managed capital in both industrial and services-based economies. The challenge is applying them, in the digital age, to the development and management of our applications. How do we take the rigour and discipline that have been ingrained in us around the management of physical and human capital and apply it to this new context?
1. Focus your developers on differentiation. In the realm of physical capital, manufacturers deploy that capital to create global supply chains with a precision and efficiency that becomes an asset for their business. In the digital age, this means the right people should be doing the right work to accelerate time to market for applications and maximise investments. Developers should be empowered to focus on delivering business value, unencumbered by concerns about availability, stability, security, or compliance.
2. Choose the best infrastructure for the application. Just as different occupations feature specialised work environments – consider chefs, architects, athletes – applications too have a natural habitat. One size does not fit all – work with the vendors and partners that best meet unique needs. Vendor lock-in is a thing of the past. Open architectures, APIs, and commoditisation of infrastructure now mean that customers have the power to choose an almost infinite mix of solutions, services, and even features to build, deploy, and support their application infrastructure.
3. Use consistent application services across your portfolio. Industrial companies managed regular maintenance of machinery and ensured the physical security of their factories. Services businesses invest heavily in HR and corporate wellness programs in order to retain critical talent. Applications need services, too. However, services that support the delivery and security of applications can often add complexity and are applied inconsistently or not at all. Application services should be low-friction, easy to obtain, and efficient to manage across increasingly complex and sprawling application portfolios.
Application capital is already the primary driver of differentiation and value creation for modern enterprises. Yet few are devoting the appropriate level of energy and resources to managing and monitoring their application portfolios.
The effective management of this application capital is what will propel the next Amazon, Google, Microsoft, or Netflix. Not how many physical assets they deploy in their infrastructure, warehouses, or showrooms; nor even how many employees they amass. The real competitive differentiator will be found in their applications. Applications will drive the fastest growing revenue streams, creating significant shareholder value. Applications will drive community value as the most sustainable shared service. And most importantly, applications will attract the best talent, representing the most interesting and rewarding of work.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 2.
The arrival of the information age had an unprecedented effect on consumers. With more power than ever, customers are aware of how important their custom is to businesses – and their loyalty is harder to win.
By Kevin Murray, Managing Director at Greenlight Commerce.
Recently we’ve seen a new evolution within this revolution – speed. Now, customers not only have access to more information than ever before, they’re also able to access it much faster.
And that’s only making consumer impatience grow. Impatience is an innate human trait, but retailers often don’t understand just how unwilling consumers are to wait for service. Combine that with growing expectations for the quality of customer experiences, and it’s time many retailers re-evaluated the technology they’re using to deliver these experiences.
Balancing risk and reward
It’s difficult to know exactly what the next ‘big thing’ in consumer expectations will be – and that means retailers need a solution stack that’s flexible enough to adapt to the changing customer experience landscape quickly, without requiring big-bang technology changes.
Taking on new technology always involves an element of risk. But in a diversifying commerce environment where new entrants are specialising in the niches that larger retailers often struggle to fill, the risk of losing out to your competition is bigger.
Luckily, the leading commerce platform and solution vendors are investing heavily in best-in-class capabilities to be able to offer a core set of commerce experience features.
Making the most of mobile
Without a doubt, mobile has had one of the most significant impacts on the ways consumers shop in recent years. The buying journey is changing, with customers doing more research online before they make purchases, whether they buy instore or stay online.
Mobile has given your customers the ability to look up product information, opening times, price comparisons and more near-instantaneously – so optimising your site for mobile is a must to ensure they’re getting high-speed access via your infrastructure.
As mobile devices add new capabilities with every new generation, there’s a growing number of device-native features you can take advantage of to connect with your customers. Rather than simply making an app that replicates the functionality of your website – as many retailers have – consider how you can improve the customer experience by using things like push notifications, location data and the device’s built-in camera.
A great example of this is Asos’ visual search tool, Style Match, that recently became available to all Asos customers. This tool uses artificial intelligence to allow customers to take a picture or upload an image and then use the image to search through the site’s product lines to find similar suggestions. With 80% of Asos’ UK traffic coming from mobile devices, this is a great way to ensure customers are able to get to the items they want on a mobile device. It is all about that keyword: “discovery”.
The power of voice
It’s not just visual search that’s expanding – voice search is also increasing in popularity and sophistication as more and more consumers bring virtual assistants into their homes and workplaces.
However, the convenience for consumers brings a new challenge for retailers: how do you drive brand awareness without relying on the traditional banners, pop-ups and links of visual search? What do you need to do to get your products in front of people?
Part of the answer lies in upgrading the way your customers communicate with you through these assistants. Most voice-controlled apps currently rely on users saying pre-programmed trigger phrases – but to make the most of the voice search opportunity, we’ll need to see a shift toward AI-enabled apps that have natural language processing capabilities built in to help the process become more conversational.
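The difference between rigid trigger phrases and a more conversational approach can be sketched in a few lines. The phrases, intents and keyword-overlap heuristic below are invented for illustration and stand in for a real natural-language-processing pipeline:

```python
# Rigid approach: the utterance must match a pre-programmed trigger phrase exactly.
TRIGGERS = {
    "order my usual": "reorder",
    "track my order": "track",
}

def rigid_match(utterance):
    """Return an intent only on an exact trigger-phrase match, else None."""
    return TRIGGERS.get(utterance.lower().strip())

# More conversational approach: score keyword overlap instead of exact text.
# A production system would use a trained NLU model; this heuristic is a stand-in.
INTENT_KEYWORDS = {
    "reorder": {"order", "usual", "again", "reorder"},
    "track": {"track", "where", "delivery", "parcel"},
}

def fuzzy_match(utterance):
    """Pick the intent whose keyword set overlaps the utterance most, else None."""
    words = set(utterance.lower().split())
    best = max(INTENT_KEYWORDS, key=lambda intent: len(words & INTENT_KEYWORDS[intent]))
    return best if words & INTENT_KEYWORDS[best] else None

print(rigid_match("could you order my usual again"))  # None: no exact trigger
print(fuzzy_match("could you order my usual again"))  # reorder
```

The rigid matcher fails as soon as the customer rephrases; the looser matcher still recovers the intent, which is the property a genuinely conversational assistant needs at much greater sophistication.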
With that in place, there’s more opportunity for a back-and-forth between your customers and your app, where the virtual assistant can intelligently identify and verbally share targeted offers with your customer base.
How you manage your data is as important as the data itself
Fundamentally, all of this is enabled by data – huge amounts of it. But often businesses simply have more data than they know what to do with, which means they’re not seeing the true value.
With a data management platform (DMP), you can not only consolidate your customer data from across the business into a single place, but bring in extra supplemental information from third-party sources. This gives you the power to analyse and produce reports about your customers’ behaviour for patterns to inform your segmentation – and use data about their attributes and preferences to help create highly targeted campaigns. One fashion retailer we worked with achieved an ROI of 469% thanks to hyper-focused targeting and audience analysis.
Consumers – particularly younger generations – are increasingly willing to part with their data in return for more personalised experiences. But as privacy concerns grow, too, and with regulations such as GDPR keeping how companies use customer data in check, you’ll need to be absolutely transparent with your customers about what information you’re using, how you’re using it, and who it’ll be shared with.
And even more importantly, you need to deliver on your promise. If you’re collecting customer data in exchange for more targeted, personalised experiences, your customers need to feel like they’re getting what they’re ‘paying’ for with their information. With a DMP, you can use third-party data and services to give them dynamic, programmatic advertisements that are based on their interests and behaviours.
Get the basics right first
To make all this happen, most retailers will need to make significant strategic choices about their IT infrastructure and the technology they use to deliver their customer experiences.
Though it’s understandable to feel pressure to implement the latest ‘big thing’ in customer experience, especially if your competitor has it, the most important thing is to be future-looking in how you choose the technologies – and the providers and vendors – you work with. Consumer expectations change constantly, so it’s vital you choose infrastructure that’s flexible, scalable, and can introduce new capabilities alongside your existing technology – with no need to rip and replace.
Your priorities should be managing and analysing your data effectively, identifying where new technologies can boost experience, and testing the business areas where automation and AI can give you a competitive edge. After that, you can continue to build your strategy towards boosting revenue, guaranteeing customer loyalty, and delivering new, innovative products and services.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 3.
By Cloudera, the platform for machine learning and data analytics, optimised for the cloud.
2018 was a year of fundamental change, underpinned by the impact of data management and analytics – and, of course, GDPR. Commentary from Cloudera looks back over the year and at six key tech areas for 2019…
We have created IoT solutions and edge networks that are far too gullible and trusting.
In 2019, security has to be the number one focus for organisations to ensure the safety and efficacy of edge devices and networks. There are too many vulnerabilities and gaps in the security posture of IoT devices, so organisations must take a proactive approach to securing them – using data, metadata and device logs, and treating IoT devices like any other network device, to predict and respond accurately to the available signals.
Context is the next major frontier in IoT.
IoT has created more data islands; we are now starting to bridge them, but we don’t yet speak the same collective language. What’s needed is the ability to acquire data from disparate systems and align it on common ontologies so we can trust and utilise the data. The clockspeed for decision-making is increasing, while information expands exponentially beneath our feet. As AI and machine learning evolve, allowing these capabilities to organise the data, attribute it from a universe of observations and produce auto-didactic insights will give us opportunities not yet imagined. Lineage – “what did we know and when did we know it” – will be a key capability that allows organisations to use data optimally.
Next year we will see further use cases of IoT in home spaces and smart cities, and more industrial use cases in automation or autonomous vehicles. Technology ecosystems are forming, so a holistic view of data from cloud to edge is important to maximise the benefit of the data used across these ecosystems. Cloudera can help by making sense of the community, providing the value-add, and protecting the data and the consumer by assuring governance and security.
The fines associated with non-compliance with the regulation are significant: up to 4% of annual global turnover or €20 million, whichever is greater. Even if an organisation would not flinch at those kinds of numbers, the impact on its reputation would certainly get it to care about complying. GDPR is to a large extent about showing your customers and employees that you are careful with their data, that it is used for the right purpose and that, ultimately, they have control. With that control also comes trust. And any organisation cares about that.
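The “whichever is greater” rule is simple to state precisely; a minimal sketch in Python, with an invented turnover figure for illustration:

```python
# GDPR maximum administrative fine for the most serious infringements:
# the greater of 4% of annual global turnover or a flat EUR 20 million.
FLAT_CAP_EUR = 20_000_000

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Return the upper bound on the fine for a given turnover (EUR)."""
    return max(0.04 * annual_global_turnover_eur, FLAT_CAP_EUR)

# A company turning over EUR 1bn faces a cap of EUR 40m, while a
# smaller firm remains exposed to the full EUR 20m flat cap.
```

The flat cap is what makes the regime bite for smaller organisations: below €500 million of turnover, the 4% figure is irrelevant and the €20 million ceiling applies.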
Companies made personally accountable for how they treat privacy and personal data
Yes, it is true companies are now personally accountable for GDPR regulated data across the complete data flow, including partners that they need to exchange information with. That also makes it crucial for smaller organisations, suppliers to larger ones, to achieve and maintain their GDPR compliance as it becomes a competitive differentiator.
Big fines ahead for big tech and companies that fail to have adequate security
Data security is only part of GDPR, though an important one: organisations now have the obligation to notify the regulator within 72 hours of a data breach being discovered. The complete post-breach process, including informing the affected individuals, is now well defined, and following it is part of the compliance requirements. Under Article 25, data protection must be implemented by design and by default; security forms a natural part of that.
The effects on cloud computing
The effect of cloud computing is that organisations must ensure that the cloud services they use are compliant, and that the systems and applications they design do not expose them to risk.
Do you think GDPR will expand and become a global regulation in 2019?
Expanding GDPR to become a global regulation is certainly a potential further evolution. Already, Cloudera customers and organisations that would not be subject to the regulation are taking it as the starting point for their own personal data privacy and protection guidelines. For it to become a truly global regulation, though, it will first need to prove its worth in its current form; once that has progressed well and proven workable, the chances of it influencing international practice will be much higher.
What organisations subject to GDPR are already realising, though, is that May 2018 was not the end of the process – quite the opposite. Achieving compliance is one thing, but living compliance at scale is quite another. What’s more, GDPR in its current form may – and likely will – evolve further. Organisations that build a solid foundation now will be able to maintain compliance with less effort as the regulation evolves.
80% of all healthcare data is unstructured, and for clinicians – doctors, nurses and surgeons – an incredible amount of insight remains hidden away in troves of clinical notes, EHR data, medical images and omics data that could help them understand patient records better. We are currently witnessing a revolution in the healthcare industry, in which there is now an opportunity to employ a new model of improved, personalised, evidence- and data-driven clinical care.
To arrive at quality data, organisations are spending significant effort on data integration, visualisation and deployment activities, but they are increasingly constrained by budgetary limits and scarce data science resources.
Healthcare faces many challenges, including developing, deploying, and integrating machine learning and artificial intelligence (AI) into clinical workflow and care delivery. Having the proper infrastructure with the required storage and processing capacity will be expected in order to efficiently design, train, execute, and deploy machine learning and AI solutions. Cloudera is committed to supporting healthcare professionals and institutions to support the next stage of patient care and medical development.
4. Data Warehousing
Data Management goes Cloud?
As more organizations continue to see the economic and ease-of-use advantages of the cloud, we expect to see increased investment in data management in the cloud. Data analytics use cases continue to lead the charge, especially for self-service, transient and short-term workloads. With new technologies that allow us to share data context (security models, metadata, source and transformation definitions), we will see many organizations grow their use of cloud data management as more than just a complement to on-premises models, as well as moving to private and hybrid cloud deployments with greater confidence. New data types will continue to be required to satisfy business analytics, including social media and Internet of Things (IoT) data, driving the need for inexpensive, flexible storage best served by data management in the cloud. The cloud will also support new and emerging use cases such as exploration (iteratively performing ad-hoc queries into data sets to gain insights by discovering patterns) and machine learning without increasing IT resource demands, fueling further adoption.
5. Machine Learning
We are just at the beginning of the enterprise machine learning transformation. In 2019, we'll see a new step in maturity, as companies advance from PoCs to production capabilities.
Enterprise machine learning (ML) adoption will continue as businesses look to automate pattern detection, prediction and decision making to drive transformational efficiency improvement, competitive differentiation and growth. As early adopters advance from proof-of-concepts to production deployment of multiple use-cases, we’ll continue to see an emergence of technologies and best practices aimed at helping operationalize, scale and ultimately industrialize these capabilities to achieve full transformational value.
Infrastructure and tooling will continue to evolve around efforts to streamline and automate the process of building and deploying ML apps at enterprise scale. In particular, ML workload containerization and Kubernetes orchestration will provide organizations a direct path to efficiently building, deploying and scaling ML apps in public and private clouds. We’ll see continued growth in the automated machine learning (AutoML) tools ecosystem, as vendors capitalize on opportunities to speed-up time-consuming, repeatable chunks of the ML workflow, from data prep and feature engineering to model lifecycle management. Streamlining and scaling ML workflows from research to production will also drive new requirements for DevOps as well as corporate IT, Security and Compliance, as data science teams place increasing demands on infrastructure, continuous integration/continuous deployment (CI/CD) pipelines, cross-team collaboration capabilities, and corporate security and compliance to govern hundreds of ML models, not just one or five, deployed in production.
Beyond technology, we’ll see continuing demand for expert guidance and best practice approaches to scaling organizational strategy, skills and continuous learning in order to achieve the long term goal of embedding ML in every business product, process and service. Visionary adopters will seek to build an investment portfolio of differentiated ML capabilities and optimize their people, skills and technology capabilities to best support it. With our modern, open platform, enterprise data science tools, and expert guidance from Cloudera Fast Forward Labs, Cloudera is focused on accelerating our clients’ journey to industrialized AI.
As companies understand the value of cloud to their existing infrastructure and applications, choice will become increasingly important. The choice to have a mix of public cloud and on-prem as well as multi-cloud provides companies with the flexibility to choose a solution that best fits their needs. Any vendor that offers only one option and “locks in” a company will find that its customers are at a disadvantage. With this choice of deployment options, the need for a consistent framework that ensures security, governance, and metadata management will become even more important. This will simplify the development and deployment of applications, regardless of where data is stored and applications are run. This framework will also ensure that companies can use a variety of machine learning and analytic capabilities, working in concert with data from different sources into a single coherent picture, without the associated complexity.
These options are part of a larger move to a hybrid cloud model, which will have workloads and data running in private cloud and/or public cloud based on the needs of the company. Bursting, especially with large amounts of data, is time consuming and not an optimal use of hybrid cloud. Instead, specific use cases such as running transient workloads in the public cloud and persistent workloads in private cloud provide a “best of both worlds” deployment. The hybrid model is a challenge for public cloud as well as private cloud only vendors. To prepare, vendors are making acquisitions for this scenario, most recently the acquisition of Red Hat by IBM. Expect more acquisitions and mergers among vendors to broaden their product offerings for hybrid cloud deployments.
Months have passed since Devoxx 2018, but some of the conversations I had with the developers visiting our stand there still stick in my mind, probably because I found one or two of these talks so surprising.
By Jon Payne, database engineer, InterSystems.
One of the main topics for discussion was building cloud native applications. Of course, I had known that a build-your-own approach here was becoming popular, but I hadn’t appreciated the level at which it had permeated the entire development ecosystem.
Many of the developers I spoke to had a couple of things in common; they all came from organisations that owned huge volumes of data and they all needed to create a process involving both analytical and transactional workloads with that data using a variety of different technologies.
These analytical workflows were varied in nature. Many were finding that being able to run SQL, for example, wasn’t enough to solve the functional and non-functional requirements of queries in an adequate manner.
Consequently, a number of developers were building their own cloud native data management platform. What I found interesting was the number of different organisations feeling the need to do this, given the wide variety of well-known cloud native – and also on-premise – platforms out there in the SQL and NoSQL space. Yet they can find nothing on the market to suit their needs.
I find this remarkable because, while they may see it as the most cost-effective option to begin with, it is likely to turn out a much less economical option in the long term. What these developers are building is very specific to a particular problem, and as far as addressing their immediate challenge goes, it is likely to be an initial success. However, there is considerable risk inherent in this approach.
All will be well if those developers building the solution remain with their organisation. However, if they decide to leave – and let’s face it the competition for developers couldn’t be stronger – then their existing employer either has to offer them more money, or face a knowledge vacuum surrounding the platform, possibly having to bring in expensive consultants to the rescue.
The other issue is a matter of functionality. Once the organisation wants to do something extra with the platform they will need to set up a data pipeline and replicate it in another data store, reconstructing it along the way. Before they know where they are, they have the same data replicated in four or five different structures. Suddenly, what started out as a cost-effective platform developed for a particular purpose has become both expensive and complex.
Interestingly, this was one of the reasons several developers told me they are not going cloud native. This ramping up of cost and complexity is not easy to manage. Besides, if you consider Amazon S3, AWS or any of the other low-cost cloud platforms, they are not fast mechanisms. It might take 20 seconds to do a read and there is no performance guarantee. If a business is data-intensive then it does beg the question as to whether cloud native is the right route.
This is especially the case if an organisation has specific hosting requirements. For example, a healthcare company may need to be connected to N3 or HSCN. If so, the costs will rise dramatically, as the data can’t be accommodated in large-scale AWS racks but must be kept apart.
Of course, there is a plus side, especially if a developer makes use of all available services offered by the cloud provider. This can significantly reduce the time to build and deploy a solution – and ensures it is highly scalable. However, this does tie the organisation to a particular cloud provider as, if a solution is built, for example, in AWS, it can’t be moved. Then as transactional values increase, data volumes grow and complexity intensifies, the costs can increase again quite dramatically.
Traditionally in the database market we used to talk about ACID compliance. In the cloud world, because of the way the infrastructure works, providers are unable to offer this as we used to know it. Instead, the big providers have redefined the concept so they can comply. While in some cases this may not be important, it can bring a whole host of issues to the fore when building apps that are critically dependent on the consistency and quality of data, such as real-time clinical applications.
Yet despite all these drawbacks, developers are still building cloud native applications because they can’t find what they want on the market. This bodes well for solutions such as InterSystems’ IRIS Data Platform which, with its flexible architecture, can meet varied transactional and analytical workloads and interrogate data in a number of different modes – not just SQL, but object- and document-based too.
What could also make IRIS so valuable in these cases is its interoperability; in particular, its ability to integrate data and applications into seamless, real-time business processes. It can also cut through the complexity, collapsing the tech stack and managing multiple open source components, all in a highly cost-effective manner.
Perhaps I shouldn’t have been so surprised at the number of developers at Devoxx building their own cloud native applications. After all, they are rather a self-selecting band, given the nature of the event.
However, the most exciting aspect here is not that such a technically inquisitive and inventive group are doing it themselves, but that they are being forced to do so, despite the shortcomings of cloud native, by a gap in the market. All of which means that current market developments are certainly moving in the right direction.
Over the past few years, opinions on AI have swung back and forth from “far more dangerous than nukes,” in the words of Elon Musk, to the key to a better world, with very little middle ground. However, when it comes to online identity verification, the future will most likely be a blend of AI and human review—known as “augmented intelligence.”
By Philipp Pointner, Chief Product Officer at Jumio.
What are the benefits of AI?
Think of AI as a way to throw a bunch of data at a wall and have the algorithms make sense of the data. The technology will uncover underlying patterns that are often unrecognised by the human eye. In machine learning, there is the concept of supervised and unsupervised models. In supervised models, the algorithms find patterns (and develop predictive models) using both input data and output data.
In supervised learning, the machine already knows the output of the algorithm before it starts working on it. The algorithm is taught through a training data set that guides the machine, and the machine works out the steps from input to output. With unsupervised learning, the system does not have any concrete data sets, and the outcomes are also mostly unknown. This technique is useful when you’re not quite sure what to look for.
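To make the distinction concrete, here is a toy sketch in plain Python (no ML library; all data and values invented): the supervised model learns from labelled examples with known outputs, while the unsupervised routine groups unlabelled points purely by proximity.

```python
# Toy illustration of supervised vs unsupervised learning.
# Data is invented; real systems use far richer models and features.

# --- Supervised: nearest-centroid classifier trained on labelled data ---
# Each training example pairs an input value with a known output label.
training = [(1.0, "genuine"), (1.2, "genuine"), (8.0, "fraud"), (8.5, "fraud")]

def train_centroids(examples):
    """Learn one centroid (mean input value) per known label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(value, centroids):
    """Predict the label whose centroid is closest to the input."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# --- Unsupervised: split unlabelled points into two groups ---
def two_cluster(points):
    """Cut a sorted list at its largest gap: no labels, no known output."""
    points = sorted(points)
    gaps = [points[i + 1] - points[i] for i in range(len(points) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return points[:cut], points[cut:]

centroids = train_centroids(training)
prediction = classify(7.9, centroids)            # supervised: outputs known
low, high = two_cluster([1.1, 0.9, 8.2, 7.8])    # unsupervised: none known
```

The supervised classifier is guided by its labelled training set; the clustering routine discovers structure on its own, which is exactly why unsupervised techniques suit cases where you are not quite sure what to look for.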
Modern identity verification solutions use a combination of supervised and unsupervised machine learning to develop and refine models to perform three important functions: data extraction, fraud detection, and risk scoring. To expedite the learning curve and better train the ML models, leading identity verification companies are supplementing their ML with human review – making the algorithms smarter and faster, and improving verification accuracy and their ability to spot fake IDs.
Speed is one of the key benefits that comes with AI. AI removes the strain of repetitive tasks for humans, reducing the time it takes to recognise an ID document, verify that an ID document is genuine and spot deviations that may signal fraud. The better the algorithms, the faster the verification process and the simpler the user experience.
This in turn prevents customer frustration and helps businesses reduce their abandonment rates and increase new account conversions. Jumio’s AI models have already reduced the average customer transaction time by 7 percent in 2018 alone.
Another benefit of AI is accuracy, particularly with repeat tasks. When AI has a lot of data to leverage, it can create better models that have significantly higher predictive power. For example, in facial recognition, AI can implicitly learn and predict the normal distance between someone’s eyes. But the link between data volume and AI success rates cannot be stressed enough.
As Google's Research Director Peter Norvig succinctly puts it, when explaining why Google’s AI is so strong, "We don’t have better algorithms than anyone else. We just have more data.” At Jumio, we have the industry’s largest dataset of more than 130 million identity verifications, which gives the company a significant competitive advantage and head start in terms of developing, training and refining its AI/ML models. As with Google, building up so much data takes time – and the AI model continues to gain strength every day as hundreds of thousands of verifications are continuously added to the ML engine.
Making ID verification accuracy even stronger: AI plus humans
How do you expand upon the strengths of artificial intelligence? Augmented intelligence—the combination of AI and human review—remains the most powerful model available.
Humans can look at one or two situations and generalise, without needing the millions of terabytes of data that is required of AI. Fraudsters constantly change their approach, so it’s important to be able to adapt quickly. Once human review identifies a new technique from fraudsters, new algorithms can be developed to ensure this is automatically picked up by the machines in the future.
Despite what many experts’ grand claims about AI would have us believe, there are many cases where humans are still smarter and can spot nuances that machines just can’t see. For example, AI cannot detect if there is a font mismatch unless it is explicitly trained to look for that use case. Humans who are highly trained in ID verification develop a “sixth sense” that often tells them that a certain ID card is “fishy,” hence triggering a deeper exploration. Once the root cause analysis is complete, this new fraud mechanism can be taught to AI algorithms, which then do an amazing job of looking for a similar pattern on an ongoing basis.
The role of regulation
When it comes to GDPR, the human interaction is vital. GDPR mandates additional considerations when the outcome of that verification results in an “automatic refusal of an online credit application or e-recruiting practices.” Fully automated verification solutions that fail to give the data subject the right “to obtain human intervention on the part of the (data) controller, to express his or her point of view and to contest the decision” are not allowed under GDPR, according to GDPR Article 22(3).
In short, businesses have to be able to tell the person why they were denied or rejected. Machines struggle to clearly explain this and provide deeper rationale for the verification decision, which is where expert human review comes in.
The future of AI and humans
It’s not a case of AI versus humans, but of the two working together in perfect harmony. AI and machine learning need data (and lots of it) to develop and refine algorithms. The more data, the better the predictability of the algorithms, and who is best placed to feed them this data and ensure ongoing learning from the machines? Humans.
In time, we could reach a point where humans can simultaneously learn from machines. With this symbiosis of intelligence eventually flowing between humans and machines, there’s no doubt that AI and humans are smarter together.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 4.
James Wickes, CEO and co-founder of visual data experts Cloudview, has two predictions, one positive and one a major security concern:
On the positive side, I expect to see a growth in CPaaS - secure, cloud-based platforms which enable the intake, management and storage of data from multiple sources across multiple organisations. Once data is within a CPaaS platform, it can be integrated, analytics applied either in the cloud or at the edge, and results securely accessed by authorised users, again from any location. In my sector, visual data, there is tremendous potential to take data out of the silos in which it currently resides, combine it with other data and apply analytics in real time to create new applications. For example, you can enable security staff to identify issues more quickly if AI does the initial analysis and only presents them with unusual data, speeding up responses and making their jobs both less tedious and safer (e.g. they can see if tackling a problem requires more than one person). There are huge possibilities for ‘smart city’ applications in transport hubs, city centres, building security etc. We are already seeing early examples such as Vodafone Building Surveillance and Care Protect’s health and social care monitoring, based on Cloudview’s platform.
On the negative side, 2019 could be the year that we see a large-scale cyber attack on the UK’s critical national infrastructure. The proliferation of internet-connected devices in that infrastructure means there are more opportunities for cyber criminals to attack, and many of these devices are poorly secured, posing serious risks to individuals, businesses, utilities and ultimately national security. Experts have already identified that new smart energy meters, which the government wants installed in millions of homes, will leave householders vulnerable to cyber attacks. Cyber criminals could artificially inflate meter readings, making bills higher, but ultimately this could lead to a catastrophic attack on our electricity grid. The National Grid was put on alert in March 2018 by officials from the National Cyber Security Centre amid fears of a Russian cyber attack and given advice on how to boost its defences to prevent power cuts.
CCTV will continue to be a major concern in 2019 because it’s both widespread and has largely been ignored by organisations, perhaps because it’s often outside the remit of the IT department. The recent warning about a Chinese CCTV manufacturer is just the latest in a number of such alerts. Unless manufacturers embed better security into their connected devices, we will see large scale attacks on our national infrastructure become the norm. The ultimate extension of this is that we may see the government give citizens instructions on how to prepare for a massive cyber attack. Security cannot be confronted by the government alone, so we will need to come together as a society and understand that we all have security responsibilities. Changing default passwords on IoT devices, from baby monitors to CCTV cameras in homes and businesses, is one measure that everyone should be taking now.”
Jason Hart, CTO of Data Protection, and Gary Marsden, Cloud Security Solutions, at Gemalto have put together their biggest predictions for next year, including:
Looking Forward: IT Security Trends for 2019
By Andy Samsonoff, CEO of Invinsec.
Protecting an organisation from cyber crime is a relentless task, as both security solutions and means to attack continue to evolve. The repercussions of a security ‘incident’ can be costly, in terms of financial loss, data recovery and damage to reputation.
As we near the end of 2018, many of us will be looking ahead to 2019, identifying what developments may impact our personal and business security, and how we can best prepare for them. We have therefore drawn upon our extensive knowledge of IT security, to bring you our predictions for next year’s cyber security landscape.
Nature of attacks
Threats to security come in two distinct guises: deliberate and accidental. It may be tempting to only consider threats coming from ruthless cyber criminals with sophisticated software, but the reality is that much damage can be done via carelessness. A sloppy approach to securing hardware, or having predictable, shared passwords, makes organisations extremely vulnerable and easy to compromise.
That said, even the most robust security infrastructure can be subject to attack. The means and motives for committing cyber crime are evolving, as criminals find new ways to bypass or break into systems. Below are five key areas to consider:
This carries two inherent risks:
#2: Phishing attacks
The practice of phishing, often via email, will continue to be a problem. It is used to extract personal or company data, usernames or passwords to gain improper access, often through the insertion of malware via bad links or documents.
#3: APTs (Advanced Persistent Threats)
These are complex, multistage attacks designed to make an initial infection (often via social engineering), download more malicious code, move through and around your network, collect and extract data, and then silently leave your network(s).
#4: Cloud Application and Data Centre Attacks
The availability of faster and more reliable internet connections has allowed for the growth and expansion of cloud applications and cloud data centres. Every new application that moves to the cloud requires you to trust another vendor, their software and their security to protect your information.
The inherent risk is that anyone can access applications – as well as your data – from almost anywhere, as long as they have a user’s credentials. It becomes a bigger risk when those users connect to free or public wi-fi.
#5: Shadow IT Applications
We are going to see an increase in shadow IT applications being used, and over the next few years these applications are going to cause serious damage. Industry professionals sometimes refer to them as renegade applications: employees download non-corporate-approved (and potentially insecure) applications to the same devices used to access company data. Companies should consider whitelisting applications and restricting the ability to download new software.
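Whitelisting can be as simple as refusing anything not on an approved list. A minimal sketch (application names invented; real endpoint-management tools enforce this at the operating-system level):

```python
# Minimal application allowlist check. All application names here are
# invented for illustration; production tooling enforces this policy
# at the OS or MDM level rather than in a script like this.
APPROVED_APPS = {"outlook", "excel", "corporate-vpn"}

def is_allowed(app_name: str) -> bool:
    """Case-insensitive membership test against the approved list."""
    return app_name.lower() in APPROVED_APPS

def filter_installs(requested):
    """Partition requested installs into allowed and blocked lists."""
    allowed = [a for a in requested if is_allowed(a)]
    blocked = [a for a in requested if not is_allowed(a)]
    return allowed, blocked
```

The design choice is deliberate: a default-deny allowlist blocks unknown software automatically, whereas a blocklist only catches applications someone has already identified as dangerous.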
And one for 2020:
Predicted security trends for 2019/20 show that AI is poised to help forecast, classify and potentially block or mitigate cyber threats and attacks.
One idea fundamental to AI is machine learning, which over the past few years has been incorporated into many security applications. Machines will battle machines in an automatic, continuous learning response cycle, and this will continue to enhance security postures.
Don’t be scared, protect yourself
As the opportunities to commit cyber attacks evolve, so must your security provision. Expert advice, round-the-clock monitoring and competent recovery software will offer the best chance of achieving ‘business as usual’ in the event of a cyber attack.
There’s nothing more frustrating to an employee than setting out to work on an important project, only to find that a crucial business application isn’t responding the way it should. Even worse is calling the IT help desk for assistance, and discovering that the issue is widespread and all IT can do is create a ticket, and transfer it to another team to solve.
Beyond individual frustration, overwhelmed IT staff can have a significantly negative impact on the wider business. When IT issues drag out over days or weeks, employee productivity and overall morale drop, while the cost of doing business and revenue loss both increase. In fact, the average employee spends a whopping 22 minutes a day dealing with IT-related issues.
With so many strategic goals competing for the CIO’s attention, maintaining IT stability shouldn’t have to compete for the top slot. Below are three tips to help CIOs and IT leaders improve user productivity and ease the burden on the help desk while keeping their focus on larger, company-wide strategic goals.
Focus on user experience rather than on incident tickets
Organisations that can proactively keep tabs on their enterprise environments and the efforts of IT, and can receive regular employee feedback about those efforts, are likely to maintain successful IT departments – while preserving (or even increasing) employee productivity by cutting down the number of issues they deal with.
Using automated platforms, the IT department can monitor the IT ecosystem, identifying issues and allowing IT staff to fix them proactively. For example, they can detect when employees resort to a workaround because an app isn’t working correctly. IT can then reach out to users and let them know that a solution is in the works that will make their work more effective. This is the kind of great IT support organisations can expect when IT uses automated platforms that leverage real-time end-user data to better understand the problems employees are facing.
To illustrate, a recent study conducted by Digital Enterprise Journal found that 41 per cent of organisations taking a proactive approach to IT – by empowering both employees and IT staff to be more collaborative – improved success rates in preventing performance issues before employees were ever affected. The study also found that these enterprises are enjoying greater IT success when they shift to monitoring IT services from the end-user perspective with proper business context, and 58 per cent of the survey’s respondents noted that monitoring IT performance from the user perspective is a top strategic goal for their IT transformation.
Use AI to extract vast amounts of insight from small PC devices
Strategic-thinking CIOs and their IT departments are choosing to use advanced analytics and artificial intelligence to comprehensively monitor and assess the user experience of devices and applications across the enterprise in real-time.
Leveraging the power of advanced algorithms, organisations can take a more advanced approach to IT by using end-user performance data to identify IT-related problems automatically and generate associated IT tickets. In addition, the right AI-based tools can remediate minor issues as they pop up without any human intervention at all, freeing IT staff to tend to only the most pressing concerns, or better yet – to focus on more strategic and value-add assignments.
This allows IT staff to stay ahead of hiccups with devices and applications across the organisation by giving them the context needed to make the best decisions possible and remediate issues quickly (sometimes automatically) and with ease.
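The automated ticket generation described above can be illustrated with a simple statistical check: when a device’s application response time drifts well beyond its usual range, a ticket is raised without the user lifting a finger. This is a minimal sketch, not a description of any vendor’s product; the device names, metric and threshold are hypothetical.

```python
import statistics

def is_anomalous(history_ms, latest_ms, z_threshold=3.0):
    """Flag a reading more than z_threshold standard deviations
    above the device's historical mean response time."""
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    if stdev == 0:
        return latest_ms > mean
    return (latest_ms - mean) / stdev > z_threshold

def auto_ticket(device_id, history_ms, latest_ms):
    """Return a ticket dict when the device looks unhealthy, else None."""
    if is_anomalous(history_ms, latest_ms):
        return {"device": device_id,
                "issue": "slow application response",
                "latest_ms": latest_ms}
    return None
```

A production system would of course use richer models and far more telemetry, but the principle is the same: the end-user data, not the end-user, opens the ticket.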
Remove end users as a critical step in the IT support process
There’s no doubt that system failures will continue to occur as technology becomes ever more complex and people and industries become more reliant on it for almost every function of everyday life. But by making use of automation and the right data, IT can be more proactive and vastly more efficient. The result is an IT department that can get ahead of issues, and employees who aren’t inconvenienced when IT-related problems arise.
Artificial intelligence and machine learning algorithms can sift through data and find issues faster than a human could ever hope to, which is why successful organisations are choosing to add AI into the mix to bolster the abilities of their IT staff and take the end-user out of the process. By letting the algorithms do what they’re best at, businesses can reduce the overall number of IT support tickets, eliminate employee frustration related to IT disruptions and most importantly, ensure employees across every department have devices and applications consistently available so they can do their jobs effectively.
When IT departments are empowered to be champions of the employee experience and given the tools to succeed, amazing things can happen within the organisation. Greater employee productivity leads to reduced operational IT spending because teams are able to focus on the most pressing concerns, as opposed to being bogged down with support tickets and placing employee issues in a queue to be dealt with later.
Automation, though still relatively new for business applications, has the power to bolster the abilities of IT staff across the world if the right dynamic is implemented. With the power of AI at the fingertips of IT departments, they’ll elevate their service levels to new heights and, more importantly, encounter fewer frustrated end users and less enterprise-wide impact – and help organisations win back their productivity.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 5.
In 2019, trust will be both essential to succeed and overwhelming to navigate. Here are five predictions on the role of trust in 2019, says Ojas Rege, Chief Strategy Officer at MobileIron:
These are all big challenges, as is to be expected with early-stage technologies that have such disruptive potential. Can the technology industry, however, truly walk the talk of trust? As Albert Einstein said, “Whoever is careless with the truth in small matters cannot be trusted with important matters.” We will see, in 2019, the first generation of AI startups for whom trust is a credible, long-term, deeply held value instead of a marketing tagline. That is when we’ll have a sustainable path for Artificial Intelligence to truly transform our lives.
Mike Puglia, Chief Product Officer, and Jim Lippie GM of Cloud Computing at Kaseya, talk compliance and SMBs.
As Compliance Grows, Data Governance in 2019 Will Evolve to a Company-Wide Initiative
What we have been doing over the last 18 months when it comes to security simply is not working. The Equifax hack in September 2017 changed everything – basically every U.S. citizen had their data compromised. To combat this type of national-scale hack, we are seeing more worldwide regulations go into effect – GDPR being the big one. But attacks are still happening and we have achieved maybe 20 percent of overall GDPR compliance, at the most. It will be in 2019 that the compliance conversation moves to data governance – a top-down initiative that starts at the board level, or business-owner level for SMBs. Businesses benefit from data governance because it ensures data is consistent and trustworthy. As a direct result of the backlash from failed security measures of the past 18 months, stringent compliance demands on global businesses will force stricter data governance in the new year, starting at the top.
2019 Will Bring a New Security Game Plan for SMBs
Data breaches and security threats will not let up in 2019. As SMBs realize just how vulnerable they truly are (at practically any point in time their data could be held to ransom, effectively putting them out of business), they will finally take security measures to heart. SMBs, like their large enterprise counterparts, will lock down a comprehensive backup and disaster recovery solution to protect their business and ensure continuity, while also forming new standardized, block-and-tackle game plans to keep the business safer.
A range of comments from NTT Security experts:
Kai Grunwitz, SVP EMEA, NTT Security:
“Artificial Intelligence (AI) is no longer the stuff of science fiction films. It’s already here, driving a Fourth Industrial Revolution which promises to radically reshape the world and society we live in. While there is plenty to get excited about, AI is not a silver bullet and we will see it continue to create a false sense of security in 2019. AI will be highly relevant to a security strategy but it will not, and should not, be the holy grail of cybersecurity. There are many more prosaic, tried-and-tested tools and processes, which are just as important, if not more so, in helping organisations mitigate cyber risk. Without basics like identity management, data protection, network segmentation or patching, all AI-driven security detection strategies will fail to protect critical data.”
Terrance DeJesus, Threat Research Analyst, NTT Security:
“In 2019, cryptomining functionality will be commonly observed in multiple malware variants and types. Cyber attack campaigns, especially those of financially motivated threat actors, will include some type of cryptomining malware, in many cases just one of several dropped binaries on victim PCs. Researchers investigating such breaches may start to treat detection of cryptomining malware as a sign of a larger intrusion and compromise, as threat actors carelessly drop cryptomining malware on systems while pushing toward their actions on objective. Publicly available cryptomining software such as XMRig and CoinMiner will continue to be altered and used in campaigns, while the industry is introduced to more advanced cryptominers leveraging Living off the Land (LotL) tactics, techniques and procedures, in which threat actors rely only on native software, processes and commands when compromising and leveraging a system. What is mined will depend on cyber black market coin acceptance, anonymity features, and the cryptocurrency market.”
Lawson Davies, Senior Solutions Architect, NTT Security:
“The Shadow Brokers, Wikileaks, Edward Snowden, EternalBlue, WannaCry… nearly every one of these people, exploits or groups is a household name today. They all have something in common too – insider threats. Without insiders who breach – and leak – sensitive information, none of these names would have seen nearly as much success. As such, we will see a move to a continuous authentication strategy enabled through User and Entity Behavior Analytics (UEBA). Although this has been talked about for a couple of years now, UEBA as a response to insider threats will become mainstream in 2019. It will address issues around credentials being vulnerable and how biometrics can be easily bypassed.”
Mark Taylor, Managing Consultant, NTT Security:
“Data controls will become more pressing, especially relating to unstructured data. Driven by regulations such as the GDPR and the near-normality of data breaches, understanding where information is stored, processed and communicated is becoming increasingly important to organizations. Add to this the advantages that solutions such as AI are bringing to industry, which will ultimately drive the smart world and cities we live in tomorrow, and it is easy for businesses and innovators to focus their efforts on how this evolves. One aspect of an organization’s information and data flow which holds perhaps a more significant risk is the unstructured information and data used in daily processes or business habits, which may involve downloading data onto mobile devices or carrying it to be read whilst travelling or working remotely. This, coupled with the need to manage cross-border controls, understand where data is in the organisation, manage retention and secure deletion, and handle access control records, subject access requests and transfer requests, creates an opportunity for innovators to create solutions which support organizations’ efforts to keep control over unstructured data.”
Kunal Hatode, Senior Manager EMEA Advanced Cyber Security Services, NTT Security:
“Credential theft from phishing attacks isn’t new, but that doesn’t make it any less of a target for cyber criminals. We all know phishing is the primary vector for delivering malware but, in 2019, bad actors will be more interested in gaining access to systems by deceiving people into unknowingly giving up their system credentials. As more and more services move to the cloud, bad actors no longer have to enter the victim’s network, but can simply access the ‘crown jewels’ in the cloud. The same applies to SMiShing attacks: while not new, they will be on the rise. Hackers will gain these credentials via SMS texts carrying links that lead to malware that collects credentials stored on mobile devices.”
It is well known that robots will replace some roles, but what is less discussed is how automation will also change many traditional human jobs and workplaces.
By Adam Greenwood-Byrne, CEO of RealVNC.
Human workers will become far more effective, creative and emotionally intelligent. Machine learning will improve human decision-making and the rising use of remote access technology will enable humans to influence faraway people and events by being ‘present’ in multiple devices and locations.
The increasing use of these technologies to analyse, identify and respond to changing trends in real-time will make human workers digitally omnipresent across departments, businesses, factories and devices. The ability to have foresight over processes will consequently change many roles from reactive and routine-based to analytical and strategic. While machines take over routine tasks, humans will be able to focus on a bird’s-eye view of whole processes.
Here are five ways that the human workforce will be transformed by emerging technologies in the next decade:
1. Workers will be digitally omnipresent
Technologies such as the ability to ‘remote in’ to a wide array of devices – from smartboards to industrial robots – will break down traditional silos of expertise and allow organisations to share live data and insights across locations. Workers will be able to work remotely more effectively and be in multiple places at once.
Remote access technology has already allowed customer service agents to remotely access and fix smart TVs, and even enable technicians to remotely re-calibrate medical imaging equipment on cruise ships or connect into satellite control stations.
Now the technology is being extended beyond technical support to other functions, from virtual training to cross-departmental oversight and collaboration. Companies could increasingly deploy remote access technology to allow one department to ‘remote in’ to interactive white boards or Augmented Reality headsets in other departments and collaborate on multinational, cross-departmental projects. Legal teams can remotely access staff devices in any location to oversee and enforce compliance. In future, business customers will even be able to access remote, overseas machines to conduct virtual training on real equipment in training labs.
The rise of ubiquitous remote access technology will mean every department, from legal to HR to accounts, will be able to more easily teach and learn from other departments, turning organisations into more tight-knit, cohesive units. This is set to unite organisations, bringing together business units, departments and divisions in a way never previously possible.
2. Organisations will be more transparent and workers multi-functional
Staff will also be able to fulfil more cross-sector, strategic roles. This will be facilitated by technology that will enable employees to ‘remote in’ to a wide range of other departments, divisions and devices and ‘be where they are not’.
The expansion of IoT to encompass much of the modern workplace – from the factory floor to the office – will see the deployment of remote access technology as a tool for everything from remote industrial maintenance to cross-departmental ‘virtual training’. Remote access will give managers the ability to ‘remote control’ a vast industrial IoT assembly line, giving industrial managers the power to influence and oversee entire end-to-end supply chains in real-time. This will be combined with Artificial Intelligence systems that will autonomously monitor production processes and aid predictive maintenance, enabling workers, such as engineers and technicians, to make smarter and more strategic technical interventions.
Organisations will become more transparent when people and data from every department and division are instantly available in other places. If HR or legal staff can ‘remote in’ to business travellers’ devices to monitor expenses claims for compliance, it could make corporate hospitality more transparent and even reduce potential problems like bribery.
3. IT staff will become tech strategists
Remote access technology will transform the position of IT manager from a purely technical role to one with responsibility for bringing together people and data from all departments in pursuit of wider business objectives. Data will power the digital economy and the most important people in companies will be those that understand how to extract commercial value from data to fuel business processes. This will elevate IT managers from pure ‘techies’ into ‘technology strategists’ responsible for the sharing of information across departments to drive innovation.
IT managers will also be responsible for organising and storing the company’s data in a way that makes it commercially valuable and useable to train the machine-learning algorithms that fuel future business processes. This has the potential to transform the role of IT manager into a board-level one.
4. Customer service agents will be transformed by AI and remote access
Other human roles will dramatically expand, too. For example, developments in AI and remote access are set to transform the role of the human customer service agent into a more wide-ranging and effective one. Customer service agents will have much greater responsibility and influence as remote access technology allows them to reach into any connected object, from devices in homes to hospitals, without physically being there. Future call centres will be composed of multi-skilled teams required to perform complex and multi-faceted tasks in each industry sector.
Customer support agents or technicians might be required to remote in to and fix a smart traffic light, combat a live cyber-attack on a train, or fix IoT equipment on an Industry 4.0 factory floor. Customer service agents will increasingly be educators, salespeople, and consultants, required to ‘remote in’ to customers’ homes or businesses and provide training and maintenance or up-sell new products.
Future customer service agents will have the opportunity to expand their skill-sets as they work with a range of technologies and machine-learning systems to augment key customer service tasks, such as fixing technical problems. They may even extract usable insights from data for an AI, for example by using an app to conduct automated sentiment analysis of customer queries.
A ‘virtual’ customer service agent and a human call-centre worker will in future work together to solve more problems at greater speed. For example, a machine-learning system could identify real-time patterns in customer service requests, such as more people reporting cyber-attacks in a certain location, enabling human technical support staff to more effectively target their resources. This will create more efficient and effective customer service agents with greater insight and foresight.
5. Finance and legal teams will have more power
Similarly, finance and legal managers will have the opportunity to be far more strategic and influential, and to make their outputs more effective. Finance managers will be able to ‘remote in’ to a PC in accounts receivable, and legal teams will similarly be able to access a PC in another department to assist with compliance issues in real-time. Machine-learning algorithms will be able to monitor data to identify patterns (such as people seeking certain types of legal support after a recent legal change) that will enable them to predict and prevent potential problems.
With greater power, they will have more responsibility for policing compliance with everything from data-privacy laws to new financial regulations.
The outcome will be that they will have increasingly cross-sector roles and responsibilities and greater power and influence. Just as IoT has merged previously separate sectors such as the aerospace and software industries, it will now merge previously separate business departments and divisions. Traditional silos will be dissolved, and knowledge will spread further and faster than ever before. The ability for humans to be digitally omnipresent across all connected devices will give everyone from company lawyers to finance managers greater power and influence, and foster closer cross-departmental collaboration and learning.
It isn’t news that DevOps and IT security teams often struggle to align their departments and maintain a coherent balance between keeping a business secure and developing new applications to maintain customer interest. While security processes are a necessity, they can be deemed by DevOps teams to be manual and cumbersome, blocking the agility that makes them so effective in bringing their solutions to market. IT teams conversely feel their counterparts are prepared to sacrifice security in the name of innovation and revenue.
Even if both teams do respect the other’s intentions, any conflict can lead to delays for both. For example, an IT team may need to make crucial updates to network security and warn other teams that they may experience some downtime during the implementation. However, DevOps teams have typically been given more leeway in how they operate, as they are so important in today’s software-driven world, and may ask for the update to be delayed so they can complete tasks or meet deadlines, leaving the IT team waiting and losing time rescheduling their own work.
This has unfortunately led to a myth that DevOps teams choose to ignore security. In reality, developers are keen to know that their apps and the environment they work in are secure – but at the same time, they don’t want security to get in the way of them quickly delivering valuable new products and software features.
So, is there a way for DevOps teams – one of the most important resources in many modern businesses – to embrace security without impacting agility? Can the integration of DevOps and security be done in a way that alleviates tensions and promotes collaboration – while actually improving both security and agility in the process?
Yes. The secret is automation.
Reconciliation through automation
As C-suite executives are now more likely to focus on security, due to the obvious financial and reputational consequences of a breach, DevOps teams should define how they protect and secure their multiple projects and production environments. Automating security as part of the CI/CD process allows DevOps teams to easily follow company security policies because they will be embedded into the automation pipeline.
This process can run continuously with little oversight, effectively minimising stress about security, while still automating policy changes and activities so that there is a significantly reduced chance of error. Although the automation solution works in the background, it can be consulted at any point for data on vulnerabilities, compliance requirements, security policies and network connectivity, via its continuous scanning abilities.
Additionally, DevOps teams are already familiar with automated tools in their daily operations and communications – and they are likely to be accepting of switching to a security solution that integrates with their existing processes.
Automation is the key to creating reliable, effective and connected “DevSecOps” teams, as it makes the secure option the easy option. It combines DevOps’ existing use of automated tools to achieve their ultimate goal of continuous, on-time and on-budget deployments with security’s focus of reducing human error and maintaining continuous visibility into potential vulnerabilities.
A guiding principle of DevOps is collaboration, which is often equated with the idea of shared responsibility. To successfully embed security into the DevOps process, security teams and developers must work together and establish shared responsibility. But how?
Some organisations may assign a security representative in each development team. This person acts as a pivotal link between the two teams – improving communication and building a balanced process that considers everyone’s mutual interests. A continuous flow of knowledge sharing among both teams ensures a level of maturity that allows a business to secure applications and services with an automated solution.
Security teams can begin to define “guardrail policies” that allow development teams to deploy continuously, with the caveat of having to obey security and compliance policies. This is critical for both teams. This new way of working means developers will be able to test their security posture at every step in the CI/CD pipeline and correct things when necessary, and security teams can comprehensively ensure security and compliance throughout the development process.
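Guardrail policies of this kind can be expressed as automated checks that run at each step of the CI/CD pipeline, failing any deployment that violates them. Below is a minimal sketch under stated assumptions: the rule set and the fields of the deployment description are hypothetical examples, not any particular tool’s schema.

```python
# Guardrail sketch: a deployment description is checked against
# security rules before it is allowed to proceed. The rules and
# the deployment fields below are hypothetical examples.

def check_guardrails(deployment):
    """Return a list of policy violations; empty means deploy may proceed."""
    violations = []
    for port in deployment.get("open_ports", []):
        if port not in (80, 443):
            violations.append(f"port {port} is not on the allowed list")
    if not deployment.get("tls_enabled", False):
        violations.append("TLS must be enabled for all services")
    if deployment.get("runs_as_root", False):
        violations.append("containers must not run as root")
    return violations

def gate(deployment):
    """CI gate: True when the deployment passes every guardrail."""
    return not check_guardrails(deployment)
```

Because the check runs inside the pipeline, developers see violations immediately and can correct them before release, while the security team retains a single, auditable definition of the policy.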
Any belief that there is common discord between DevOps and IT security teams is unfounded. While it cannot be denied that both teams affect each other, this is not due to conflict – it’s due to business needs. If the two teams work together, they can both achieve their goals and be part of a secure, innovative and profitable organisation. The first step is to accept collaboration is a necessity and by embracing security instead of being concerned by it, DevOps teams can stay in control of how their needs work around IT teams’ processes. Then, an automated security solution can be deployed to improve the efficiency and outcomes of both departments – and, in turn, the entire organisation. It’s time for DevOps to embrace DevSecOps.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 6.
Radware’s network and application security specialists think four major events will occur in 2019 and that CSOs need to prepare for a dirty dozen of attack types.
1. The public cloud will experience a massive security attack that shakes the confidence of all users
The shift to public clouds to support digital transformation has created the biggest and most urgent security problem CSOs face. Business applications, from web analytics platforms and domain name services, through app stores and marketplaces, to mission-critical ERP, will all come under intense fire as hackers look for a way in, motivated by greed, a social cause or politics. We are likely to see an attack of such scale in 2019 that every business will be forced to re-evaluate its use and security of the cloud.
2. Ransomware hijacks targeting the Internet of Things (IoT)
Ransomware will remain popular with hackers and will be joined by two newer forms of ransom attack: hijack ransoms, which make a service unavailable until a ransom is paid, and IoT device ransoms, which could force not just companies but also individuals to pay on-the-spot ransoms to regain control.
In particular, Radware expects to see scenarios where health authorities are paralysed - cloud-based monitoring services or devices used to administer drugs could be used to establish a foothold in the delivery of care and in turn create national emergencies in a worst-case scenario.
3. The rise of the nation state availability-based attacks will accelerate
Radware expects to see Nation State activity capitalising on uncertainty. Organised groups will create widespread disruption, either as solo endeavours or in conjunction with armed conflicts.
Communications systems, the backbone of life and trading, will be a target. We are likely to see attempts to bring about multi-million dollar losses. Expect more governments to be embarrassed, shamed and manipulated, as well as to face physical disruption to services in 2019.
4. DDoS swarmbots and hivenets will come of age
Based on developments seen on the dark web, Radware predicts that cybercriminals will upgrade IoT-based botnets with swarm-based technology to create more efficient attacks. Swarmbots will turn individual IoT devices from ‘slaves’ into self-sufficient bots, which can make autonomous decisions with minimal supervision, and use their collective intelligence to opportunistically and simultaneously target vulnerable points in a network.
Hivenets take this a step further and are self-learning clusters of compromised devices that simultaneously identify and tackle different attack vectors. The devices in the hive can talk to each other and can use swarm intelligence to act together, recruit and train new members to the hive.
When a hivenet identifies and compromises more devices it will be able to grow exponentially, widening its ability to attack multiple victims simultaneously. This is especially dangerous as 5G rolls out, since hivenets could take advantage of the improved latency and become even more effective.
Radware therefore believes CSOs need to be prepared for a ‘Dirty Dozen’ of attack types:
#1. Advanced persistent threat or APT
#2. Organised cyber crime
#4. DDoS Groups
#6. Patriotic hackers
#7. Exploit kits
#10. Insider threats
#12. Consumer tools
Nik Whitfield, CEO, Panaseer, comments:
“The two main objectives for cybercriminals are typically to steal or break something. This might be stealing personal information or intellectual property, and breaking things such as websites and services. However, in 2018 we saw a continued growth in good old fashioned state sponsored propaganda, through widespread influencing campaigns over social media. The Internet and social media are just so good at scaling propaganda that it’s hard to resist.”
“In 2018 the world didn’t experience any mega-ransomware attacks, such as with WannaCry and NotPetya in 2017, but it’s important to note that cybercriminal and nation state hacking groups haven’t disbanded – the truth is that if they are meeting their aims using covert tactics and existing techniques, why deploy new ones? Instead they have likely been working to develop new techniques, which they can deploy as defences are strengthened.
“Organisations need to be prepared for more sophisticated attacks in 2019. As no company can be 100% secure there must be clarity on acceptable levels of risks and investment in the fundamentals of cyber hygiene – knowing, on any day, what assets you’re protecting, how they’re controlled, and how they’re vulnerable - will crucially help protect against the vast majority of future attacks.”
Corey Nachreiner, CTO at WatchGuard Technologies, talks vaporworms, global internet disruption and rogue AI chatbots for 2019:
Cyber criminals are continuing to reshape the threat landscape as they update their tactics and escalate their attacks against businesses, governments and even the infrastructure of the internet itself. The Threat Lab’s 2019 predictions span from highly likely to audacious, but consistent across all eight is that there’s hope for preventing them.
The WatchGuard Threat Lab’s 2019 Security Predictions are:
1. Vaporworms or Fileless malware worms will emerge
Fileless malware strains will exhibit wormlike properties in 2019, allowing them to self-propagate by exploiting software vulnerabilities. Fileless malware is more difficult for traditional endpoint detection to identify and block because it runs entirely in memory, without ever dropping a file onto the infected system.
2. Attackers hold the Internet hostage
A hacktivist collective or nation-state will launch a coordinated attack against the infrastructure of the internet in 2019. The Border Gateway Protocol (BGP), which routes traffic between the networks that make up the internet, operates largely on the honour system, and the 2016 DDoS attack against DNS provider Dyn showed that a single attack against a critical provider or registrar could take down major websites.
3. Escalations in State-level cyber attacks force a UN Cyber Security Treaty
The UN will more forcefully tackle the issue of state-sponsored cyber attacks by enacting a multinational Cyber Security Treaty in 2019.
4. AI-Driven chatbots go rogue
Cyber criminals and black hat hackers will create malicious chatbots on legitimate sites to socially engineer unknowing victims into clicking malicious links, downloading files containing malware, or sharing private information.
5. A major biometric hack will be the beginning of the end for single-factor authentication
As biometric logins like Apple’s Face ID become more common, hackers will take advantage of the false sense of security they encourage and crack a biometric-only login method at scale to pull off a major attack.
6. Hackers to cause real-world blackouts as targeted ransomware focuses on utilities and industrial control systems
Targeted ransomware campaigns will cause chaos by targeting industrial control systems and public utilities for larger payoffs. The average payment demand will increase by over 6500 percent, from an average of $300 to $20,000 per attack. These assaults will result in real-world consequences like city-wide blackouts and the loss of access to public utilities.
7. A WPA3 Wi-Fi network will be hacked using one of the six Wi-Fi threat categories
Hackers will use rogue APs, Evil Twin APs, or any of the six known Wi-Fi threat categories defined by the Trusted Wireless Environment Framework to compromise a WPA3 Wi-Fi network, despite enhancements to the new WPA3 encryption standard. Unless more comprehensive security is built into the Wi-Fi infrastructure across the entire industry, users can be fooled into feeling safe with WPA3 while still being susceptible to attacks like Evil Twin APs.
Instead of taking on every issue with equal gusto, we would all be well advised to focus on the battles that are most solvable and whose outcomes will have the greatest impact. The concept of choosing one’s battles is particularly important for IT teams in an era of digital transformation.
By Alex Teteris, principal technology evangelist, Zscaler.
As organisations embark on digital transformation projects, it is clear that the technology landscape is constantly evolving. The pillars upon which business IT systems were built 30 years ago are rapidly fading and organisations are now pushing for IT to change. Technology has shifted from an IT issue to a business issue as the boardroom looks at ways to improve efficiency and save costs. IT teams need to embrace the journey of digital transformation to ensure objectives are met quickly, efficiently and cost-effectively.
IT teams are therefore under more pressure than ever to provide a modern infrastructure setup and applications that can help businesses remain competitive. But digital transformation is complex, and pressure on IT isn’t being applied by just one department. It is coming from all sides: business units, finance and legal teams, and network and security infrastructure owners. Each of these areas has individual requirements for tackling transformation, and trying to cater to each group’s needs at the same time is almost impossible. IT teams have to pick their battles wisely to avoid hitting an impasse on the transformation journey.
IT teams have always been under pressure to deliver, but the technology environment has changed significantly from even a decade ago. Back in 2008, cloud computing was in its relative infancy—AWS had only been relaunched in its current format two years earlier—and many organisations relied on traditional technology and software procurement cycles, which included roadmaps that looked three to five years ahead. IT teams were expected to adopt the technology that could support the business’ wider progression plans for the entirety of that timeframe, and there was often little tolerance for failure.
Today, however, the rigid roadmap approach is obsolete. Technology is evolving rapidly, arguably by the day, and businesses that only look to change after multiyear planning periods will forever be playing catch-up with competitors making use of disruptive capabilities. Whether it’s advancements in data centre consolidation, the IoT, greater use of SaaS and Infrastructure-as-a-Service (IaaS) offerings, collaborative tools, or any of the countless others, companies can’t afford to switch off from evolution at any point.
The changing ways of working are having an influence too. More individuals are working from home or other locations outside the office; employees want access to more intuitive business applications rather than clunky software, and the C-suite wants to make better use of big data and IoT to make processes more streamlined and cost-effective. This all adds up to a lot of requirements to be juggled by IT, which will want to be seen as a facilitator of digital transformation rather than a restrictive bottleneck.
Taking a step back and seeing the bigger picture
However, IT can’t simply become a team of “yes” people. For every capability that technology can unlock, it raises questions around business performance, change and process management, and, of course, security. How can the ever-increasing number of users embracing new cloud technologies and applications be protected? How can sensitive data be locked down to avoid exposure? What’s needed to ensure compliance with GDPR and other data laws? How can an enterprise gain complete visibility into data flows and cyber threats — both internally and across the wider threat landscape? Will the current internet setup be sufficient for the expanding geography of users and applications? These are a few examples of the complexity involved in adapting to the current transformation trend.
Attempting to satisfy all requirements at the same time is too complex, and IT teams risk losing sight of the bigger picture. They should therefore take a step back and come at things from a different angle. Instead of thinking about technology that can satisfy one or two requirements, they need to find tools that ideally fulfil multiple purposes at once. Not only is this more cost-effective, it also frees time and resources for IT to focus on other issues.
Enterprise-wide transformative technology has come a long way and can now help businesses overcome a lot of issues at once — all with what can be a relatively small investment of time and resources. However, enterprises have to plan their transformation strategy with the bigger picture in mind—it is not only about moving applications to the cloud. Organisations have to consider the consequences such a move will have on the entire network and security infrastructure. With an increasing number of employees working from branch offices or on the road, and accessing their data in the cloud remotely, latency can become an issue with the traditional hub-and-spoke network setup. These days, employees expect hassle-free connectivity to enterprise networks and data. However, in traditional network setups, traffic is routed back to the corporate data centre for security scans, which hampers performance. The same situation can arise when it comes to enabling remote access VPN for mobile users.
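The backhauling penalty described above can be illustrated with a toy latency model. All of the figures below are hypothetical assumptions, chosen only to show why routing branch-office traffic through a central hub for security scanning adds delay compared with local internet breakout:

```python
# Rough model of the latency penalty when branch-office traffic is
# backhauled to a central data centre for security scanning, versus
# breaking out to the cloud application locally.
# All per-link latencies are illustrative assumptions, not measurements.

LINK_LATENCY_MS = {
    ("branch", "datacentre"): 40,     # MPLS leg from branch to the hub
    ("datacentre", "cloud_app"): 20,  # hub out to the SaaS provider
    ("branch", "cloud_app"): 25,      # direct local internet breakout
}

def round_trip_ms(path):
    """Sum the one-way latency of each hop, doubled for the return trip."""
    one_way = sum(LINK_LATENCY_MS[hop] for hop in path)
    return 2 * one_way

# Traditional hub-and-spoke: branch -> data centre -> cloud application
backhauled = round_trip_ms([("branch", "datacentre"), ("datacentre", "cloud_app")])
# Local breakout: branch -> cloud application directly
direct = round_trip_ms([("branch", "cloud_app")])

print(f"Backhauled via hub: {backhauled} ms round trip")  # 120 ms
print(f"Local breakout:     {direct} ms round trip")      # 50 ms
```

Under these assumed figures the backhauled path more than doubles the round-trip time, which is exactly the user-experience problem that local breakout and SD-WAN designs aim to address.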
In other words, taking one step towards digital transformation without taking the bigger picture into account can be counterproductive, opening up new battles that result in unhappy users contending with a sub-standard experience.
Prior to moving applications to the cloud, companies should consider how this step would affect network traffic patterns and then plan accordingly, taking local internet access into consideration. Similarly, software-defined wide-area networking (SD-WAN) can help to solve one battleground, but not without security requirements at branch offices being given due consideration.
Ultimately, IT teams that attempt to handle individual issues and requirements as they arise will soon become overwhelmed. With numerous tools and solutions available, it can be tempting for businesses to start adopting new technology on a case-by-case basis, yet this often results in bloated networks, overlapping technology, and costly and unnecessary licences. The speed to deploy and improve performance with the cloud, SaaS, and business applications must be put at the forefront, but how and when to deploy must be chosen carefully. Digital transformation, and the pressure it puts on IT teams, is an ongoing and tricky minefield to navigate. If IT teams take a step back and choose their battles wisely, they are better positioned to use their expertise to suggest and adopt the technology that will empower the entire organisation to thrive.
With 2019 just around the corner, this issue of Digitalisation World includes major coverage of technology predictions for the new year. Part 7.
Multi-cloud and micro-services will bolster digital transformation in 2019, says Derren Nisbet, UK and Ireland Managing Director and Regional President at Unit4.
Technology advances and business demands mean that enterprise technology is unrecognisable from what was sold just five years ago. The focus is on technologies that can boost people’s productivity and the ability to serve customers. In 2018, we said that enterprise tech would continue its rebirth this year, with success revolving around ensuring excellent customer service and delivering what the customer wants as quickly and efficiently as possible.
In 2019, we believe we’ll see organisations focusing further on simplifying their enterprise architectures, transforming for the everything-as-a-service economy and changing the way they operate. With this in mind, we’ve spoken to customers and compiled a list of key trends we believe will affect large services businesses in 2019:
Multi-cloud and connected clouds
Organisations realise that going all public cloud, private cloud, or data centre isn’t the best option. So, we believe connected clouds or multi-cloud approaches will continue to develop to meet the business change and digital transformation requirements of organisations that need a mix of cloud environments.
Augmented analytics and Natural Language Processing
By the end of 2019, finance and management teams will benefit from the automation machine learning algorithms can bring to massive data sets. They won’t be spending time on transactional tasks that don’t add value. Deep learning algorithms will take advantage of relevant information and help put it into context, memorising learnings and applying them to new inputs. Organisations will benefit from a multi-layer neural network that learns by example and improves the quality of its results over time.
New user experiences
We’re seeing a big change of focus from UI to modern user experiences (UX), such as conversational UX and focused task apps, where the user experience is all about the optimal way to arrive at a specific outcome. Conversational UX gives users a more natural, human-like way to interact with systems – for example, enterprise software that leverages chatbots via instant messaging, virtual assistants, and other AI-powered apps and devices that help users get answers to their questions.
Changing application architecture strategies to put an end to massive software customisation projects
Big monolithic software applications will be replaced by more flexible, distributed and scalable architectures. Enterprise software will become more open for customers to build custom apps. We’re already seeing the migration from monolithic software stacks to microservices that help isolate and compartmentalise software development. Breaking apart code in this manner allows organisations to focus exclusively on specific areas with minimal impact overall. This trend will continue in 2019 with organisations preparing their technology infrastructures for this approach to software adoption, negating the need for massive customisation projects.
Citizen developers
The citizen developer market is set to grow and represents a new generation in the workforce. They see technology as a means to create value in their work, opening doors to innovation and higher efficiency, and providing new ways to accomplish goals. Tools are emerging that allow them to develop front-end applications that map exactly to the processes used by their organisations, taking advantage of business data and intelligence that was once relegated to the back office. Vendors are redesigning software architectures to support this change, enabling customers to build out from the core, using loosely coupled microservices so employees can create service-enhancing ERP extensions in their own image.
Edge computing
Edge Computing will change how we process data. We’ll see more computing happening at the point of initial data capture to remove processing workload from the server side. This is essentially what’s already happening with IoT; however, in 2019, we’ll see this in other, non-IoT use cases as well, like ensuring financial compliance locally instead of in a central data centre. Edge Computing takes advantage of microservices architectures, where chunks of application functionality can be sent to edge devices, greatly expanding the available computing power.
As the AI market matures, chatbot consolidation will begin: having a different chatbot for everything isn’t good UX. The code needed for a chatbot to perform its dedicated task on the backend is still valuable, so these bots will be recoded to sit between a single customer-facing chatbot and the enabling backend software. This means the user will ask Cortana (for example) to perform a task, Cortana will ask the bot, the bot will perform the task, and Cortana will inform the user that the task is complete. Chatbots therefore interface with other solutions such as banks, airlines and ERP software, becoming part of a larger ecosystem that sits one step removed from the consumer.
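The go-between pattern described above can be sketched as a single front-end assistant that dispatches requests to specialised back-end bots and relays the result. All of the class names, domains and replies here are invented for illustration:

```python
# Sketch of chatbot consolidation: one consumer-facing assistant routes
# requests to specialised back-end bots and relays their results.
# Bot names, domains and tasks are hypothetical examples.

class BankBot:
    def handle(self, request):
        return f"balance checked for account {request['account']}"

class AirlineBot:
    def handle(self, request):
        return f"seat booked on flight {request['flight']}"

class Assistant:
    """The only bot the user ever talks to; everything else sits behind it."""
    def __init__(self):
        self.backends = {"banking": BankBot(), "travel": AirlineBot()}

    def ask(self, domain, request):
        bot = self.backends[domain]   # pick the specialised back-end bot
        result = bot.handle(request)  # the bot performs its dedicated task
        return f"Done: {result}"      # the assistant reports back to the user

assistant = Assistant()
print(assistant.ask("banking", {"account": "12345"}))
print(assistant.ask("travel", {"flight": "BA117"}))
```

The back-end bots keep their task-specific code, but the user only ever converses with the one assistant, which is the consolidation the prediction describes.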
DW talks technology with NaviSite, an international provider of enterprise class, cloud-enabled hosting, managed applications and services. The company’s success in this field was recently recognised at the SVC Awards.
Can you talk us through some of the company’s key milestones to date?
Please can you provide a high level view of NaviSite’s technology portfolio?
Navisite, LLC, a part of Spectrum Enterprise, is a leading international provider of enterprise-class, cloud-enabled hosting, managed applications and services. Navisite provides a full suite of reliable and scalable managed services, including Application, Cloud Desktop, Cloud Infrastructure and Hosting services for organizations looking to outsource IT infrastructures to help lower their capital and operational costs. Enterprise clients depend on Navisite for customized solutions, delivered through an international footprint of state-of-the-art data centres.
In more detail, please can you talk us through the NaviSite portfolio, starting with IaaS?
Migrating to the cloud faster and with better results, while freeing up internal IT resources, is at the heart of IaaS and Navisite. Proven experts will oversee your complete migration to our VMware-based clouds or Microsoft Azure and ensure the platform is optimized to meet your business use cases. Navisite’s cloud migration methodology takes advantage of an array of tools and best practices to ensure a successful migration with minimized risk.
And then there are Application Services?
Secure, resilient operation of enterprise applications is crucial to modern business. Maintaining the integrity of ERP applications and communications resources can be challenging and time-consuming, often restricting IT personnel from focusing on other projects with larger business implications. Navisite offers application services designed to help simplify, streamline and secure enterprise applications across the organization and for an array of distinct business functions.
Navisite’s Managed DaaS securely delivers virtual desktops across the enterprise, with an eye on delivering a truly client-centric solution, letting you choose the levels of service that best suit your business needs.
Our latest iteration builds on VMware’s powerful new Horizon 8.0 platform, combined with industry-leading NVIDIA GPUs, to offer our clients everything from partially to fully managed VDI through to a full Desktop-in-the-Cloud experience, marrying Horizon 8.0 with Office 365 and Windows 10.
Navisite DaaS continues to offer enhanced, extended capabilities to support accelerated graphics workstations and sensitive data services, ensuring your employees can get access to the desktop environment they need, no matter where they work.
Navisite is a premiere managed hosting provider, offering fully managed IT infrastructure and applications in highly secure traditional or cloud-enabled environments. Enterprise-class infrastructures require expert managed-hosting services that let your IT personnel focus on end-users and spend less time on upgrading, configuring and managing hardware, middleware and applications.
Navisite offers a rich portfolio of enterprise managed hosting solutions designed to optimize mission critical IT infrastructure performance and provide flexibility to meet current business needs and future demand. Our world-class hosting support enables rapid resource provisioning with additional hardware, middleware, or application support to meet your IT requirements.
In terms of solutions, you offer disaster recovery in the Cloud?
Regular data backups are not enough to keep you up and running when a failure, cyberattack, natural disaster or human error strikes. An advanced replication, recovery and resumption solution such as Navisite Managed Disaster Recovery Cloud can keep your business up when things go down.
And you offer help moving legacy apps to the Cloud?
Yes – we offer a full professional service, including Cloud Assessment and Managed Migration.
And Oracle solutions are another major focus?
Yes – we have been supporting Oracle solutions for over 15 years and we are currently a “XXX” partner.
As well as protecting the mobile workforce?
Yes – we have managed solutions that will secure endpoint devices and secure access to an organisation’s data and applications for mobile users, and our managed DaaS and Citrix solutions deliver secure desktops to thin client and mobile devices.
Looking at industry trends more generally, how do you see the edge making an impact over time?
Not one for us – really an IoT question.
And then there’s the intelligent automation piece?
We’ve begun to make significant investments in AI Operations, using AI to deliver predictive, automated support services for our clients. The other area we see significant use of AI is in the security arena, where automated threat detection and response are really the only practical solutions to the developing threat landscape.
And any thoughts on the likely impact of 5G?
It will obviously facilitate better mobile access to applications and data; faster connectivity potentially means you can move more of the processing into the cloud or other centrally hosted systems and reduce the “thickness” of endpoint devices. Potentially this means an increase in the need for secure, cloud-hosted systems; but also increases the complexity and security concerns.
Any other trends we should look out for?
Serverless computing and the use of containers will be bigger topics next year, with more practical use cases and wider adoption. We are also already seeing that the initial wave of cloud migrations, based around moving in-house servers or VMs to IaaS, is being replaced by adoption of more advanced services like PaaS.
Finally, what can we expect from NaviSite in 2019?
We will continue to invest in our partnership with Microsoft – building out more managed capabilities on the Azure platform to help our customers migrate to it and take best advantage of the advanced functions on Azure, which grow in number all the time. Protecting the availability and integrity of data is also a key focus for us in 2019 as we work to strengthen our DRaaS, BC/DR and backup offerings with partners like Commvault, VMware and Zerto. We will also continue our investments in security and compliance tools and processes as we seek to abstract the complexity of securing data across multi-cloud environments. Lastly, we see the increasing skills gap as the biggest barrier to transformation for our clients, so our ongoing investment in our people, in terms of training and process, is key to realising all of our ambitions for 2019 and beyond.
DW talks technology with EACS, an IT services company with more than 20 years’ experience of supplying software, hardware and, more recently, managed services to the UK market. The company’s continuing success – highlighted by its recent win at the SVC Awards – is based on key partnerships with what reads like a ‘who’s who’ of IT vendors.
Please can you give us a brief background on the company?
EACS has been providing IT services in the UK for over 20 years, starting with supplying hardware and software to local businesses and developing into a strategic IT partner to organisations across the UK. In 2017 the business was acquired by the Streamwire Group and has gone from strength to strength with a number of key Managed Services deals supporting successful projects around the Microsoft Cloud and other solutions with partners such as NetApp, Citrix, Sophos and Mimecast.
And what have been the key milestones to date?
As EACS has been in existence for over 20 years there have been many milestones, but the key ones in the last 18 months have been:
Please can you provide a high level view of EACS’s technology portfolio?
EACS prefers to focus on a number of key partners in order to ensure we continue to be renowned for our deep technical excellence. In the datacentre these are primarily NetApp, HPE, Dell (particularly VMware), Microsoft, Veeam and Nutanix. We have particular strength in the Microsoft Cloud, and other key partners include Sophos, Mimecast and Citrix.
And how do you believe that EACS distinguishes itself from its competitors?
We pride ourselves on being very consultative in our approach, and this is borne out by our long-term relationships with customers, some in excess of 20 years. We also have extremely deep technical capability – more than 75% of our headcount are technical staff, whether on our Service Desk or in our Professional Services or Solutions teams.
In more detail, please can you talk us through the EACS portfolio highlights, starting with the hardware and software offering?
EACS provides hardware and software supply for strategic and business needs, dependent on client requirements. EACS has a team of product specialists who can work with clients to ensure they gain value throughout the supply chain. As part of this facility, EACS has its own logistics and configuration centre to support its customers and deliver projects effectively.
And there’s a special focus on the financial sector?
EACS has a number of long-term relationships with Financial Services companies, which we recently strengthened with our acquisition of Sentronex. The Financial Services sector is one that has a mature attitude to risk and is generally able to quantify the financial impact of downtime, and so we have had great success in supporting these organisations with our Workplace Recovery and Backup/DR solutions allied to our high-touch on site Managed Services capability.
Looking at industry trends more generally, how do you see the cloud market developing over time?
Many of our customers are adopting a SaaS-first approach for key applications, from HR to Expenses systems and everything in-between. While this trend continues in SMEs, our more Enterprise customers are reviewing the uptime that can be achieved with a Cloud solution versus the control they can exhibit on-premises. To nobody’s real surprise, a hybrid model is emerging with BAU functions outsourced to the cloud and Tier 1 applications delivered from the datacentre. Over time, portability of workloads will be key – the ability to right-size an infrastructure on-premises to most cost effectively deliver an application, while being able to burst to the public cloud as business cycles dictate. Organisations such as Nutanix, and VMware with their AWS partnership, are well positioned to take advantage of this trend.
And how do you see the edge making an impact over the next few years?
The Intelligent Edge is already here and is only going to grow in relevance. Smart Cities, Connected Cars etc. are already causing enormous quantities of data to be generated and current technology is cost prohibitive in terms of sending that data to an environment powerful enough to crunch the data. This data needs to be cleansed at the edge, with only the crown jewels correlated back at base for analysis and business insight. If I were starting my IT career now, I’d be getting into the field of IoT security – so many connected devices, but in many cases in a race to market that doesn’t cater for the security-first mentality we are used to in the Enterprise.
And then there’s the intelligent automation piece?
Automation is already permeating our IT experience – we have been using autocomplete in search engines for years but don’t think of this as automation. We have seen a significant uptick in our customer base in the use of Robotic Process Automation in recent months; as the technology becomes infused with cognitive capabilities it is really starting to strike a chord with business leaders from a value-add perspective, rather than just a pure ROI discussion. In many cases automation is still seen as a threat. From a vendor perspective, those who can humanise AI and ensure its consumers do not perceive it as a danger will be the most successful, while organisations that are able to see the value in automating the mundane and focusing on that which requires the human touch will gain competitive advantage.
And any thoughts on the likely impact of 5G?
5G is an interesting one – on the one hand, I would like to see efforts focussed on a minimum viable connectivity standard across the country. With our Head Office being in Cambridgeshire we deal with many businesses around East Anglia for whom public cloud is a non-starter due to challenges around connectivity.
On the other hand, 5G has a huge role to play in the Smart City, dealing with the huge quantities of data generated and able to transmit back to base at something close to LAN speeds. If the promises of significantly lower latency are realised it becomes a viable technology backbone for the application of augmented reality in the field. At Microsoft Inspire this year I saw a demonstration of the use of HoloLens, with a junior field engineer directed by a senior colleague on which components to replace in some failed industrial hardware over a mobile Skype session. This is a pipe dream today in some areas of the UK; perhaps 5G can make this a reality.
And what about mobility and/or big data – how do you see these developing on their own and connected with some of the technologies already covered?
Mobility will continue to be a dominant trend. More and more organisations are allowing flexible or outcome-based working – including EACS! – where job roles permit, and the rise of SaaS applications makes this simpler to facilitate. The flip side is that this obviously brings security challenges with it – every time I catch a train I can see somebody on the public Wi-Fi accessing sensitive data in full view of anybody who can see their screen. As the proportion of the workforce who have grown up with a smartphone in hand continues to increase, mobility will become a talent attraction – and indeed retention – mechanism. Vendors focussed on SaaS application security will have a lot of success over the next couple of years, particularly as breaches continue to be front-page news.
Big Data is already at the heart of our experiences with the Enterprise – from supermarket loyalty cards to the traffic alerts on your satnav – and I think this will continue, with analytics infused with AI. The frameworks created by the hyperscalers for managing big data will continue to make the technologies more accessible to the SME and midmarket, and there is already a burgeoning value-added partner ecosystem springing up around making these tools work for smaller organisations.
Any other trends we should look out for?
2019 is hard to predict; with the spectre of Brexit looming it’s tough to say what the impact will be, other than that when uncertainty looms IT spending typically focusses on keeping the lights on rather than adding value. To this end I expect to see an increase in interest in refurbished end-user hardware, in which we have already seen real interest from a “green” perspective in 2018. More broadly, any vendor who can really crack the multi-cloud portability message will be in a strong position, and as cybercrime continues to hit the headlines I’m sure we will see some innovation here, with 2019 perhaps being the year sandboxing and isolation technologies for web browsing become mainstream.
Finally, what can we expect from EACS in 2019?