TSB seems to be just the latest in a depressingly long list of high-profile organisations to have suffered what can only be described as a ‘catastrophic’ IT meltdown. And we’re not talking about the cybersecurity problems that beset more and more companies and are, sad to say, just a part of everyday life. No, ransomware, viruses and the like can cause mayhem, but they are all but impossible to defend against (the bad guys are always ahead of the good guys). IT catastrophes, where something goes wrong during a migration, or where flipping to a DR site brings only further chaos and confusion, are, on the whole, completely avoidable.
OK, so the complexity of a massive corporation’s IT infrastructure is getting beyond human understanding. Nevertheless, with the proper level of due diligence and careful planning, it should be possible to understand all of the issues surrounding any proposed IT refresh, migration or minor tweak. However, in addition to the required brains trust, there is one more crucial ingredient that, one suspects, is frequently the main cause of the IT disasters that hit the headlines: money. We all have our own views on insurance. Car and house cover are pretty much compulsory, but what about health insurance, long-term sickness insurance, life insurance, holiday insurance, insurance insurance? If money were no object, no doubt we’d all sign up to everything. But money is a major constraint, so we all decide what we can afford, what we can’t, what really matters and what doesn’t. It’s called risk assessment.
Back to our IT scenario: when it comes to, say, migration, if money were no object, it would be entirely possible to build a completely new infrastructure, build new applications that you know work on it, test it, test it some more and then plug it in and go live, turning off the legacy infrastructure and apps at the same time. Clearly, such an approach is, for all but a few, prohibitively expensive. So decisions have to be made about what gets refreshed, what legacy kit and apps remain, how the mix should work together, where the potential problems lie and how to overcome them.
I’d love to be able to offer a foolproof formula for transforming IT in a pain-, stress- and catastrophe-free fashion. Sadly, there is no such thing, just the need for a properly thought-out and executed risk assessment, where no corners are cut. Yes, it might seem painful to spend six-figure sums or larger as ‘just in case’ insurance, but if the potential alternative is reputational damage and the possible closure of your business, well, do you have a choice?
IT spending from line-of-business (LOB) will reach $191 billion in Western Europe in 2018, a 5.7% increase over 2017, according to International Data Corporation's (IDC) latest update of the Worldwide Semiannual IT Spending Guide: Line of Business. IDC forecasts that Western European business-funded spending will grow at a 5.5% compound annual growth rate (CAGR) between 2016 and 2021. In comparison, IT-funded technology spending will grow at a slower pace, with a 2.1% five-year CAGR — meaning that business managers will invest more aggressively over the years. While IT departments will remain the biggest funding source in 2018, funding 52% of technology investments, LOBs will catch up by 2021 and will fund 50% of the overall spending in Western Europe ($222 billion).
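For readers who want to see how those CAGR figures hang together, the arithmetic is simple. The short Python sketch below (our own, using the numbers quoted above; the helper name is arbitrary) applies the standard compound annual growth rate formula:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that takes
    start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Forecast figures quoted above (billions of US dollars):
lob_2018 = 191.0   # business-funded IT spending, Western Europe, 2018
lob_2021 = 222.0   # forecast business-funded spending, 2021

# Implied rate over the three forecast years 2018-2021. IDC's quoted
# 5.5% CAGR is measured over 2016-2021, so expect a slightly lower figure.
print(f"Implied 2018-2021 CAGR: {cagr(lob_2018, lob_2021, 3):.1%}")
# prints roughly 5.1%, consistent with the quoted 5.5% once rounding
# and the different base year are taken into account
```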
In 2018, IT departments will continue to fund most of the software, hardware, and IT services, while LOBs will be the primary funding source for business services. Business-funded spending on software will grow fast, driven by stronger investments in cloud software, but IT organizations will remain the top funding source in 2018 in Western Europe. Looking at hardware, IT spending on network equipment, IaaS, and server/storage will continue to be funded more by IT departments. On the other hand, there are hardware solutions that business managers control more, such as mobile phones, PCs, tablets, and peripherals.
"Business managers prefer to choose the IT solutions they need daily, to share content or access information, and the devices they can use in mobility. IT departments do not necessarily understand what LOBs need, consequently shadow IT is a way through which tech-savvy employees adopt solutions independently and fill the gaps that IT managers are not able to address," said Andrea Minonne, research analyst, IDC Customer Insights and Analysis.
IDC expects business-funded IT spending in 2018 to be larger than IT-funded spending in discrete manufacturing, healthcare, securities and investment services, and telecommunications. By 2021, banking, insurance, and utilities will also see LOB investments move ahead of IT purchases. The industries with the fastest growth in LOB spending are professional services (6.2% CAGR), retail (6.0%), and process manufacturing (6.0%). LOB IT spending is forecast to grow faster than IT-organization spending in all 16 industries covered in the spending guide.
At the end of 2017, worldwide customer relationship management (CRM) software revenue overtook that of database management systems (DBMSs), making CRM the largest of all software markets, according to Gartner, Inc.
Worldwide CRM software revenue amounted to $39.5 billion in 2017, overtaking DBMS revenue, which reached $36.8 billion in the same year. "In 2018, CRM software revenue will continue to take the lead of all software markets and be the fastest growing software market with a growth rate of 16 percent," said Julian Poulter, research director at Gartner.
Mr. Poulter added that the strong growth rate of CRM software revenue is driven in particular by the segments of lead management, voice of the customer and field service management, each of which is achieving more than 20 percent growth.
Gartner has witnessed the rise of marketing technology and a recent resurgence in sales technology in the CRM market. This growing market is attracting many new entrants. However, the major vendors offering CRM suites covering sales, commerce and service are showing stronger than average growth and are successful at cross-selling additional modules to existing customers.
"Organizations are keen to avoid silos of information and to obtain a 360 degree view of the customer, said Mr. Poulter."The 360 degree view allows better application of artificial intelligence to make the users of the CRM system more effective."
Cost of GDPR Compliance Expected to Increase Budget for Information Security, CRM and CX
CRM systems typically contain vast amounts of (sensitive) personal data and are kept for a considerable amount of time, making them a more likely source of noncompliance with the European General Data Protection Regulation (GDPR) than other applications.
The cost of GDPR compliance is expected to increase the existing budget for information security, CRM and customer experience (CX) in the next three years. Areas for technology investment include marketing technology, data loss prevention, security information and event management, and security consulting, especially in Western Europe.
"It is critical that organizations are compliant with GDPR as soon as possible, or at the very latest May 25, because when customers don't trust an organization's customer data protection, they put their own safeguards in place, like providing false data or closing accounts," said Bart Willemsen, research director at Gartner. This reduces an organization's chances of reaching the right customers with the right offers at the right time."
Mr. Willemsen added: "Poor CRM will lead to a privacy violation and a GDPR sanction. Application leaders need to enhance control over personal data usage throughout the data life cycle and safeguard processed personal data so that it is not used beyond the context of predefined and documented use cases."
The worldwide public cloud services market is projected to grow 21.4 percent in 2018 to total $186.4 billion, up from $153.5 billion in 2017, according to Gartner, Inc.
The fastest-growing segment of the market is cloud system infrastructure services (infrastructure as a service or IaaS), which is forecast to grow 35.9 percent in 2018 to reach $40.8 billion (see Table 1).
Gartner expects the top 10 providers to account for nearly 70 percent of the IaaS market by 2021, up from 50 percent in 2016.
"The increasing dominance of the hyperscale IaaS providers creates both enormous opportunities and challenges for end users and other market participants," said Sid Nag, research director at Gartner.
"While it enables efficiencies and cost benefits, organizations need to be cautious about IaaS providers potentially gaining unchecked influence over customers and the market. In response to multicloud adoption trends, organizations will increasingly demand a simpler way to move workloads, applications and data across cloud providers' IaaS offerings without penalties."
Table 1. Worldwide Public Cloud Service Revenue Forecast (Billions of U.S. Dollars)
Segment | 2017 | 2018 | 2019 | 2020 | 2021 |
Cloud Business Process Services (BPaaS) | 42.6 | 46.4 | 50.1 | 54.1 | 58.4 |
Cloud Application Infrastructure Services (PaaS) | 11.9 | 15.0 | 18.6 | 22.7 | 27.3 |
Cloud Application Services (SaaS) | 60.2 | 73.6 | 87.2 | 101.9 | 117.1 |
Cloud Management and Security Services | 8.7 | 10.5 | 12.3 | 14.1 | 16.1 |
Cloud System Infrastructure Services (IaaS) | 30.0 | 40.8 | 52.9 | 67.4 | 83.5 |
Total Market | 153.5 | 186.4 | 221.1 | 260.2 | 302.5 |
BPaaS = business process as a service; IaaS = infrastructure as a service; PaaS = platform as a service; SaaS = software as a service
Note: Totals may not add up due to rounding.
Source: Gartner (April 2018)
Software as a service (SaaS) remains the largest segment of the cloud market, with revenue expected to grow 22.2 percent to reach $73.6 billion in 2018. Gartner expects SaaS to reach 45 percent of total application software spending by 2021.
"In many areas, SaaS has become the preferred delivery model," said Mr. Nag. "Now SaaS users are increasingly demanding more purpose-built offerings engineered to deliver specific business outcomes."
Within the platform as a service (PaaS) category, the fastest-growing segment is database platform as a service (dbPaaS), expected to reach almost $10 billion by 2021. Hyperscale cloud providers are increasing the range of services they offer to include dbPaaS.
"Although these large vendors have different strengths, and customers generally feel comfortable that they will be able to meet their current and future needs, other dbPaaS offerings may be good choices for organizations looking to avoid lock-in," said Mr. Nag.
Although public cloud revenue is growing more strongly than initially forecast, Gartner still expects growth rates to stabilize from 2018 onward, reflecting the increasingly mainstream status and maturity that public cloud services will gain within a wider IT spending mix.
This forecast excludes cloud advertising, which was removed from Gartner's public cloud service forecast segments in 2017.
Two-thirds of organizations are not adequately addressing the infrastructure and operations (I&O) skills gaps that will impede their digital business initiatives, according to Gartner, Inc. Successful I&O organizations will need to implement vastly different roles and technologies during the next five years.
Gartner forecasts that, by 2019, IT technical specialist hires will fall by more than 5 percent. Moreover, by 2021, 40 percent of IT staff will hold multiple roles, most of which will be business-related rather than technology-related.
"What made I&O leaders successful in the past is not what will make them thrive in the future," said Hank Marquis, research director at Gartner. "Instead of focusing on the 'what' of I&O jobs — such as technical knowledge, education and training — I&O leaders need to shift their focus to the 'how' — the behavioral competencies required."
According to Mr. Marquis, IT operations organizations are being forced to redefine their roles and value propositions from those of technology providers, to become trusted advisors and differentiated business partners. The challenge is that most I&O professionals do not yet have the broad skillsets that organizations will need from them.
Gartner predicts that, by 2020, 75 percent of organizations will experience visible business disruptions due to I&O skills gaps, which is an increase from less than 20 percent in 2016. Given the lack of digital dexterity for hire, I&O leaders must begin by developing these skills with the talent they already have. Most companies don't have an accurate inventory of the available skills of their current IT workforces, so this must be a first step.
"Corporate digital business universities will eventually emerge to close the skills gap. Experience-based career paths with formal mentoring for and within I&O will become standard for individual development," said Mr. Marquis. "In the meantime, I&O leaders should work hand-in-hand with HR to shift away from position-based development, develop a tactical skills gap analysis, and utilize tools and methods for improving I&O skills in-house."
Skills gaps occur around emerging technology, as well as management
"The key to delivering digital value at scale is having the right people," said Mr. Marquis. "As well as the required skills, people must have the desire and aptitude to exploit existing and emerging technologies." Gartner predicts that, through 2020, 99 percent of artificial intelligence (AI) initiatives in IT service management will fail, due to the lack of an established knowledge management (KM) foundation.
"Hype about AI is growing, as consumers become familiar with virtual assistants using conversational platforms," said Chris Matchett, principal research analyst at Gartner. "I&O leaders responsible for the IT service desk are looking to exploit this to optimize IT support, but neither the technology nor the workplace is really ready to depend on virtual agents."
KM is essential for a chatbot or virtual support agent (VSA) to provide answers to business consumers, but the response can only repeat scripted solutions when based on existing data from a static knowledge base. VSAs without access to this rich source of knowledge cannot provide intelligent responses, forcing I&O leaders to establish or improve KM initiatives.
Before implementing chatbot or VSA technology, Gartner recommends establishing a foundation in KM by using techniques such as knowledge-centered service that focus on knowledge as a key asset.
Once chatbots and VSAs are in use, care should be taken to avoid conversational dead ends by automating escalation to traditional channels when knowledge responses fail to satisfy the issue. Logic should also be embedded into the chatbot to collect user feedback and identify the relevance of knowledge responses.
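By way of illustration, here is a minimal sketch, in Python, of the escalation-and-feedback pattern described above. The knowledge base, confidence threshold and helper functions are hypothetical stand-ins of our own, not any particular vendor's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnowledgeAnswer:
    text: str
    confidence: float  # 0.0-1.0 relevance score from the KM search

# Toy knowledge base: keyword -> scripted solution. A real deployment would
# search a knowledge-centred service (KCS) article index instead.
KB = {
    "password": KnowledgeAnswer("Reset it via the self-service portal.", 0.9),
    "vpn": KnowledgeAnswer("Reinstall the VPN client and retry.", 0.8),
}

CONFIDENCE_THRESHOLD = 0.7  # below this, treat the lookup as a failure

def search_knowledge_base(question: str) -> Optional[KnowledgeAnswer]:
    q = question.lower()
    for keyword, answer in KB.items():
        if keyword in q:
            return answer
    return None

def escalate_to_human(question: str) -> str:
    # Hypothetical stand-in for opening a ticket on a traditional channel.
    return "I've raised this with the service desk; an agent will follow up."

def record_feedback(question: str, helpful: bool) -> None:
    # Capture whether the knowledge response satisfied the issue, so KM
    # owners can spot stale or missing articles.
    print(f"feedback: question={question!r} helpful={helpful}")

def handle_request(question: str) -> str:
    answer = search_knowledge_base(question)
    if answer is None or answer.confidence < CONFIDENCE_THRESHOLD:
        # Avoid a conversational dead end: escalate automatically.
        return escalate_to_human(question)
    return answer.text

print(handle_request("I forgot my password"))  # scripted KM answer
print(handle_request("My laptop is on fire"))  # no match: escalated
```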
Internet of Things (IoT)-based attacks are already a reality. A recent CEB, now Gartner, survey found that nearly 20 percent of organizations observed at least one IoT-based attack in the past three years. To protect against those threats, organizations are opening their wallets: Gartner, Inc. forecasts that worldwide spending on IoT security will reach $1.5 billion in 2018, a 28 percent increase from 2017 spending of $1.2 billion.
"In IoT initiatives, organizations often don't have control over the source and nature of the software and hardware being utilized by smart connected devices," said Ruggero Contu, research director at Gartner. "We expect to see demand for tools and services aimed at improving discovery and asset management, software and hardware security assessment, and penetration testing. In addition, organizations will look to increase their understanding of the implications of externalizing network connectivity. These factors will be the main drivers of spending growth for the forecast period with spending on IoT security expected to reach $3.1 billion in 2021 (see Table 1)."
Table 1
Worldwide IoT Security Spending Forecast (Millions of Dollars)
Segment | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 |
Endpoint Security | 240 | 302 | 373 | 459 | 541 | 631 |
Gateway Security | 102 | 138 | 186 | 251 | 327 | 415 |
Professional Services | 570 | 734 | 946 | 1,221 | 1,589 | 2,071 |
Total | 912 | 1,174 | 1,506 | 1,931 | 2,457 | 3,118 |
Source: Gartner (March 2018)
Despite the steady year-over-year growth in worldwide spending, Gartner predicts that through 2020, the biggest inhibitor to growth for IoT security will come from a lack of prioritization and implementation of security best practices and tools in IoT initiative planning. This will hamper the potential spend on IoT security by 80 percent.
"Although IoT security is consistently referred to as a primary concern, most IoT security implementations have been planned, deployed and operated at the business-unit level, in cooperation with some IT departments to ensure the IT portions affected by the devices are sufficiently addressed," explained Mr. Contu. "However, coordination via common architecture or a consistent security strategy is all but absent, and vendor product and service selection remains largely ad hoc, based upon the device provider's alliances with partners or the core system that the devices are enhancing or replacing."
While basic security patterns have emerged in many vertical projects, they have not yet been codified into policy or design templates that would allow consistent reuse. As a result, technical standards for specific IoT security components are only now starting to be addressed across established IT security standards bodies, consortium organizations and vendor alliances.
The absence of "security by design" comes from a lack of specific and stringent regulations. Going forward, Gartner expects this trend to change, especially in heavily regulated industries such as healthcare and automotive.
By 2021, Gartner predicts that regulatory compliance will become the prime influencer for IoT security uptake. Industries having to comply with regulations and guidelines aimed at improving critical infrastructure protection (CIP) are being compelled to increase their focus on security as a result of IoT permeating the industrial world.
"Interest is growing in improving automation in operational processes through the deployment of intelligent connected devices, such as sensors, robots and remote connectivity, often through cloud-based services," said Mr. Contu. "This innovation, often described as Industrial Internet of Things (IIoT) or Industry 4.0, is already impacting security in industry sectors deploying operational technology (OT), such as energy, oil and gas, transportation, and manufacturing."
After a decline of 3 percent in 2017, worldwide shipments of devices — PCs, tablets and mobile phones — are forecast to return to growth (+1.3 percent) in 2018 and will total 2.3 billion units, according to Gartner, Inc.
While the performance of device shipments fluctuates year over year, end-user device spending continues to rise and is forecast to increase 7 percent in 2018. "Driven by better specifications, despite increasing costs, ASPs for devices rose by 9.1 percent in 2017, and this trend will continue through this year, when we expect prices to increase by 5.6 percent," said Ranjit Atwal, research director at Gartner.
Despite PC prices increasing 4.6 percent in 2018, PC unit demand, driven by business buying, is stabilizing through 2018. The traditional PC market, however, will decline 3.9 percent in units in 2018, and is expected to decline a further 3.6 percent during 2019 (see Table 1).
Table 1
Worldwide Device Shipments by Device Type, 2016-2019 (Millions of Units)
Device Type | 2016 | 2017 | 2018 | 2019 |
Traditional PCs (Desk-Based and Notebook) | 220 | 204 | 196 | 189 |
Ultramobiles (Premium) | 50 | 58 | 67 | 77 |
Total PC Market | 270 | 263 | 263 | 266 |
Ultramobiles (Basic and Utility) | 169 | 158 | 159 | 159 |
Computing Device Market | 439 | 420 | 422 | 425 |
Mobile Phones | 1,893 | 1,841 | 1,870 | 1,892 |
Total Device Market | 2,332 | 2,262 | 2,292 | 2,317 |
Source: Gartner (April 2018)
Regional Market Recovering at Different Pace
Economic upheaval has affected the demand for devices variably across regions. Argentina, Brazil, Japan and Russia collectively lost nearly 25 percent of their device shipments between 2013 and 2017. "While the rate of recovery is varied regionally, most of the device types are now showing growth across these countries," said Mr. Atwal.
In countries suffering significant economic turmoil, the extended lifetimes seen across all device types tend to persist. "As a result, as markets recover, they will fail to reach the unit volumes previously seen and will only recover to around 70 percent of those shipments by 2022," added Mr. Atwal.
Worldwide Mobile Phone Lifetimes to Increase Through 2020
Gartner forecasts that global mobile phone shipments will increase 1.6 percent in 2018, with total mobile phone sales amounting to almost 1.9 billion units. In 2019, smartphone sales are on pace to continue to grow, at 5 percent year over year.
Overall, Gartner estimates that mobile phone lifetimes will increase from 2017 through 2020. "Premium phone lifetimes are expected to increase the most in the near-term, as users look to hold onto these devices due to a lack of new technology impact, prohibiting upgrades," said Anshul Gupta, research director at Gartner.
Mobile phone lifetimes will start to reduce again beyond 2020. "By 2020, artificial intelligence (AI) capabilities on smartphones will offer a more intelligent digital persona on the device. Machine learning, biometrics and user behavior will improve the ease of use, self-service and frictionless authentications. This will allow smartphones to be more trusted than other credentials, such as credit cards, passports, IDs or keys," said Mr. Gupta.
Future AI capabilities, including natural-language processing and machine perception (reading all sensors), will allow smartphones to learn, plan and solve problems for users. "This is not just about making the smartphone smarter, but augmenting users by reducing their cognitive load and enabling a ‘Digital Me’ that sits on the device," said Mr. Gupta.
Research firm CCS Insight forecasts 22 million virtual reality (VR) and augmented reality (AR) headsets and glasses will be sold in 2018, with fivefold growth to 121 million units in 2022.
The latest forecast published by technology research firm CCS Insight shows the worldwide market for virtual reality (VR) and augmented reality (AR) head-worn devices growing by an average of 50 percent annually over the next five years. In 2022, a total of 121 million units will be sold, with a value of $9.9 billion.
George Jijiashvili, CCS Insight's senior analyst for wearables, comments, "Virtual reality headsets have been the main source of growth in unit sales to date, and we expect this will continue, particularly headsets that use a smartphone. However, we expect stand-alone headsets like the Oculus Go and HTC Vive Focus to ignite a new wave of growth that will help broaden the appeal of virtual reality, particularly with businesses and in education".
Gaming remains the primary reason for sales of VR devices and this is unlikely to change over the next few years. CCS Insight's consumer research shows that nearly 70 percent of customers who own a dedicated VR headset such as an Oculus Rift, HTC Vive or Sony PlayStation VR have bought games for it, and more than half of smartphone VR headset owners have bought games for their device.
Jijiashvili notes, "There's a growing array of exciting new content being developed. We were encouraged to see in our latest consumer survey that virtual tourism, remote participation in events such as music concerts, and virtual social interactions are all emerging as further uses for virtual reality. Watching video is also proving popular, particularly on smartphone-based headsets".
AR smart glasses also feature prominently in CCS Insight's forecast. As Jijiashvili states, "Given augmented reality is one of the hottest new technology areas in smartphones, it's not surprising that interest is mounting in augmented reality glasses. Billions of dollars are being invested in this technology and we've seen significant improvements in the size, weight and design of smart glasses over the past 12 months".
However, CCS Insight's forecast signals disappointingly low adoption of smart glasses to date, particularly among businesses. CCS Insight estimates that just 24,000 AR smart glasses were purchased by business in 2017. Jijiashvili comments, "The potential of this technology is clear, but so far most companies are evaluating a few units to see how the technology fits into their operations".
However, CCS Insight predicts that over the next few years businesses will move from limited trials to wider deployment, with sales reaching a milestone of 1 million units in 2022.
CCS Insight expects consumer smart glasses to develop differently from enterprise-centric products. The research firm believes that consumer demand for smart glasses could build in a similar manner to other new smart products. Jijiashvili notes, "We're encouraged by the technology developments in smart glasses for consumers. Products such as Intel's Vaunt glasses are a clear signal of the direction these devices are moving in, with a design little different from a pair of standard prescription glasses. It only takes a big company like Apple to jump into the market and we could be looking at a market of millions of smart glasses in no time at all".
CCS Insight's latest forecast reflects the positive outlook for consumer smart glasses, with sales volume of 4.5 million units predicted in 2022.
Global business value derived from artificial intelligence (AI) is projected to total $1.2 trillion in 2018, an increase of 70 percent from 2017, according to Gartner, Inc. AI-derived business value is forecast to reach $3.9 trillion in 2022.
The Gartner AI-derived business value forecast assesses the total business value of AI across all the enterprise vertical sectors covered by Gartner. There are three different sources of AI business value: customer experience, new revenue, and cost reduction.
"AI promises to be the most disruptive class of technologies during the next 10 years due to advances in computational power, volume, velocity and variety of data, as well as advances in deep neural networks (DNNs)," said John-David Lovelock, research vice president at Gartner. "One of the biggest aggregate sources for AI-enhanced products and services acquired by enterprises between 2017 and 2022 will be niche solutions that address one need very well. Business executives will drive investment in these products, sourced from thousands of narrowly focused, specialist suppliers with specific AI-enhanced applications."
AI business value growth shows the typical S-shaped curve pattern associated with an emerging technology. In 2018, the growth rate is estimated to be 70 percent, but it will slow down through 2022 (see Table 1). After 2020, the curve will flatten, resulting in low growth through the next few years.
Table 1. Forecast of Global AI-Derived Business Value (Billions of U.S. Dollars)
| 2017 | 2018 | 2019 | 2020 | 2021 | 2022 |
Business Value | 692 | 1,175 | 1,901 | 2,649 | 3,346 | 3,923 |
Growth (%) | | 70 | 62 | 39 | 26 | 17 |
Source: Gartner (April 2018)
"In the early years of AI, customer experience (CX) is the primary source of derived business value, as organizations see value in using AI techniques to improve every customer interaction, with the goal of increasing customer growth and retention. CX is followed closely by cost reduction, as organizations look for ways to use AI to increase process efficiency to improve decision making and automate more tasks," said Mr. Lovelock. "However, in 2021, new revenue will become the dominant source, as companies uncover business value in using AI to increase sales of existing products and services, as well as to discover opportunities for new products and services. Thus, in the long run, the business value of AI will be about new revenue possibilities."
Breaking out the global business value derived by AI type, decision support/augmentation (such as DNNs) will represent 36 percent of the global AI-derived business value in 2018. By 2022, decision support/augmentation will have surpassed all other types of AI initiatives to account for 44 percent of global AI-derived business value.
"DNNs allow organizations to perform data mining and pattern recognition across huge datasets not otherwise readily quantified or classified, creating tools that classify complex inputs that then feed traditional programming systems. This enables algorithms for decision support/augmentation to work directly with information that formerly required a human classifier," said Mr. Lovelock. "Such capabilities have a huge impact on the ability of organizations to automate decision and interaction processes. This new level of automation reduces costs and risks, and enables, for example, increased revenue through better microtargeting, segmentation, marketing and selling."
Virtual agents allow organizations to reduce labor costs as they take over simple requests and tasks from call center, help desk and other human service agents, while handing the more complex questions over to their human counterparts. They can also provide uplift to revenue, as in the case of roboadvisors in financial services or upselling in call centers. As virtual employee assistants, virtual agents can help with calendaring, scheduling and other administrative tasks, freeing up employees' time for higher value-add work and/or reducing the need for human assistants. Virtual agents account for 46 percent of the global AI-derived business value in 2018, falling to 26 percent by 2022 as other AI types mature and contribute to business value.
Decision automation systems use AI to automate tasks or optimize business processes. They are particularly helpful in tasks such as translating voice to text and vice versa, processing handwritten forms or images, and classifying other rich data content not readily accessible to conventional systems. As unstructured data and ambiguity are the staple of the corporate world, decision automation — as it matures — will bring tremendous business value to organizations. For now, decision automation accounts for just 2 percent of the global AI-derived business value in 2018, but it will grow to 16 percent by 2022.
Smart products account for 18 percent of global AI-derived business value in 2018, but will shrink to 14 percent by 2022 as other DNN-based system types mature and overtake smart products in their contribution to business value. Smart products have AI embedded in them, usually in the form of cloud systems that can integrate data about the user's preferences from multiple systems and interactions. They learn about their users and their preferences to hyperpersonalize the experience and drive engagement.
Vote Now for DCS Awards 2018 – online voting closes 11 May.
With thousands of votes already cast for this year’s DCS Awards, the competition is hotting up. Online voting stays open until 17.30 on Friday 11 May so make sure you don’t miss out on the opportunity to express your opinion on the companies, products and individuals that you believe deserve recognition as being the best in their field.
Voted for by the readership of the Digitalisation World portfolio of titles, the Data Centre Solutions (DCS) Awards reward products, projects and solutions, and honour the companies, teams and individuals operating in the data centre arena.
Winners of this year’s 21 categories will be announced at a gala ceremony taking place at London’s Grange St Paul’s Hotel on 24 May.
All voting takes place online and voting rules apply. Make sure you place your votes by 11 May, when voting closes, by visiting: http://www.dcsawards.com/voting.php
The full 2018 shortlist is below:
Data Centre Energy Efficiency Project of the Year
Romonet supporting Fujitsu UK & Ireland
Riello UPS supporting the Rosebery Group
EcoRacks Data Centre supported by Asperitas
London One Data Centre by Kao Data
New Design/Build Data Centre Project of the Year
Inzai 2 by Colt Data Centre Services
University of Exeter supported by Keysource
Data Hub, Biel supported by Schneider Electric
Kao Data London One supported by JCA Engineering
Data Centre Consolidation/Upgrade/Refresh Project of the Year
EcoRacks supported by Asperitas
Willis Towers Watson supported by Keysource
Generator Control Panel Replacement by CBRE DC Solutions
Consolidation and expansion by Africa Data Centres
New data halls in Corsham and Farnborough by CBRE, ARK and Corning Optical Communications
Data Centre Fire Protection by Bryland Fire Protection Ltd
Data Centre Power Product of the Year
Liebert® APM 30-600 kW by Vertiv
Delta 500kVA UPS by Eltek Power
Integrated Terminal Lug Temperature Sensors by Starline UE
Micro Data Center by Optimum Data Cooling
Data Centre PDU Product of the Year
Intelligent Power Distribution Unit (iPDU) Family by Excel Networking Solutions
SmartZone G5 Intelligent PDUs by Panduit Europe
High Density Outlet Technology (HDOT) by Server Technology, a brand of Legrand
Data Centre Cooling Product of the Year
Liebert® PCW High Chilled Water Delta T by Vertiv
En-10 DX by Optimum Data Cooling
1U immersion cooled server by Iceotope Technologies
Oasis Indirect Evaporative Cooler by Munters
Data Centre Facilities Automation and Management Product of the Year
Nlyte 9.0 Data Center Infrastructure Management (DCIM) Solution by Nlyte Software
Micro Data Center by Optimum Data Cooling
Diris Digiware Power Metering and Monitoring System by Socomec
Data Centre Safety, Security & Fire Suppression Product of the Year
303 ECO SSF cabinet by Dataracks
IG55 Extinguishing System by Bryland Fire Protection Ltd
Data Centre Cabling & Connectivity Product of the Year
4K HDMI Single Display KVM over IP Extender by ATEN Technology
EDGE™ Mesh Modules by Corning Optical Communications
LABACUS INNOVATOR SOFTWARE & Fox-in-a-Box by Silver Fox Ltd
Data Centre ICT Storage Product of the Year
Anti-Ransomware Data Protection by Asigra
GridBank's Enterprise Data Management Platform by Tarmin
Computational Storage Solutions by Scaleflux Computational Storage
JovianDSS by Open-E
Cohesity DataPlatform by Cohesity
StorPool Storage by StorPool
Data Centre ICT Security Product of the Year
Automated Endpoint Security and Incident Response by Secdo
Cloud Protection Manager by N2W Software
SecuStack by SecuNet Security Networks
Data Centre ICT Management Product of the Year
Tarmin GridBank by Tarmin
VirtualWisdom 5.4 by Virtual Instruments
Ipswitch WhatsUp Gold® 2017 Plus by Ipswitch
HC3 platform by Scale Computing
EcoStruxure IT by Schneider Electric
ParkView by Park Place Technologies
Data Centre Cabinets/Racks Product of the Year
Environ CL Series by Excel Networking Solutions
Knürr DCD Rear Door Heat Exchanger by Vertiv Integrated Systems GmbH
303 ECO SSF cabinet by Dataracks
HyperPod Rack Ready System by Schneider Electric
Data Centre ICT Networking Product of the Year
PORTrockIT by Bridgeworks
Unity EdgeConnect by Silver Peak
Secure Cloud-Native Networking by Meta Networks
Data Centre Hosting/co-location Supplier of the Year
Workspace Technology
Colt Data Centre Services
Volta Data Centres
UKFast.Net Limited
Rack Centre
LuxConnect
Green Mountain
Data Centre Cloud Vendor of the Year
Zerto
N2W Software
PhoenixNAP
Zadara
Claranet
Asigra
Data Centre Facilities Vendor of the Year
Nlyte Software
Dataracks
Asperitas
Excellence in Data Centre Services Award
Rack Centre
Park Place Technologies
4D Data Centres Ltd
Data Centre Energy Efficiency Initiative of the Year
EU Horizon 2020 EURECA Project
Green Mountain
DAMAC
Data Centre Innovation of the Year
Cloud Protection Manager by N2W Software
ParkView by Park Place Technologies
Green Peak – Dashboard by Green Mountain
HyperPod Rack Ready System by Schneider Electric
Data Centre Individual of the Year
Anuraag Saxena, Ekkosense
Konkorija Trifonova, CBRE
Ole Sten Volland, Green Mountain
Dan Kwach, East Africa Data Centre
Although GDPR is probably the best-known example, a wave of regulation and compliance legislation is being enacted across the world, and particularly in Europe, as regulators get to grips with the modern data economy. This can mean conflicting requirements in some territories, or confusing messages for customers and organisations.
The trend in previous years towards a reduction in regulation seems to have ground to a halt. And while the tone and mood of new rules such as GDPR are seen as persuasive and “nudging” by those at the senior levels of policy-making, their actual implementation could well see a “big stick” approach by local and national lawmakers.
GDPR itself already promises swingeing fines, and there is every chance that prominent, and perhaps not-so-prominent, companies and organisations may be made examples of with some headline-grabbing penalties. Suppliers of managed services will have many new responsibilities and may well find themselves in the firing line as the legal implications take their course.
This is the main point behind the latest speaker announcement for the European Managed Services & Hosting Summit 2018, to be held in Amsterdam on May 29, 2018. A full session will be devoted to the issue of working across the rising tide of compliance in Europe. With all indications that many MSPs are looking to expand by partnering or acquiring operations in other geographies, this will be an essential item for discussion at senior levels. Any senior figure in a managed services company will need to be familiar with both the processes and implications of the new levels and nature of compliance requirements in any territory they are working in, and beyond.
GDPR is not the only game in town, says Ieva Andersone, a senior associate at Sorainen, a major legal firm in the IT industry based in the Baltics, a region with a high degree of interest in pan-European business relations and one of the fastest-growing regions in IT generally and in managed services adoption. Parts of Europe, even parts of countries, will have their own local rules or GDPR interpretations, she will argue, which managed services companies will need to be aware of, and which may well apply to IT projects with connections outside their core territory. An experienced, Cambridge-educated lawyer working across multiple cultures and markets in Europe, she will discuss the nature of the regulations, their intentions and direction, and how they may affect suppliers of services, including managed services, in unexpected ways.
With plenty of discussion points on how to keep the MSP business on the right side of the law, and with guidance as to strategies to adopt, the annual Managed Services and Hosting Summit (MSHS) on May 29 in Amsterdam always aims to use experts to advise European MSPs on these major issues. The first keynote presentation, from Gartner, will address the key issue of how MSPs can differentiate themselves in an increasingly competitive market.
This MSHS event offers multiple ways to get answers: from plenary-style presentations from experts in the field to demonstrations; from more detailed technical pitches to wide-ranging round-table discussions with questions from the floor. There is no excuse not to come away from this key event with ideas for a strategy to keep the business out of trouble.
One of the most valuable parts of the day, previous attendees have said, is the ability to discuss issues with others in similar situations, and attendees are all hoping to learn from direct experience.
In summary, this is a management-level event, held in English, designed to help MSP and channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships, while keeping up with the latest compliance and legal requirements in multiple markets.
Registration is free-of-charge for qualifying delegates - i.e. director/senior management level representatives of Managed Service Providers, Systems Integrators, Solution VARs and channels. More details: http://www.mshsummit.com/amsterdam/register.php
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, the University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey. The programme is nearly finalised (full details via the website link at the end of the article), with some top-class speakers and chairpersons lined up to deliver what is probably 2018’s best opportunity to get up to speed with what’s heading to a data centre near you in the very near future!
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
We’re delighted to announce that Adam Beaumont, Visiting Professor of Cybersecurity at the University of Leeds, and CEO of aql, will be delivering the Simon Campbell-Whyte Memorial Lecture. Has IT security ever been so topical? What a great opportunity to hear one of the industry’s leading cybersecurity experts give his thoughts on the issues surrounding cybersecurity in and around the data centre.
We’re equally delighted to reveal that key personnel from Equinix, including MD Russell Poole, will be delivering the Hybrid Data Centre keynote address at both the Manchester and Surrey events. If Adam knows about cybersecurity, it’s fair to say that Equinix are no strangers to the data centre ecosystem, where the hybrid approach is gaining traction in so many different ways.
Completing the keynote line-up will be John Laban, European Representative of the Open Compute Project Foundation.
Alongside the keynote presentations, the one-day DCT events will include:
A DATA strand that features two workshops - one on Digital Business, chaired by Prof Ian Bitterlin of Critical Facilities, and one on Digital Skills, chaired by Steve Bowes-Phipps of PTS Consulting.
Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not as yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high-profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed, and, hopefully, come up with some helpful solutions.
The CENTRE strand features two workshops: one on Energy, chaired by James Kirkwood of Ekkosense, and one on Hybrid DC, chaired by Mark Seymour of Future Facilities.
Energy supply and cost remain a major part of the data centre management piece, and this track will look at the technology innovations affecting the supply and use of energy within the data centre. Fewer and fewer organisations run a pure-play in-house data centre estate; most now make use of some kind of colo and/or managed services offering. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So in-house and third-party data centre facilities, combined with a mixture of centralised, regional and very local sites, make for a very new and challenging data centre landscape. As for connectivity, feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast-moving world of networks, telecoms and the like.
The TRANSFORMATION strand features workshops on Automation (AI/IoT), chaired by Vanessa Moffat of Agile Momentum, and on The Connected World, together with a keynote on Open Compute from John Laban, the European representative of the Open Compute Project Foundation.
Automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day to day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70-minute workshops to attend and take part in an interactive discussion led by an industry chair and featuring panellists - specialists and protagonists - in the subject. The workshops ensure that delegates not only earn valuable CPD accreditation points but also have an open forum in which to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
Every business wants to report higher levels of customer satisfaction and operational efficiency, and DevOps is one of many initiatives that organisations turn to in order to deliver this.
By Marianne Calder, VP EMEA, Puppet.
However, perhaps the more under-represented initiative to date is leadership style. A good leader helps teams to work better – delivering better products and solutions – and in turn improves business productivity and success. And if the promise of better results isn’t enough: by 2020, half of the CIOs who have not transformed the capabilities of their teams will be removed from their digital leadership positions.
Key to this is supporting willingness to experiment and innovate within a team, as well as establishing and supporting improved business cultural norms, and implementing technologies and processes to enable developer productivity.
Engaged leaders are essential for a successful DevOps culture, which is something that many businesses recognise. However, the DevOps community has been guilty of vilifying its leadership, setting itself up for challenges by having middle managers who prevent or delay the changes necessary for the business to improve its IT and organisational performance. In DevOps, as much as anywhere else (and possibly more so), it is important to have the right leaders, with the right attitudes, directing the business. Leaders typically have the authority and budget to make the large-scale changes needed to make DevOps transformation a business reality: they set the tone of the organisation, reinforce desired cultural norms and provide visible support when the transformation is under way.
What makes a good DevOps leader?
DevOps success is far easier to achieve when there are effective leaders supporting the team, notwithstanding occasional reports of successful grassroots DevOps initiatives. This is where transformational leadership naturally comes in.
Leaders who follow a transformational leadership style inspire and motivate their teams to achieve higher performance by appealing to their values and sense of purpose. This resonates particularly well with millennials, a growing proportion of the workforce who are driven by purpose as much as by financial gain: 86% of this demographic would consider leaving an employer whose values no longer met their expectations, according to the PwC Millennials at Work report. In appealing to values and purpose, transformational leadership facilitates wide-scale organisational change.
Teams that work towards a common goal are best led through vision, values, communication, example-setting and evident care for the team’s personal needs. Transformational leaders use these attributes to help team members take a personal interest in the needs and objectives of the organisation as a whole.
These characteristics are also highly correlated with IT performance. High-performing teams usually have leaders with the strongest behaviours across all dimensions: vision, inspirational communication, intellectual stimulation, supportive leadership, and personal recognition. The positive (or negative) influence of leadership flows all the way through to IT performance and organisational performance.
Building a bedrock for success
DevOps is an important philosophy for forward-looking executives, who acknowledge the link between DevOps and high organisational performance. Because DevOps is more a culture and a set of processes than a technology, management at all levels (even outside the core IT team) needs to understand what it means to adopt DevOps practices as business practices.
Companies cannot buy DevOps, and they cannot adopt it as a one-off practice. It is a culture in which continuous questioning, experimenting and learning are part of everyone’s day-to-day activities. Transformational leadership builds the core foundation for this culture to thrive: a culture where it is everyone’s job to challenge the status quo, continually bettering the organisation, its developments and its practices.
A team that constantly reinvents and improves the organisation is a great asset to any business: it prevents the complacency and stasis that can be the death of a business in the modern fast-moving world. However, creating this kind of team requires a leader who specialises in inspiring team cohesion, supporting effective communication and delivering high performance to achieve key business outcomes. A transformational leader may not be a turnkey quick fix, but is invaluable in providing the stable foundations for a successful DevOps team and, in time, in proving a business’s digital leadership.
Useful wisdom for life and enterprise IT strategy.
By Jim Crook, Director of Marketing, CTERA Networks.
I’m not much for life advice, but I am an enthusiastic proponent of enterprise file services that make your organisational data accessible and secure. And an important aspect of enterprise file services is ensuring the uptime of users and offices around the world, with proper methods in place to recover from any scenario: fire, hardware failure, or, increasingly, a ransomware attack.
It’s why more and more organisations are looking to cloud-based disaster recovery options that can help them recover instantly and easily from any productivity-killing office outage and ensure their users are uninterrupted. The Carlyle Group, one of the world’s largest private equity firms, uses cloud-based technologies to ensure the uptime of dozens of offices and the productivity of thousands of users around the world. Similarly, global entertainment leader Live Nation modernised its branch disaster recovery infrastructure with modern, cloud-based DR tools that maintain a secure business continuity agenda.
The move to the cloud for DR purposes has been driven by the limitations of legacy approaches, which can be complex, slow, costly, or some combination of all three. An outage of a branch office file server, for example, can mean that business halts until a tape backup is successfully restored (fingers crossed) or a replacement server arrives.
But modern cloud failover approaches now almost all include a hybrid option in which organisational data and storage infrastructure can reside on-premises while data is also replicated to the cloud. Carlyle Group users, for example, access their office files through a hybrid backup appliance that also serves as a file server; any changes to files stored on the appliance are securely synced to the replicated copies stored in the cloud.
If that local appliance ever becomes unavailable due to a disaster scenario, Carlyle IT administrators can redirect network drives to the cloud’s version of files. With file reads and writes seamlessly redirected to the cloud, users might not even notice the change, and the office doesn’t come to a standstill while you reboot your gateway or otherwise fix the situation.
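As a rough illustration, the failover decision might look something like the sketch below. The hostnames are placeholders, the health check is deliberately crude, and the drive-repointing helper stands in for whatever mechanism (a DFS namespace update, say) a given environment actually uses:

```python
import socket

# Hypothetical endpoints; a real deployment would use its own appliance
# hostname and cloud file service address.
APPLIANCE_HOST = "branch-gateway.example.local"
LOCAL_SHARE = r"\\branch-gateway.example.local\shares"
CLOUD_SHARE = r"\\cloud-files.example.com\shares"

def appliance_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Crude health check: can we open the SMB port on the local appliance?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def repoint_drives(target: str) -> None:
    # Hypothetical stand-in for updating a DFS namespace target (or a login
    # script) so users' mapped drives resolve to `target`.
    print(f"Mapped drives now resolve to {target}")

def failover_check() -> None:
    if appliance_reachable(APPLIANCE_HOST):
        repoint_drives(LOCAL_SHARE)
    else:
        # Appliance down: redirect reads and writes to the cloud replica so
        # the office keeps working while the gateway is repaired.
        repoint_drives(CLOUD_SHARE)

failover_check()
```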
As a result of organisations adopting cloud for DR and other use cases, users have essentially ubiquitous access to a file server using both traditional methods (such as CIFS/SMB) and modern methods (such as EFSS). Until recently, managing file system structures and user file access permissions (via ACLs) both locally on file servers and in the cloud was challenging, but it is an important consideration in disaster recovery. Access Control Lists (ACLs) are used by operating systems to restrict access permissions (read, read/write, execute, etc.) on a folder or file to certain users or groups; NT-ACLs are the ACLs used by the Windows family of operating systems, including server and consumer editions since 2000.
Secure cloud failover technologies are NT-ACL-aware, enforcing NT-ACLs both in the cloud and in the office. This means users have exactly the same file access and file shares in the cloud as they did locally. Why does this matter? Let’s say an IT person changes a user’s access permissions on an office’s local file server to restrict the user from seeing confidential material. If the file permissions for the cloud are managed separately from the Active Directory/NT-ACL permissions, that local change will not be reflected in the cloud, and the user could still have access to sensitive data when reaching files through the cloud in a DR scenario.
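The drift problem is easy to see in miniature. In the sketch below, both ACL readers are hypothetical stand-ins; the point is simply that an entry present only in the cloud copy represents access that would survive a failover:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclEntry:
    principal: str  # user or group, e.g. "DOMAIN\\finance"
    rights: str     # e.g. "read", "read/write"

def local_acl(path: str) -> set[AclEntry]:
    # Hypothetical: NT-ACL entries enforced on the office file server,
    # where IT has just restricted the interns to read-only.
    return {AclEntry("DOMAIN\\finance", "read/write"),
            AclEntry("DOMAIN\\interns", "read")}

def cloud_acl(path: str) -> set[AclEntry]:
    # Hypothetical: permissions on the cloud copy. If these are managed
    # separately from AD/NT-ACLs, they can drift, as here.
    return {AclEntry("DOMAIN\\finance", "read/write"),
            AclEntry("DOMAIN\\interns", "read/write")}

def audit(path: str) -> None:
    # Access that exists only in the cloud would silently survive a DR
    # failover: exactly the exposure described above.
    for entry in cloud_acl(path) - local_acl(path):
        print(f"WARNING: {path}: {entry.principal} has '{entry.rights}' "
              "in the cloud but not locally")

audit("/shares/confidential")
```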
What the cloud is bringing to these organisations is essentially zero office downtime. Offices with traditional file server infrastructure can experience hours, days, or even weeks of downtime as they ship new hardware to an affected office, restore backups, etc. Offices with cloud failover can resolve these issues instantaneously: global IT admins can instantly redirect users to the cloud, ensuring uninterrupted office operations and user productivity.
Cloud failover also means significant cost savings and fewer headaches, as there is no need to deploy and manage expensive storage appliances running in parallel on-site, or to maintain a tape backup library. The economics of the cloud and cloud-enabled infrastructure enable organisations to minimise the TCO not only of DR components but of overall storage spend as well.
In addition to rapid recovery during gateway outages, modern office appliances are architected to ensure complete file availability even during internet connection outages. By maintaining a copy of the file system at the edge of the network – on the appliance – organisations can access and collaborate on files in the event of an internet outage. Once the internet connection is restored, any file changes made will be synced between the office and the cloud.
Cloud-based DR strategies are an effective means of improving office productivity and increasing the efficiency of your branch storage infrastructure. I’ll leave you with some bonus wisdom from Ben Franklin that applies to life and IT disaster recovery alike: lost time is never found again.
Break down internal barriers to deliver highly tailored products and services.
By Colin Masson, global industry director of manufacturing solutions at Microsoft.
It’s easy to get obsessed with technology – it is the key enabler of today’s revolution – but the harder part of the transformation is cultural.
It’s about putting the customer experience and their business outcomes at the centre of everything. That means realigning engineering, manufacturing and the supply chain around delivering a world-class sales and service experience. It means new thinking about optimising customer satisfaction and loyalty metrics such as Customer Satisfaction Scores or Net Promoter Scores rather than production efficiency.
The list of technologies that can help this realignment is endless. Manufacturers should be investing in, or at least exploring, the internet of things (IoT) and industrial automation, cloud, big data, artificial intelligence (AI), robotics, 3D printing and more. Microsoft partner Columbus recently published a major industry report exploring how these emerging technologies fit into the wider digital transformation push of manufacturers, as part of a ‘Manufacturing 2020’ strategy.
But at the heart of the revolution is data. We are putting telemetry on everything, creating a data-driven culture with a single version of the truth. Fundamentally the fourth industrial revolution is being powered by the ubiquity of IoT data coming from sensors in the factory combining with data pouring in from the outside world, such as the wealth of information being generated by smart cities, smart buildings, smart offices and even connected cars. Choosing an IoT platform is a big decision; start by identifying one that can match the scale of your ambitions.
There is another convergence that is driving business transformation. Inside the firm, the digital technologies used by IT, operations and engineering are converging. By embracing the digital transformation, manufacturers are empowering employees to be more productive in modern workplaces with apps and intelligent working methods such as the use of cobots, where employees and robots co-operate “shoulder to shoulder”.
It’s also about optimising operations through smart factories and supply chain solutions powered by intelligent edge and cloud. It means the transformation of products and business models, using insights from smart connected products, advances in modelling such as “digital twins”, and more agile end-to-end business solutions.
We see manufacturers, and individual businesses within manufacturing organisations, at various stages in their journey to servitisation, transforming products into services. Some are driving more customer engagement through traditional call centres or differentiating their product through (sometimes IoT-connected) field service. Increasingly, though, we are also seeing the transition to full “product-as-a-service”, where they sell flying hours instead of jet engines; car coatings rather than paint; water savings rather than treatment plants; and cleaning services rather than cleaning chemicals.
This journey requires that they break down the silos between internal systems such as ERP, CRM, PLM, and SCM. Instead, they need to connect “things” – people, data and processes – with more agile systems of intelligence that can keep pace with the new speed of business inherent in delivering highly tailored products and services.
Manufacturers need smart factories that can make their smart products and be at the core of much more agile supply chains. They also need intelligent shop floor solutions and business apps that augment people and address the growing skills gap in manufacturing.
IoT platforms are a key enabler, yes. But we also need big data and AI on top to provide the insights that line workers and business decision-makers need. We need both intelligent cloud and intelligent edge technologies to power robots and cobots in the factory of the future.
Big data also needs big compute to accelerate the product innovation unleashed by enhanced insights into customers, enabled by the ability to iterate through digital twins of devices, product designs, supply chains, and customer usage in digital cities. Can your legacy ERP, CRM, PLM and SCM systems keep up with the new speed of business?
At the heart of this digital world, however, lies the simplicity of customer insight. Whether you’ve got a smart product that can beam back data on customer use, or you use traditional client engagement channels, it’s those insights that will differentiate your future products and services – and decide the success or failure of your digital transformation.
Everything you need to know to take advantage of the tech.
By Martyn Davies, director, Rocket Software.
The digitisation process has been taking place for many decades now; we’ve seen books turned into Kindle e-books and CDs turned into Apple Music audio files. Looking to the business world, the manufacturing industry has been the first to reap the rewards of similar digitisation. While not in precisely the same vein as the consumer applications, manufacturers across the globe have adopted digital twinning technology to take existing physical assets and create an exact digital replica. For example, major brands such as GE and Siemens are developing virtual copies of a range of business assets. The value of digital twinning also lies at the start of a product’s life; in the design stage, manufacturers are employing the technology to build virtual and then physical prototypes, and lay the groundwork for future products.
Digital twinning has grown in importance for manufacturers as the technology has demonstrated that it helps to improve the development, efficiency and profitability of high-value capital assets, such as production line robotics or power station infrastructure. By seeing exactly how a project will look before any building or implementation work has begun, adjustments can be made to improve the real-life outcomes. This can allow project teams to understand where potential implementation failures might occur, whether that’s within a piece of hardware, infrastructure or software. Moreover, a virtual version of your project can be used to generate real-time insight into how it’s progressing, to ensure timely responses to any issues, which helps to reduce costs and increase operational efficiency.
However, the process does present a significant challenge for organisations. In particular, it creates a great deal of data. As IT leaders are already well aware, the growth of data-generating technologies is putting significant pressure on companies when it comes to information processing, storage and management. Digital twinning will add to these pressures. Businesses need to pay attention to their current capacity to store and manage data if they are to take advantage of the technology.
How do digital twins work?
To get the most from digital twin technology, we first need to understand how it works. Essentially, it is an application of IoT: smart components with in-built sensors are integrated with physical items. The data those sensors collect covers everything from operating conditions to how well the physical assets or products are performing.
Once all this data is processed, it can be mapped out and analysed within the context of the wider business, to provide decision-makers with valuable information about where efficiency can be gained or bottlenecks are being created.
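As a toy illustration of the mechanics (not any particular vendor’s platform), a digital twin can be thought of as an object that mirrors a physical asset by ingesting its telemetry and flagging readings outside tolerance. The asset, thresholds and field names below are invented.

    class PumpTwin:
        """Virtual replica of a pump, updated from sensor telemetry."""

        LIMITS = {"temperature_c": (5, 80), "vibration_mm_s": (0, 7.1)}

        def __init__(self, asset_id):
            self.asset_id = asset_id
            self.state = {}   # last known reading per sensor field

        def ingest(self, reading):
            """Update the twin from one telemetry message (a dict of readings)."""
            self.state.update(reading)
            for field, (lo, hi) in self.LIMITS.items():
                value = reading.get(field)
                if value is not None and not lo <= value <= hi:
                    print("%s: %s=%s outside [%s, %s]" % (self.asset_id, field, value, lo, hi))

    twin = PumpTwin("pump-017")
    twin.ingest({"temperature_c": 92.4, "vibration_mm_s": 3.2})  # flags temperature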
Digital twins in real life
There are numerous applications that already exist which are not a million miles away from digital twinning. For around 30 years, engineers have used 3D modelling, analysis and process simulation on everything from consumer products through to complex spacecraft.
In the airline industry, engineers are already testing virtual models of planes to predict when parts will start to malfunction. Anticipating problems with the planes means companies can swap out parts in advance and minimise the ground time of the fleet. This is crucial to airline profits given that every minute a plane is on the ground, it loses money. With the most basic commercial airplanes selling upward of $82m or leasing for around $375,000 a month, they need to be flying to make the investment pay off.
Over in the green energy sector, GE is using digital twins to get a holistic picture of major developments such as wind farms. A virtual copy of each turbine informs how they will work together as a configuration. When the wind farm is up and running, engineers can then monitor and control the turbines, predict power output and adjust settings accordingly. GE aims to generate 20% efficiency gains from the data generated by digital twins.
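The physics behind such a prediction is straightforward, which is why a twin fed with live wind data can forecast output. Here is a back-of-envelope version using the standard wind-power formula P = ½ρACpv³, with illustrative turbine dimensions rather than GE’s actual figures.

    import math

    rho = 1.225           # air density at sea level, kg/m^3
    rotor_diameter = 120  # metres (illustrative)
    cp = 0.40             # power coefficient (the Betz limit is ~0.593)
    wind_speed = 9.0      # m/s, as reported by the turbine's sensors

    area = math.pi * (rotor_diameter / 2) ** 2
    power_watts = 0.5 * rho * area * cp * wind_speed ** 3
    print("Predicted output: %.2f MW" % (power_watts / 1e6))   # ~2.02 MW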
2018 and beyond
The success of digital twins in manufacturing to date suggests that we’ll see much wider adoption of the technology across other sectors. In fact, Gartner expects half of large industrial companies to be using digital twins by 2021. Additionally, IDC has predicted that billions of things will have digital replicas within five years. This will go a long way towards helping product developers and data scientists collaborate, and ultimately will break down silos that are impeding business efficiency.
AI and machine learning within DevOps is still in its infancy, despite the technology being readily available and an increase in vendors driving awareness of its benefits within a DevOps-minded enterprise.
By Nigel Budd, Head of Client Services (EMEA), Clearvision CM.
Early adopters tend to be the more mature DevOps organisations that are using the benefits of AI within their established software engineering culture and organisational feedback loop to not only monitor more data but to gain greater ‘intelligence’ from its analysis.
We do expect uptake among smaller-scale DevOps enterprises to increase in the coming year and it’s certainly something that we’re advocating as a beneficial part of the DevOps mindset/toolchain.
At the heart of the DevOps movement is the ability to quickly gather and act on feedback gained from lots and lots of data. And this isn’t just about fixing bugs or stability of the product – it’s about adding value by really understanding the features customers are using and applying this learning to future growth strategies.
With the fast analysis of a huge amount of data having such importance within a DevOps culture, machine learning is a great additional benefit – the faster and more intelligent the feedback, the faster the value from new code or features can start adding to the bottom line.
So for us, the opportunity is how AI can aid the effectiveness of the overall product as well as introduce exciting new innovations.
At the moment its application is generally very data-led, focusing on the efficiency of the code, but it can go further by looking at the whole customer experience. Do people like the feature; are they using it? Given that 70 per cent of features do not deliver any actual value to the organisation, businesses can use AI to ‘intelligently’ focus their DevOps around features that will add more worth, as well as to help the business operate more efficiently.
Then it can add value within a continuous innovation culture. Again, the idea behind the DevOps culture is to improve efficiency, freeing up time to develop useful features and importantly for experimentation. It’s the experimentation within a business that can really help to deliver a competitive edge.
Adding greater intelligence to new digital innovations can not only speed up the process but might also help to deliver more powerful insight than lower-tech analysis. Again, within a DevOps environment the whole organisation can be part of this insight and act on it, adding the human element to the learnings – all likely to result in faster innovation and an all-round better finished product.
The other major benefit we’re seeing in relation to AI and DevOps relates to security.
Security is obviously an ongoing concern, and within DevOps it’s critical. Some vendors have started using the term DevSecOps to highlight how security is becoming an integral part of the DevOps process.
With AI there is an opportunity for machine learning to identify what normal looks like, then to spot abnormal behaviours. These ‘abnormal traits’ might relate not just to the behaviour of the software itself, but also to signs of hacking or other attacks: access from an unexpected geographic location, logins at unusual times of day, or increased attempts at ‘guessing’ passwords for a large number of accounts from the same location.
All of these things could be detected by non-AI tools, but those tools rely on fixed thresholds being breached, so they can be fooled, or even triggered unnecessarily or accidentally.
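A minimal sketch of that ‘learn normal, flag abnormal’ idea, using scikit-learn’s IsolationForest on three synthetic login features (hour of day, failed attempts, distance from the usual location), shows how a model replaces brittle fixed thresholds:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # 'Normal' logins: office hours, few failures, close to the usual location.
    normal = np.column_stack([
        rng.normal(11, 2, 500),     # login hour of day
        rng.poisson(0.2, 500),      # failed attempts before success
        rng.exponential(5, 500),    # km from usual location
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspicious = np.array([[3, 14, 8200]])   # 3am, 14 failures, another continent
    print(model.predict(suspicious))          # -1 means anomaly

Because the model learns the joint shape of normal behaviour, a login that is only slightly odd on each axis can still be flagged – something a set of independent thresholds would miss.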
The main barrier to greater implementation of AI within DevOps will be the cumbersome nature of the DevOps movement overall. There is undoubtedly a buzz around DevOps and a desire to adopt it – but we know that the sticking point is still the cultural and business-wide change in ways of working required to enable the shift to happen. Many start-ups are naturally adopting this way of working, as they can engender change much more easily than larger organisations that have long-standing methodologies, silos and processes.
Applying AI in DevOps of course requires a financial investment in the right tools and an element of training, but businesses will undoubtedly begin to see a fast return – not only on efficiencies but also on effectiveness. And vendors, such as Splunk, are keen to support its integration with training provision. There will be some financial advantages to getting in early – not only for the competitive edge applying AI to DevOps can give, but also while vendors are keen to get more users talking about their tools.
My only word of caution around using AI as part of the DevOps process is that businesses shouldn’t forget the importance of the human element or rely too much on the tech. Beware of artificial stupidity! AI still requires governance by ‘experts’ and sense checking – it is software at the end of the day, and with the human element taken out entirely, the ‘normal’ it learns could drift away from the normal you actually want.
Also, a machine can only be as good as the information we feed it – and poor data, or, worse, poorly understood data, can lead to poor outcomes.
DevOps and AI are not just buzzwords; they are movements that are going to gather significant pace during the coming year.
Fuelling this is the recognition that collaboration and continual development based on learning is now a necessity, not something reserved for ‘forward-thinking organisations’. Vital within this is the ‘space to innovate’ – driving efficiency through DevOps and AI can provide this time.
I hope we begin to see more case studies on AI’s role within the DevOps process – this again will help to fuel its uptake and bring more businesses to join the DevOps movement.
And as a final point, it’s worth considering how AI will affect the lives of developers.
Last year AI started making headlines by automatically writing a financial report. Experts couldn’t tell the difference between the AI-produced report and one written by a human financial adviser.
The same kind of machine learning technology is starting to be investigated as a way of writing code. Sites such as GitHub host millions of lines of open-source or publicly accessible code that can be analysed by machine learning tools. When asked to write code for a given purpose, these tools can then reuse a lot of this insight to develop an application automatically.
Combine this with natural language processing of the kind behind Alexa, Siri, or Google Assistant and you can easily imagine a future where you describe your requirements verbally and the first draft of the application is created automatically, built from high-quality, secure code snippets with low numbers of bugs – well, assuming you can describe what you want well enough.
Developers and testers might still be required to finesse the application, and perhaps tailor it to the customer’s precise needs, but there are lots of common patterns of application, such as databases, websites and dashboards, that could be cheaply and quickly produced.
The modern business landscape is seeing widespread adoption of the cloud thanks to the many benefits it offers. Objections and concerns around moving to the cloud, such as security, continue to wane.
By Francois Cadillon, Vice President UK & Ireland, Microstrategy.
Historically, these were barriers to adoption. Organisations are identifying and prioritising the IT and business initiatives that might benefit most from a cloud deployment, and business intelligence and analytics are among those key initiatives. In a joint Deloitte, EMA, and Informatica State of Cloud Analytics Report survey, 70.1 per cent of respondents said cloud is already an essential part of their analytics strategy, and it is predicted that cloud will be the preferred delivery mechanism for analytics in the next few years.
The challenges of deploying and managing an enterprise-scale cloud analytics infrastructure can, however, prove to be a stumbling block for some organisations. The key to overcoming this is carefully choosing the right cloud-management platform, allowing businesses to dramatically reduce setup and configuration times from days or weeks down to minutes. It can also reduce time to value and allow IT to focus on core tasks that provide more value.
By adopting a few general guidelines and adapting them to fit individual business needs, the inherent value of running analytics in the cloud can be maximised. Platform management tools should be used to their full advantage to reduce ongoing costs, once cloud-based analytics are in place. In practice, optimising cloud platforms for cost-efficient analytics involves three steps:
Step 1: Consider the IT department separately
While it’s important to equip business users with the ability to meet their own requirements, it’s also necessary to recognise that more technical users have separate needs that may be more complex. Putting users at the centre of any new process initiative will always be beneficial in ensuring the best results. Therefore, good practices should include providing mechanisms for IT and other technical users to perform advanced tasks as efficiently as possible.
For example, administrators should be empowered by visual platform tools to streamline processes such as creating, deploying, and managing cloud environments. In addition, those users should have the flexibility to tailor the environment further using scripts, code, APIs, and SDKs. This ensures that the experience can be highly personalised to the individual. In particular, those measures should be usable at the cloud-environment level as well as the platform level.
Step 2: Employ automation where possible
For many organisations that currently operate on-premises analytics infrastructures, complexity and cost can prove to be barriers to the cloud. This can mean that moving those systems to the cloud can be slow and laborious, resulting in a delayed migration. In an ideal world, a centralised cloud platform should be able to copy an existing environment to a single backup file and then seamlessly import and deploy a duplicate environment to the cloud – all as a single operation. The need to manually move objects one by one or to stand up additional instances for the purposes of migration can be relieved by that level of automation. Choosing a cloud-management platform that provides such functionality can substantially reduce staff time and company spend.
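In code terms, that ‘single operation’ amounts to something like the hedged sketch below; the client object and its methods are hypothetical stand-ins for whatever export and import endpoints your cloud-management platform actually exposes.

    def migrate_environment(platform, source_env, target_cloud_env):
        """Export an on-premises environment and redeploy it in the cloud."""
        # 1. Snapshot everything (metadata, users, reports) into one package.
        package = platform.export_environment(source_env)
        # 2. Deploy a duplicate environment in the cloud from that package.
        job = platform.import_environment(target_cloud_env, package)
        # 3. Block until the deployment finishes, surfacing any errors.
        job.wait_until_complete()
        return job.status

    # Usage (illustrative): migrate_environment(client, "onprem-bi", "cloud-bi")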
Step 3: Empower business users
Speed, and the ability of business users to put new processes or capabilities in place without involving IT, are especially important to organisations. Technology can help to plug the skills gap amongst users through ‘self-service’ applications. These mechanisms can eliminate roadblocks and boost productivity by allowing non-technical users to create and share dashboards and reports without having to write code or receive help from IT. Positive effects on business innovation and improved user experience can be achieved, while also encouraging greater collaboration among users.
To take full advantage of self-service IT in cloud-based analytics, organisations should first investigate the ability of their cloud-management platform to provide end-to-end business intelligence functionality. In addition, the ability of those platforms to automate distribution of personalised reports, documents, and dashboards throughout the enterprise should be placed at a premium. Making those resources available to everyone who needs them increases the value and flow of the information.
Organisations cannot afford to ignore the flexibility, agility, ease of use and incomparable cost savings that the cloud offers. Companies opting for this path are better prepared, not only for growth and international expansion, but also for the modern demands of employees, including the ability to work remotely and access systems from any location. Cloud-based analytics is the next logical step to maximise investment and discover new insights as the volume of data in the cloud grows. An analytics solution that delivers scalability and speed, agility and governance, and convenience and security is therefore crucial.
Delivering improved customer service and security through analytics.
By Mathias Golombek, CTO, Exasol.
In an increasingly competitive market, businesses are taking their lead from their actual customers rather than trying to model theoretical consumer behaviour. It’s now vital for companies to listen to their customers in order to improve their market share. Increasingly, the best way to do this is through data and data analytics.
This renewed reliance on data means it is now crucial to rethink, and sometimes recreate, existing strategies. As businesses seek to get smarter with this asset, Analytical Data Infrastructure (ADI) has become an increasingly significant topic of conversation. Organisations are moving to a place where business-oriented ‘data strategies’ are the major focus.
This trend is demonstrated in the 2018 Global Dresner Market Study for Analytical Data Infrastructure (ADI) which revealed key business priorities of data analytics and business intelligence.
The Dresner report also explored the ways in which end-users are planning to invest in ADI technology, along with the considerations behind implementation and use-cases. Security and performance emerged as the top two priorities for businesses. However, the report also showed that the biggest year-on-year change was in the growing importance of easy access to and use of analytical features and programming languages. Examples of these include R, Machine Learning technology and MapReduce analytics.
Unquestionably, businesses are increasingly aware of the value in their data. The right tools mean they can extract that value more effectively, improving the way they understand and sell to their customers, as well as streamlining business processes and reducing costs.
Often this data has to go through a process of cleansing and extraction before it can be transferred to other systems. Modern analytics platforms bring together Business Intelligence competence centres and Data Science teams, allowing them to combine data analytics, MapReduce algorithms and data science languages such as R or Python.
Cleansing the data and finding the right models can be repetitive, but high-performance in-memory computing can make a vast difference when applying trained R or Python models to billions of user records in near-real time.
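The scoring step looks something like the sketch below: once a model is trained on a cleansed sample, applying it to millions of rows is one vectorised call rather than a per-row loop, and an in-memory or in-database engine would push that call down next to the data instead of pulling the data out. The data and model here are synthetic.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Train on a small labelled sample (a stand-in for the cleansed data).
    train = pd.DataFrame({"visits": [1, 2, 30, 45], "spend": [5, 9, 400, 700]})
    model = LogisticRegression().fit(train, [0, 0, 1, 1])

    # Score a large batch in a single vectorised call.
    batch = pd.DataFrame({
        "visits": np.random.randint(0, 60, 1_000_000),
        "spend": np.random.uniform(0, 800, 1_000_000),
    })
    batch["churn_score"] = model.predict_proba(batch)[:, 1]
    print(batch.head())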
The evolution of data science technology will enable data storage, standard reporting and data processing in an open, extensible platform. Companies will then move to create insights, predictions and automated prescriptions out of all kinds of data.
Alongside the desire for improved analysis, the report showed an increased awareness of security and performance. This is understandable given the GDPR’s imminent implementation. From May, companies will need to ensure they are conforming to the new rules. Penalties will be much more severe: up to €20 million (£17 million) or four per cent of global annual revenue, whichever is higher.
The potentially severe financial implications of future data breaches will weigh on business leaders’ minds for the foreseeable future, especially given recent high-profile cases. The security failings of Carphone Warehouse present a stark warning. Its 2015 security failure recently earned it a penalty of £400,000, 80% of the maximum fine the UK Information Commissioner’s Office (ICO) can currently levy. Under the GDPR, and assuming the same 80% ratio of the maximum fine, this figure would rocket to £312 million. For a smaller business, a fine at a similar ratio could be devastating, potentially bankrupting the firm.
Aside from the financial implications, data losses will undoubtedly damage a company’s reputation. However long that damage lasts, its effect could be just as devastating as a fine.
Reversing bad habits and rolling out new regimes can be lengthy, costly and disruptive if the right tools and expertise are not in place. Yet big data and advanced analytics mean security can be approached proactively, pre-empting potential cyber threats and ensuring businesses are always prepared.
Better analytics mean you can better evaluate data for trends and interesting behaviour. Unusual patterns in data could be a sign of suspicious activity, which could then be flagged and dealt with before a serious issue arises. Capable ADI will help to identify these irregularities; it is therefore crucial to roll out an ADI with the capability to identify, analyse and flag up possible security threats as fast as possible.
Organisations are prioritising the performance of business intelligence and IT departments. Companies’ desire to ask increasingly complex questions of their data, in order to make more meaningful business decisions, means they can no longer rely on ADI that is just about sufficient.
ADI that is fit for purpose allows businesses to gain rapid insights into their data and implement changes to the performance of their business at greater speed. Price has decreased in priority as businesses recognise the value that data analytics can bring. Competing in a data-driven world means investing in the right infrastructure – with speed, performance, ease of use and scalability – is paramount.
The Dresner research suggests that a complex mix of regulatory and competitive pressures is pressing organisations to become more sophisticated and careful with data. Bringing together disparate sources of data can help unlock the value held by data analytics. These insights can then be used to drive business benefits and efficiencies, as well as ensuring any data is as secure as possible.
Increased awareness of the GDPR is prompting businesses to seek ways to protect their data. It is certainly a welcome shift for businesses, the data analytics ecosystem, and all those affected by possible security and data breaches.
The Internet of Things offers a wealth of opportunity for the telecoms industry. It presents mobile operators with a chance to develop and enhance their consumer offerings and increase market growth.
By Robin Kent, Director of European Operations, Adax.
Research suggests the industry will grow from $900 billion in 2014 to $4.3 trillion by 2024[i]. We’ve already seen the likes of Vodafone delve into the consumer side of IoT with the launch of its new “V by Vodafone” bundle, whereby consumers are charged for the number of connected devices they add to their monthly plan. However, alongside this raft of growth and opportunity comes the heightened risk of security breaches.
Operators need to be smart with their investment when it comes to IoT. It’s all well and good chasing new sales leads and initiatives, and reaping the rewards, but security needs to be high on, if not at the top of, their agenda. More than 30 billion connected devices will be in use by 2025, of which cellular IoT (including 2G, 3G and 4G technologies) is forecast to account for about seven billion units[ii]. With the increased number of devices accessing the core network, operators need to ensure they plan for the worst and have prevention measures in place for possible hijackers. The repercussions of such a breach can be serious for both the operator and the end user, as any hijacked device is a potential entry point to the network for an attack.
Security attacks come in all shapes and sizes. One of the more common breaches is the ‘man-in-the-middle’ attack, whereby a hacker interrupts and breaches communications between two separate systems. This attack can have severe consequences, as the hacker secretly intercepts and relays messages between two parties who believe they are communicating directly with each other, and can trick the recipient into thinking they are still getting a legitimate message. In an IoT context these attacks leave networks and end users extremely vulnerable because of the nature of the devices being hacked: anything from industrial tools, machinery or transportation to innocuous connected ‘things’ such as smart TVs or connected fridges.
Another common threat posed to IoT networks is the denial of service (DoS) attack. There can be a host of reasons for a network being unavailable, but it usually refers to infrastructure that cannot cope due to capacity overload. In a Distributed Denial of Service (DDoS) attack, a large number of systems maliciously attack one target. In comparison to attacks like phishing or brute-force password guessing, DDoS doesn’t usually aim to steal information, but the loss of reputation for the affected company can still cost a lot of time and money. Customers often decide to switch to a competitor, as they fear security issues or simply can’t afford an unavailable service.
To tackle these issues, it’s paramount that applications access IoT devices through a controlled and secure environment that first authenticates and authorizes the user or application before allowing access to the core. The first step for operators is to ensure any connection from the IoT device to the core network over the S1 and Gb interfaces is fully authenticated. In order to do this, they must invest in and revisit the capabilities of their GTP and SCTP protocols, which will handle the hundreds of connections into the core network. Authentication can be delivered via RFC 4895 for the SCTP protocol without compromising performance or network monitoring visibility the way IPsec/VPNs do. This is vital as networks come under attack with greater frequency and with demonstrably disastrous outcomes.
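Conceptually, the authentication RFC 4895 adds to SCTP is an HMAC, keyed with a shared key established for the association and computed over the chunks that follow an AUTH chunk. Real implementations live in the SCTP stack or kernel, but the HMAC step itself can be illustrated with Python’s standard library; the key and payload below are placeholders.

    import hmac
    import hashlib

    shared_key = b"association-shared-key"   # negotiated per RFC 4895

    def auth_digest(chunk_bytes):
        """HMAC-SHA-256 over the chunks covered by the AUTH chunk."""
        return hmac.new(shared_key, chunk_bytes, hashlib.sha256).digest()

    def verify(chunk_bytes, received_digest):
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(auth_digest(chunk_bytes), received_digest)

    payload = b"...chunk bytes..."           # placeholder for real SCTP chunks
    digest = auth_digest(payload)
    print(verify(payload, digest))            # True; tampered chunks fail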
Alongside a highly reliable SCTP protocol, operators should implement a DTLS module. Such a solution gives operators peace of mind that eavesdropping and network tampering are dealt with, while helping to detect and fix connection failures in real time, providing redundancy and fault tolerance for signaling applications, and improving the detection of destination and peer path failures. In addition, it can resolve the bottlenecks caused by Diameter signaling, by allowing the Linux host to provide thousands of associations and connections.
It’s clear that the IoT provides a wealth of business and marketing opportunities for operators. But to ensure it’s not a short-lived fad, security must be taken seriously. Attacks on networks can have detrimental impacts on both operators, whose reputation can be diminished in seconds if vulnerabilities are publicised, and end users, whose devices, and therefore livelihoods, are at risk. Now is the time for the industry to lay down the foundations and put in place the tools and protocols needed to secure the future.
Biometrics have been part of the mainstream on many mobile devices ever since Apple introduced fingerprint biometric authentication to its iPhones in 2013. With facial recognition technology now available on many of the current generation of devices, this trend looks set to continue. Yet according to WatchGuard Technologies many consumers don’t even use a passcode to lock their phones[1].
By Ojas Rege, Chief Strategy Officer, MobileIron.
Even those who do use passcodes or passwords are far from secure, however. Recent research from Newcastle University found that PINs and passwords can be deduced just by watching how a phone moves when it is being held[1]. Far more likely, however, is that a user’s password will be stolen or guessed.
Authentication hinges on whether a third party can gain access to your ‘credential’: the less likely that is to happen, the more secure the system. The facial recognition market is expected to grow from USD 3.4 billion in 2016 to USD 8.93 billion by 2023[2], driven by expanding criminal activity globally, suggesting that this method is currently considered one of the strongest by those in the know.
Beyond Security
Security isn’t the only concern, however. There are essentially three requirements for an authentication method to be successful: it has to work all the time; it has to be exceptionally hard to spoof; and it has to be easy to use. Facial recognition has the potential to do all three but is still early in the adoption cycle.
The technology is currently accurate enough to be used by consumers but it does come with some usability challenges. Because a camera is involved, phone angle and lighting matter, so it is more complex for an end user than just placing a finger on a touch pad, as with more established biometric authentication methods.
For an enterprise considering switching to facial login from another system (a PIN or pattern on the phone, or a password, for example), all these factors are bound to be a consideration. The Mobile First era is all about the end user, and if employees can’t easily access their devices then productivity and morale are bound to suffer.
Privacy matters
While ease of use is undoubtedly important, privacy and security protocols can’t be underestimated. The essential security and privacy question is: “How does the phone store and secure that data, and can anyone else get access to it?” Phone and operating system manufacturers have done a good job of answering this question, but unfortunately it is a question that users rarely ask and businesses often overlook.
With the recent attention on Cambridge Analytica and Facebook, I expect attention to privacy among both users and governments to continue to increase. Don’t ignore this issue – be an informed consumer and make sure you feel comfortable about how the biometric data is stored and used.
Embracing the biometrics landscape
Biometrics are set to become the main method for people to log in to apps in the future and facial recognition is a good addition to other biometric technologies like fingerprint and optical recognition. Over time, I expect a broad set of biometric sensors to work together. Think of how you recognize friends. You hear their voices, see their faces, and notice their unique gestures and body language. As sensors proliferate and pattern matching algorithms get more sophisticated, biometrics is one of the areas where artificial intelligence will have practical applications in daily life. The challenge, of course, is that it has to work 100% of the time and be effortless for the user.
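A toy illustration of that multi-sensor idea: fuse match scores from several recognisers into one decision. The sensors, weights and threshold below are invented; a production system would calibrate them against measured false-accept and false-reject rates.

    WEIGHTS = {"face": 0.5, "voice": 0.3, "gait": 0.2}
    THRESHOLD = 0.75

    def fused_score(scores):
        """Weighted average of per-sensor match scores in [0, 1]."""
        return sum(WEIGHTS[s] * scores.get(s, 0.0) for s in WEIGHTS)

    scores = {"face": 0.92, "voice": 0.70, "gait": 0.55}
    print(fused_score(scores) >= THRESHOLD)   # True: 0.78 clears the bar
    # No single sensor is conclusive here, but together they authenticate.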
I would recommend that every consumer, and especially businesses, start using biometric methods like facial or fingerprint recognition because they are much harder to compromise than a password. Use the biometric method that you find the easiest, use it wisely, and view it as the first step towards passwordless authentication.
The shift towards hybrid data center environments, consisting of a mix between off-premises services, public cloud and colocation, and privately owned, distributed IT facilities, is challenging traditional approaches to physical infrastructure management. A recent study by 451 Research brought some interesting insights to light.
The study was targeted at hybrid IT environments within large enterprises from across the globe, and conducted through intensive interviews with C-suite, data center and IT executives. Sponsored by Schneider Electric, the participants were drawn from companies generating over $500 million in revenue across the US, UK and Asia Pacific. The complete report provides an in-depth analysis of the topic, together with additional observations about trends that are emerging across multiple verticals and industries.
The interviews highlight how the widespread adoption of cloud services has significantly impacted the way companies are meeting their data center infrastructure requirements. These complexities will be compounded by an anticipated groundswell of new distributed IT driven by the Internet of Things (IoT) and emerging edge computing workloads.
Edge computing
Edge computing deployments present unique challenges that differ from those of traditional data centers. They are often remote and lack local IT staff support. They require a different strategy: their lifecycle is longer, and they must be easy to manage, secure and deploy while also being resilient.
Managing this combination of data center environments so as to realize the full value of a hybrid approach has become one of the most complex issues for modern enterprise leaders. The study also revealed common themes:
According to the new 451 Research study, operators of enterprise data centers face a rapidly evolving technology landscape and a cloud-powered wave of disruption that is changing business models, connectivity and workload management. Driving this change is the growing availability and adoption of off-premises services, such as public cloud, colocation data center offerings and Datacenter-Management-as-a-Service (DMaaS), like Schneider Electric’s EcoStruxure IT architecture.
DMaaS enables optimization of the IT layer by simplifying, monitoring, and servicing data center physical infrastructure from the edge to the enterprise. Utilizing cloud-based software, it promises real-time operational visibility, alarming and shortened resolution times without all of the costs associated with deploying an on-premise DCIM system.
DMaaS is positioned in the broader context of IoT technologies and platforms, and its messaging is aligned to resonate with a broader audience beyond the traditional data center manager. The ability to benchmark performance and set key performance indicators (KPIs) based on data center metrics will interest organizations in the midst of making decisions about data center and workload placement in an increasingly hybrid and distributed IT landscape.
Piloted in the U.S., DMaaS has already been used for benchmarking IT environments with more than 500 customers, 1,000 data centers, 60,000 devices and 2 million sensors. Customer feedback on the results of their implementations affirms the growing need for a cloud-based data center management solution.
451 Research concludes: “By 2019, organizations anticipate that just under half (46%) of enterprise workloads will run in on-premises IT environments, with the remainder off-premises, according to 450 enterprise respondents in our 2017 global study. Clearly, hybrid IT environments have become the norm.”
Vendor agnostic platform
Early adopters of DMaaS are already seeing results in their businesses and with their customers. Daniel Harman, Building Automation Systems Engineer at Peak10 + ViaWest, Inc explained why the cloud-based approach was the best data center management solution for his business. “ViaWest is trusted to deliver hybrid IT infrastructure solutions spanning colocation, interconnection, cloud, managed solutions and professional services to more than 4,200 customers. We chose a vendor agnostic DMaaS solution to provide one platform to monitor all the different devices in our data centers...”
Case Study – Bainbridge Island School District
Bainbridge Island School District chose DMaaS to help ensure the continued availability of its innovative digital learning environment. With limited resources to manage its distributed IT and data center, the district relies on the service for one-tap visibility of all device data, smart alarms and data-driven insights, plus 24/7 digital monitoring and troubleshooting.
Network supervisor Alan Silcott says that DMaaS solutions “give me that peace of mind to know that if there is an incident, kids can continue to learn and the classrooms can continue to operate until the school day is through.”
He goes on to say that DMaaS allows him “to check the status of all my data closets from my phone, at any time, in any location. It helps to know exactly where the problem is as opposed to trying to decipher it from a flood of emails.”
“We have 11 buildings and 9 different schools, with a data center and 35 different data closets. Technology is in every aspect of the schools. If our network were to go down for a day, it would cause serious disruption to our learning.
“If we get even just a power flicker, all of our UPSs send notification emails. It’s hard to sort through all those messages and make sure that everything comes back online safely.” Alan Silcott says that now when there’s an issue, the solution provider “has our backs and contacts us immediately.”
Six Real-World Approaches to Managing Hybrid IT Environments – a report by 451 Research – can be downloaded from:
https://www.schneider-electric.com/promo/us/en/getPromo/75574P
Heavy industries, such as engineering and manufacturing plants, were for a long time primarily concerned with operational technology (OT), and the focus was on availability and safety. In recent years, however, the importance of IT infrastructure and protecting vital networks has grown, and industrial organisations have had to rethink their priorities.
By Robert Wakim, Industrial Offer Manager, Stormshield.
This is not always easy to do. There is a significant gap between what engineers regard as important and the concerns of security and IT teams. Engineers put availability and safety before anything else, understanding that any interruption to production could result in serious setbacks or delays, and that safety is paramount because engines, motors and processors carry a physical risk to operators. IT, conversely, is less worried about systems breaking down, and more concerned that a network security breach could eliminate or compromise essential data and allow hackers to gain access to control systems.
IT and OT may be looking from two different viewpoints, but as the systems they underpin converge digitally, the threat from a cyber-attack grows ever more intense. Many industrial operations are developing strategies for managing the threat, but the attack surface keeps growing and, worryingly, breaches are often created via the simplest routes.
Cases in point –
A post on Reddit last July, written by an engineer at a petrochemical company, described receiving a call from one of the company’s factories to report that the local control room was down. He concluded that it was a ransomware attack, the same attack that hit the rest of the world in 2017, but was puzzled because no computers in the factory’s local control room were connected to the Internet. He tried to remaster all the computers – wiping them completely and reinstalling everything – but within minutes the computers were infected again.
Further investigation revealed that the connection that was allowing the system to be re-infected came from a ‘smart’ coffee machine. This had been installed a couple of weeks previously, and its connection to the Internet allowed orders to be placed automatically. In error, it had been connected to the local control room network, and not just to an isolated Wi-Fi network. Once this entry gate was discovered and closed, the malware could no longer operate, and the factory was able to get back to normal.
This demonstrates just how easy it is for simple, non-targeted ransomware to find an exposed attack surface and reach a critical environment. No damage was done in this case, but you don’t have to search far to find other, more alarming examples.
According to news reports in 2016, cyber criminals were able to hack into the control system of a water utility company, where they altered the levels of chemicals being used to treat tap water. They were able to do this because the company’s operational control system was ageing and log-in details were stored on the front-end web server. It is unlikely that the attackers had any particular knowledge of how the flow control system worked, but this didn’t stop them from modifying application settings.
Protecting control systems
Reducing attack surfaces is essential, and this means monitoring not just the network but all connected devices, such as mobiles and tablets, too. This is a huge task, but without this level of security the infrastructure of the organisation is at risk, as are its data, staff members, partners and customers. In the case of critical services, such as power and utilities providers, the environment and the wider community are also under threat. Cyber-attacks, which happen frequently, serve to illustrate the weakness of unprotected systems, but in industrial organisations there are operational constraints that can limit opportunities for updating the infrastructure. Production must be uninterrupted, which can mean that IT changes are not prioritised, so central devices that cover both OT and IT need to be used. This allows production systems to benefit from a combination of protection measures without availability being compromised.
The industrial sector relies heavily on Microsoft Windows environments, but this has sometimes left workstations vulnerable. An efficient infrastructure must be able to withstand highly complex cyber-attacks as well as guard against mistakes made as a result of human error. This can be tackled with advanced components such as behavioural analysis, or by controlling peripheral devices: for example, permitting only pre-enrolled or pre-scanned USB keys, which would otherwise pose a real danger and expose the industrial system to intrusion.
The other issue to consider is that as industrial systems and digital technology move closer, organisations will increasingly look to take advantage of the opportunities to move into the cloud or to enable their staff to work remotely, and this runs the risk of exposing any inherent weaknesses. Hackers only need the smallest gap, the slightest chink in the armour, to break into a system, and they are constantly alert to this type of vulnerability. Whilst manufacturers and engineers concentrate on designing highly sophisticated systems, they also need to ensure they put up solid barriers with limited, authenticated access when they integrate systems with networks.
The nature of OT systems means that they are robust, but this doesn’t mean they are automatically protected from attack, particularly in today’s interconnected environment. As repeated incidents demonstrate, infiltration of power plants, factories and utilities is not just possible but likely, so equal priority must be given to ensuring availability and to securing systems against cyberattacks. High availability means that “fail-open” systems, which remain ‘open’ to allow operations to continue even when failure conditions are present, must become mandatory.
The battle between IT and OT is now being drawn on different lines because industrial companies recognise that whilst their functional priorities may be different, they both face the same serious risks. The OT and IT environments are inching closer, and this will ultimately protect the organisation, but as they do this, the importance of protecting both to ensure availability whilst combating the risk of cyberattacks becomes ever more essential.
Corporations are regularly under siege from multiple threat actors in cyberspace. The underground cyber marketplace that flourishes around the world has allowed criminals and nations to wage long-term campaigns against corporations and government agencies. These attackers target businesses and consumers from the fog of the Dark Web.
By Rick McElroy, Security Strategist, Carbon Black and Tom Kellermann, Chief Cybersecurity Officer, Carbon Black.
Evidence suggests the Dark Web has become an economy of scale wherein the cyber-crime syndicates have begun to target the inter-dependencies of our networks, and the adoption of cloud technology has only made hindering these attacks more difficult. The cloud has given malicious actors blind spots to hide in and more avenues of attack. As our data moved to the cloud, our security programmes did not keep up.
When one starts to think of the risks facing organisations leveraging the cloud, one must begin to think about those brave fighters whose mission it was to fly into the clouds over enemy territory and deliver strategic bombing campaigns to weaken the enemy during World War II. Organisations that leverage the cloud are delivering services to customers and partners in a low visibility environment. Furthermore, as the cyber-criminal community burrows in to networks, we must appreciate that after the initial theft of data, they tend to hibernate. This hibernation allows for secondary monetisation schemes. Some of these criminal endeavours include reverse business email compromise against your customers and/or selective watering-hole attacks. Cyber criminals realise there is implicit trust in your brand – trust that can and will be exploited. The modus operandi of cybercriminals has been modernised and we should allow their offense to inform our defence, from who is accessing systems to what threats are hitting cloud endpoints. Even with anti-virus and other basic protections in place, organisations continue to be outpaced by attackers.
To analogise the challenge: B-17s were equipped with armour and machine guns, just as your servers may have AV and logging turned on, but much as in early World War II, we continue to be outpaced in innovation and weaponry, and we continue to lose the battles. The B-17 was nicknamed the “flying fortress.” It proved not to be one.
One of the most complex cybercrime conspiracies of 2017 was carried out by a group named StonePanda (also known as APT10). Over the past year, these hackers have run a sophisticated campaign of attacks against Western corporations known as the “Cloud Hopper Campaign.” The BBC reported that firms in the UK, Europe and Japan were targeted by the group, and that by infiltrating supply chains the attackers gained an easy route into many different targets. What began with a spear-phishing attack leveraging fileless malware escalated to hijacking the victim’s website and using their brand to target consumers. It then metastasised into the interconnected networks of their supply chain via cloud hopping. One important feature of this campaign was watering holes.
The watering holes executed remote JavaScript-based reconnaissance to target MSSPs. Once in, the attackers deployed HAYMAKER, a backdoor that can download and execute additional payloads in the form of modules, plus a secondary infection via an open-source remote-access Trojan (RAT). These criminals were not conducting a burglary; they were executing an invasion.
“During World War II, various methods were employed to protect high level bombers from flak, fighter aircraft and radar detection, including defensive armament, escort fighters, chaff and electronic jamming.”
To help ensure the success of bombing raids, the Army Air Forces failed fast and iterated through changes. One of the key takeaways was that their bombers would absolutely need fighter escorts in order to mitigate the risk of unseen attackers lurking in the clouds.
“Early models proved to be unsuitable for combat use over Europe and it was the B-17E that was first successfully used by the USAAF. The defence expected from bombers operating in close formation alone did not prove effective and the bombers needed fighter escorts to operate successfully.” The lesson here for cyber defenders is that trying to build a single “fortress” that is impervious to innovation on the attacker side is a recipe for repeated failure. Instead, organisations should deploy the following:
Next generation antivirus protection such as Cb Defense gives you prevention against attacks by interrupting attackers’ behaviour to ensure the systems supporting the strategic delivery of services for your organisation remain in service. It provides a proactive defensive posture, levelling the battlefield and tipping the advantage back to defenders.
Endpoint detection and response capabilities give visibility into the tactics attackers are using, so that your team can respond and remediate faster. This raises the bar on each attack and forces the attacker to change what they are doing. It also allows your team to pinpoint root causes and remediate vulnerabilities more quickly. Furthermore, it gives them the ability to proactively find threats sooner, ensuring their strategic objectives are met.
It should also be noted that modern cyber operations pit humans against humans. The adversaries want to interact with your systems once they get in. They want as much intel as possible to leverage against you and your partners. Whenever the offense pivots, so must defenders. The team that can better Observe, Orient, Decide and Act when under attack will be miles ahead of those that lack basic visibility into what attackers are doing. This is especially true in a cloud environment. By employing both protection and visibility capabilities, and partnering with a company that securely enables the cloud, organisations can move upstream of the problem and be well positioned to drive change.
So, in conclusion, how do you defend yourself from the cyber attackers looking to invade your cloud? You do it by securing the battlefield; by gaining better visibility into what the attackers are doing; and by enabling your teams to find them and remove them.