I recently attended a local sporting venue and, in attempting to buy some refreshments, was told that the card machine wasn’t working, owing to the poor wireless connectivity, so I was invited to go and join the very long queue at the only cash machine, pay for the privilege of accessing some money and then return to the bar to pay for the drinks. Not a great experience and one that would almost certainly put me off from returning.
One of my sons works at this venue and he tells me that they have looked at upgrading the wireless network, but the cost is deemed so prohibitive that they carry on as is – with an unreliable network and some dissatisfied customers.
This, in miniature, is the choice facing almost every business across the globe right now. For sure, all would love to have the very latest IT and offer the best possible customer service, but there’s a significant, often unaffordable, price tag to make this happen. So, many companies either make do, or carry out partial upgrades.
Most importantly, every business needs to carry out what is, in essence, a risk assessment before making any IT investment. What are the consequences to the business of doing nothing, and what are the consequences of doing everything? At one extreme – doing nothing – there might be some loss of customers, but if the business has something of a monopoly (as is the case with this local sporting venue), does this really matter? At the other extreme, spending large sums of money to offer the very best possible customer environment could simply bankrupt a business.
There are no easy answers to the IT investment dilemma, but by carrying out a thorough risk analysis, it’s possible to determine what level of spending makes the most sense for the business.
Back to my sporting venue, and about an hour or so after I’d purchased the bottles of wine, the waitress who had served me came over and apologised that she’d overcharged me and gave me back the ‘extra’ money. Human nature being what it is, I was not annoyed that I’d been overcharged, rather surprised and delighted that someone had been honest enough to rectify the mistake, rather than pocket the money. So, maybe I will be returning to this venue after all!
Less than one third (31%) of data specialists, including data analysts, data scientists and data quality managers, are fully confident in their ability to deliver trusted data at speed throughout their organization, reveals a new global survey from Talend.
In the Talend commissioned survey conducted by Opinion Matters, 763 data professionals (executives and operational data workers) around the globe were queried to understand confidence levels in their organizations' ability to deal with two significant simultaneous challenges: 1) Capturing, processing, and democratizing data at speed; and 2) ensuring the reliability and integrity of the information in the data streams shared by the organization.
Trust Perception Gap between Management and Operations
According to the survey, there is a significant gap in perceptions between senior IT management and mid-ranking data professionals (operational data workers), with the former substantially more confident in their organizations' abilities.
"The different levels of confidence displayed by people at a management level and operational data workers are not surprising, but it is definitely worrying," said Ciaran Dynes, Senior VP of Products at Talend. "What we see today is that organizations are struggling to deliver trusted data when they need to deliver it and they are also struggling to gain credibility internally, in the market and with customers. Organizations need to build a bridge between IT and data workers - responsible for delivering at speed - and the people in charge of building and safeguarding trust, something which is often led at an executive level. Although generating trust may come from the top, the ability to deliver trusted data at speed requires the commitment of every data specialist within an organization as well as cultural alignment. This usually relies on the work of data champions, who have the skills to lead cultural change in data handling and processes as well."
Major findings from the survey, highlighting three significant gaps - trust, speed and execution - include:
Excellence of speed and integrity control: The Leaders and the others
Digital transformation is often about speed: accelerating time to market, driving business insights or actions in real time, or delivering personalized customer experiences. When organizations succeed in combining speed with integrity, they can deliver intelligent and trusted data in everything they do. However, despite the importance of ensuring speed and trust in data, a mere 11% of respondents consider that their businesses have reached excellence in both speed and integrity.
A significant difference between management and operational workers
Overall, people close to the data (operational data workers) are less confident in their organizations' ability to deliver trusted data, with only 31% showing high levels of confidence. By contrast, 46% of respondents at a management level are confident in the ability of their organizations to deliver trusted data at speed.
For regulatory compliance, one of the key criteria to evaluate trust, 52% of respondents at a management level claim to be very optimistic when it comes to having achieved compliance with data regulations, while the rate falls to 39% among the operational data workers - who may be in charge of making the practical changes to deliver compliance.
Data quality confidence remains low
The survey shows that only 38% of respondents believe their organizations excel in controlling data quality. Less than one in three (29%) operational data workers are confident their companies’ data is always accurate and up-to-date.
360-degree real-time data integration is still a challenge
Having access to real-time or at least timely data accelerates changes and helps organizations to make faster, more reliable strategic business decisions, which lead to better outcomes.
According to the survey, only 34% of operational data workers believe in their organizations' capability to succeed in a 360-degree real-time data integration process whereas respondents at a management level again feel more confident (46%) in this regard.
"We’ve entered the era of the information economy, where data has become the most critical asset for every single organization," continues Dynes. "Data-driven strategies are imperative for success in any industry. To support business objectives such as revenue growth, profitability, and customer satisfaction, organizations require trusted data which can be delivered when it is needed and relied upon to drive critical business insights. Trust in data has to be paramount because without trusted data there can be no confidence in business decisions, and at that point stakeholder and customer trust will quickly evaporate too."
Almost two in three organizations plan to deploy Artificial Intelligence to bolster their defenses as soon as 2020.
Businesses are increasing the pace of investment in AI systems to defend against the next generation of cyberattacks, a new study from the Capgemini Research Institute has found. More than two thirds (69%) of organizations acknowledge that they will not be able to respond to critical threats without AI. With the number of end-user devices, networks, and user interfaces growing as a result of advances in the cloud, IoT, 5G and conversational interface technologies, organizations face an urgent need to continually ramp up and improve their cybersecurity.
The “Reinventing Cybersecurity with Artificial Intelligence: the new frontier in digital security” study surveyed 850 senior IT executives from IT information security, cybersecurity and IT operations across 10 countries and seven business sectors, and conducted in-depth interviews with industry experts, cybersecurity startups and academics.
Key findings include:
AI-enabled cybersecurity is now an imperative: Over half (56%) of executives say their cybersecurity analysts are overwhelmed by the vast array of data points they need to monitor to detect and prevent intrusion. In addition, the type of cyberattacks that require immediate intervention, or that cannot be remediated quickly enough by cyber analysts, have notably increased, including:
· cyberattacks affecting time-sensitive applications (42% reported an increase, by an average of 16%).
· automated, machine-speed attacks that mutate at a pace that cannot be neutralized through traditional response systems (43% reported an increase, by an average of 15%).
Facing these new threats, a clear majority of companies (69%) believe they will not be able to respond to cyberattacks without the use of AI, while 61% say they need AI to identify critical threats. One in five executives experienced a cybersecurity breach in 2018, 20% of which cost their organization over $50m.
Executives are accelerating AI investment in cybersecurity: A clear majority of executives accept that AI is fundamental to the future of cybersecurity:
· 64% said it lowers the cost of detecting breaches and responding to them – by an average of 12%.
· 74% said it enables a faster response time: reducing time taken to detect threats, remedy breaches and implement patches by 12%.
· 69% also said AI improves the accuracy of detecting breaches, and 60% said it increases the efficiency of cybersecurity analysts, reducing the time they spend analyzing false positives and improving productivity.
Accordingly, almost half (48%) said that budgets for AI in cybersecurity will increase in FY2020 by nearly a third (29%). In terms of deployment, 73% are testing use cases for AI in cybersecurity. Only one in five organizations used AI pre-2019 but adoption is poised to skyrocket: almost two out of three (63%) organizations plan to deploy AI by 2020 to bolster their defenses.
“AI offers huge opportunities for cybersecurity,” says Oliver Scherer, CISO of Europe’s leading consumer electronics retailer, MediaMarktSaturn Retail Group. “This is because you move from detection, manual reaction and remediation towards an automated remediation, which organizations would like to achieve in the next three or five years.”
However, there are significant barriers to implementing AI at scale: The number-one challenge for implementing AI for cybersecurity is a lack of understanding of how to scale use cases from proof of concept to full-scale deployment. 69% of those surveyed admitted that they struggled in this area.
Geert van der Linden, Cybersecurity Business Lead at Capgemini Group, says: “Organizations are facing an unparalleled volume and complexity of cyber threats and have woken up to the importance of AI as the first line of defense. With cybersecurity analysts overwhelmed – close to a quarter of them declare they are not able to successfully investigate all identified incidents – it is critical for organizations to increase investment and focus on the business benefits that AI can bring in terms of bolstering their cybersecurity.”
Additionally, half of surveyed organizations cited integration challenges with their current infrastructure, data systems, and application landscapes. Although the majority of executives say they know what they want to achieve from AI in cybersecurity, only half (54%) have identified the data sets required to operationalize AI algorithms.
Anne-Laure Thieullent, AI and Analytics Group Offer Leader at Capgemini, concludes: “Organizations must first look to address the underlying implementation challenges that are preventing AI from reaching its full potential for cybersecurity. This means creating a roadmap to address key barriers and focusing on use cases that can be scaled most easily and deliver the best return. Only by taking these steps can organizations equip themselves for the rapidly evolving threat of cyberattacks. By doing so, they will save themselves money, and reduce the likelihood of a devastating data breach.”
Fujitsu has released its Fujitsu Future Insights Global Digital Transformation Survey Report 2019, which highlights the results of its survey conducted among 900 CxOs and decision-makers at large and mid-sized companies spread across different industries in 9 countries.
With this survey Fujitsu aims to understand the state of their digital transformation journeys with regard to AI and other advanced technologies, and to clarify how business leaders around the world perceive the concept of "trust", an increasingly urgent theme in recent years. The third iteration of this survey also revealed the six success factors in digital transformation initiatives and the importance of organizational abilities such as leadership. The survey additionally takes an in-depth look at trust toward online data and decisions made by AI or a person.
Fujitsu will ultimately draw on these findings to accelerate its work with customers in advancing their digital transformation initiatives to achieve greater trust in business and society.
We live today in a world that is more connected, more globally integrated, and faster paced than ever before. While the benefits delivered by digital technology seem obvious and ubiquitous, issues surrounding the trustworthiness of personal data control and decisions made by AI remain a growing concern.
In light of these persistent challenges, Fujitsu embarked on the third iteration of its Global Digital Transformation Survey, first carried out in 2017, to reveal the status of digital transformation initiatives and to clarify how global business leaders perceive "trust" – both important themes for driving business success in recent years.
Summary of Survey Findings
1. Status of Digital Transformation
87% of companies surveyed have already begun their digital transformation journey. Companies in financial services and transportation were found to be the most advanced in their initiatives, with about half of companies in these industries delivering positive outcomes.
Fujitsu's previous survey revealed that six organizational capabilities are required to deliver positive outcomes in digital transformation projects: Leadership, Ecosystem, Empowered people, A Culture of Agility, Value from Data, and Business Integration. Analysis of this year's survey also reveals that successful companies possess these organizational capabilities, which we refer to as "digital muscles."
2. Trust in Online Data
72% of respondents were worried that organizations may exploit personal data without their permission. In some cases, however, respondents found it acceptable to provide personal data. These include cases in which the company receiving the personal data can be trusted and where the personal data provided can be used to enhance products and services.
3. Decisions Made by AI or by a Person
Respondents remain uncertain whether they place more trust in decisions made by AI or by a person. Our survey shows that respondents tend to trust decisions made by AI more in situations where the human impact is less significant. Moreover, 63% of respondents said that they would trust decisions made by AI if the AI gives substantial reasons for reaching its decisions, and 66% indicated that they would trust a company that published a code of ethics governing the use of AI.
4. Empowering People to Drive Successful Digital Transformation
Companies in which management places an emphasis on long-term perspectives, practices empathic leadership by sharing its messages and passion with employees, and empowers its staff tend to achieve greater success in their journeys toward digital transformation.
A greater understanding and acceptance of the value of Augmented Reality (AR) and Virtual Reality (VR) solutions across enterprise verticals are driving continued investment and growth. In addition, advancements in hardware and software systems and increasing capabilities are demonstrating clear, measurable value across numerous use cases. According to ABI Research, a global tech market advisory firm, this growth trend will continue as more ROI data points become available and enterprise AR/VR customers continue to scale their efforts. Case studies highlight immense ROI for AR/VR integrations, with training time reductions of up to 50% and general efficiency increases showing similar results.
“Successful case studies have proven that businesses can see immediate and notable ROI with use cases such as remote expertise and AR training,” says Eleftheria Kouri, Research Analyst at ABI Research. For example, travel costs, or training courses that require instructors and external facilities, dramatically increase expenses. AR and VR solutions have proven to minimize travel while allowing multiple trials and repeats during or after formal training at zero cost, without consuming extra resources or materials or requiring supervision. Emerging technologies also have a direct impact on other crucial metrics, such as significantly reducing employee accidents while increasing employee engagement and satisfaction.
Companies such as Portico and PixoVR have proven that leveraging VR solutions reduces training time significantly. Porsche decreased service resolution time by 40% when utilizing AR guidance and remote assistance for maintenance, and Boeing managed to reduce production time by 35% with AR compared to traditional documentation and training methods. There are numerous successful examples in the market that prove the advantages that emerging solutions bring in terms of employee efficiency, improved production and quality, customer service, and machine lifecycle. Overall Equipment Effectiveness (OEE) can benefit dramatically from AR/VR, especially through reduced service resolution times, error/defect rates, and total unplanned downtime.
“A deeper examination of AR/VR applications, integration with other disruptive technologies such as IoT or AI, and increasing successful implementations in the market continuously prove the value of AR/VR solutions in the enterprise,” concludes Kouri. “Advancements and more competitive prices in hardware and software will naturally happen, and more long-term implementations are necessary for minimizing uncertainty, for identifying and addressing potential challenges such as security or change management, and for determining best practices that drive successful and stable digital transformation.”
Apptio has released a report highlighting how the changing economics of IT and the push for digital transformation is disrupting C-suite dynamics and positioning the CIO as the leader most effective at driving change.
The survey, Disruption in the C-suite: How the digital transformation imperative is changing CxO dynamics and technology strategy, conducted in partnership with Financial Times (FT) Focus, the independent thought leadership arm of the Financial Times, found that, with digital transformation taking place in almost every company in every industry, agility, advocacy and data are the new IT currency.
“Digital is completely transforming the IT operating model, and that means CIOs and CFOs need to work collaboratively,” said Sunny Gupta, Apptio CEO. “These business leaders need to accelerate new delivery models such as hybrid and cloud, optimise growth investments to fund innovation, and boost financial agility so that IT finances can be managed and adjusted in real-time based on the highest needs of the business.”
The survey finds that the pursuit of digital transformation has led to a greater spirit of collaboration in the C-suite, and greater trust in IT across the business. Yet this report also reveals a blurring of responsibilities, tensions between IT and finance, and a critical role for the CIO in reshaping the organisation for sustained growth.
“Technology leaders are more emboldened to drive organisational change: their priorities are shifting as they take a more agile approach to IT strategy,” said Sean Kearns, Editorial Director for FT Focus. “Customer expectation, and businesses’ corresponding sense of urgency, is changing the dynamics of the C-suite.”
Strategy at speed
More than half (56%) of organisations who are embracing digital transformation say they are adopting an agile, flexible strategic approach that constantly evolves based on continual learning from the business and customers. And while technology leaders seek to drive growth through innovation, they expect to maintain almost the same proportion of effort enabling business model change over the next three years, as they balance the need for growth with the need to operate existing IT systems.
The new C-suite
More than two-thirds (68%) of global respondents agree that digital transformation has strengthened collaboration across the C-suite leaders when it comes to developing new products and services. Yet, 47% of business leaders say digital transformation blurs the lines of roles and responsibilities. This doesn’t mean that all leaders are necessarily aligned on business priorities or technology strategy. CIOs and CFOs are seen as the least aligned, with only 23% of UK respondents saying the two functions are in deep alignment, compared to a global average of 30%. These new dynamics are creating tension – especially between finance and the IT leadership.
The power of persuasion
This dynamic creates an enormous opportunity for CIOs to drive change. Survey findings highlight that CIOs are now considered the C-suite leader most effective at delivering change based on customer insight – even more so than the CMO or CEO. But to seize this opportunity and become the change-driver in their organizations, CIOs need to communicate effectively with the rest of the business and influence all stakeholders. Seventy-one percent of finance leaders say that the IT function needs to develop greater influencing skills in order to deliver the change their business requires. IT leaders need to develop those communication skills within their teams and ensure that they are equipped with the right blend of technical, business and influencing skills.
Decisions and where to make them
Companies are unsettled about how important technology decisions are made and evaluated. The cloud is crucial to meeting digital aspirations, but concerns over governance pose challenges for adoption and migration: only 30% of leaders feel confident in IT’s ability to govern cloud across the business. Agile delivers value in accelerating adoption of new technology and enabling digital transformation, but greater clarity is needed on tracking performance. Less than one fifth of companies (16%) have a clearly defined framework to map success across the business, and UK leaders are among the most likely to adopt agile without this framework (24%).
Leading with data
Real-time data gives IT leaders the power to assess their investments in new technology and make better decisions. According to 51% of leaders in our survey, the IT function is taking a more proactive stance on data leadership across the business compared with other functions, and they say that this approach is paying off. Of those who say that the IT function is taking a more proactive stance, 58% say that this approach is very effective in helping to meet growth targets.
Continuing unprecedented demand for new datacentres, fears around the shortage of skilled professionals, concerns about the future disruption of 5G, and the limited impact of Brexit are some of the key findings from the latest industry survey from Business Critical Solutions (BCS).
The Summer Report, now in its 10th year, is undertaken by independent research house IX Consulting, who capture the views of over 300 senior datacentre professionals across Europe, including owners, operators, developers, consultants and end users. It is commissioned by BCS, the specialist services provider to the digital infrastructure industry.
The report highlights the rising demand for datacentres, with almost two thirds of users exceeding 80% of their capacity today, 70% having increased capacity in the last six months and almost 60% planning to increase capacity next year. This demand is currently being driven by cloud computing, with over three quarters of respondents identifying 5G and Artificial Intelligence (AI) as disruptors for the future. With industry predictions that edge computing will have 10 times the impact of cloud computing in the future, half of respondents believe it will be the biggest driver of new datacentres. However, the survey found that the market remains confident that supply can be maintained, with over 90% of developers stating they have expanded their datacentre portfolio in the last six months.
With regards to supply, there are concerns that a shortage of sufficiently qualified professionals at the design and build stages will cause a bottleneck, with 64% of datacentre users and experts believing there is a lack of skilled design resource in the UK. AI and Machine Learning may help to mitigate these issues, with nearly two thirds of respondents confident that datacentres will utilise these technologies to simplify operations and drive efficiency.
The political uncertainty around Brexit continues to impact the sector with 78% of respondents believing that it will create an increase in demand for UK-based datacentres. However, the overall feeling was that the fundamentals underpinning the demand for datacentre space, such as the continued proliferation of technology-led services, outweighs these concerns and the European datacentre market will overcome any difficulties that occur.
Commenting on the report, James Hart, CEO at BCS, said: “As always this report makes for fascinating reading and I was encouraged by the overwhelming positive sentiment to forecast growth and the limited impact of Brexit. The fact that half of our respondents believe that edge computing will be the biggest driver of new datacentres tallies with our own convictions. We believe that the edge of the network will continue to be at the epicentre of innovation in the datacentre space and we are seeing a strong increase in the number of clients coming to us for help with the development of their edge strategy and rollouts.”
RiskIQ has released its annual “Evil Internet Minute” report today. The company tapped proprietary global intelligence and third-party research to analyse the volume of malicious activity on the internet, revealing that cybercriminals cost the global economy £2.3 million every minute last year, a total of £1.2 trillion.
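The headline per-minute figure follows directly from the annual total. As a quick sanity check (not part of the RiskIQ report itself), the arithmetic can be verified in a few lines:

```python
# Sanity check of the RiskIQ figures: £1.2 trillion per year
# should work out to roughly £2.3 million per internet minute.
annual_loss = 1.2e12               # £1.2 trillion, per the report
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a non-leap year

loss_per_minute = annual_loss / minutes_per_year
print(f"£{loss_per_minute:,.0f} lost per minute")  # → £2,283,105 lost per minute
```

The result of roughly £2.28 million per minute is consistent with the report's rounded figure of £2.3 million.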
The data shows that in a single internet minute, £2,300,000 is lost to cybercrime, with top companies paying £20 per minute due to security breaches. The report also catalogues the volume of other malicious activity taking place each minute.
"As the scale of the internet continues to proliferate, so does the threat landscape," said Lou Manousos, CEO of RiskIQ. "By compiling the vast numbers associated with cybercrime in the past year, we made the research more accessible by framing it in the context of an 'internet minute.' We are entering our third year defining the sheer scale of attacks that take place across the internet using the latest third-party research and our own global threat intelligence so that businesses can better understand what they're up against on the open web."
Tactics range from malvertising to phishing to supply chain attacks that target e-commerce, like the Magecart hacks that have increased by 20 percent in the last year. The motives of cybercriminals include monetary gain, large-scale reputational damage, political motivations, and espionage.
“Without greater awareness and an increased effort to implement necessary security controls, there will be more attacks using an ever-expanding range of technologies and strategies,” Manousos said. “With the recent explosion of web and browser-based threats, organisations should look to what can happen in a matter of minutes and evaluate their current security strategy. Businesses must realise that they are vulnerable beyond the firewall, all the way across the open internet."
Every day, there are countless headlines in the world’s media documenting the damage of cyber attacks. It’s almost always the financial and reputational damage inflicted upon organisations that’s focused upon, rather than the human cost of these attacks.
In order to find out more about this, Barracuda Networks carried out a study of 660 IT security professionals at organisations ranging from 100 to more than 5,000 employees across the globe. Of those, almost 20% of responses (124) came from EMEA.
The results make worrying reading for businesses, suggesting that employee productivity is under considerable threat from email security attacks. At a time when organisations are looking to maximise budgets and resources, we discovered that 40% of IT professionals in EMEA consider email security attacks to have a negative impact on employee productivity.
The human element
According to the survey, EMEA IT teams receive more suspicious emails than the global average, with 7% receiving over 50 per day and a third (32%) receiving between six and 50 per day.
Although 44% of respondents agreed that very few (less than 10%) of the suspicious emails reported turn out to be fraudulent, the time taken to identify and respond to email reports on this scale is taking its toll on IT teams’ productivity. So much so that the vast majority (81%) admitted spending over 30 minutes investigating and remediating each email attack, while 47% spend over an hour per attack.
It’s clear email attack management has created a significant overhead for EMEA organisations. Without the correct automated incident response tools in place to alleviate the stress and complexity of email attacks, the manual investigation and resolution time can only be expected to rise.
A rising concern outside of the workplace
Outside of the office, these attacks are also affecting the well-being of IT professionals at home. Over a third (38%) blame email attacks for increased stress at work, with senior IT leaders most likely to suffer this impact.
The same number (38%) admit to worrying about email attacks outside of work hours with 16% having to cancel personal plans due to email attacks. Additional stress comes from the potential reputation damage that comes from successful attacks, which 32% admit is a concern.
This stress is also reinforced by respondents’ lack of faith in their organisation’s security. Over half (52%) of EMEA respondents claim that their organisation’s security is unlikely to have improved in the last year, compared to the global average of 63%. The global results also identified EMEA as the region most likely to fall victim to spear phishing attacks, with 48% of EMEA organisations falling victim to spear phishing in the past twelve months.
The results found that the impact of spear phishing attacks on the reputation of organisations in EMEA is much higher compared to other regions; 39% of EMEA respondents reported damage to the reputation over the past year, compared to the global average of 27%.
Combine this with the fact that EMEA respondents believe their investment in dedicated spear-phishing protection and automated incident response is lagging behind the rest of the world, and we begin to see the worrying position EMEA IT teams find themselves in.
But why is this the case, especially when IT professionals know that successful security requires a combination of innovative technology and effective training?
A lagging region
Firstly, EMEA budgets are increasing at a much slower rate than the rest of the world. This could be attributed to the pattern of spending on email security, which shows that over half (54%) of EMEA organisations have not changed their spending over the past year, versus a global average of 45%, while only 39% have increased their spending (compared to 48% worldwide).
In addition to this, most organisations also lack adequate security awareness training, with almost a quarter (23%) of EMEA respondents admitting that they have never received email attack training, compared to the global average of 17%. Less than a quarter (21%) of EMEA respondents had received email security training in the past three months. For context, in the US this figure was almost double, at 39%.
Across the board, EMEA IT security teams are lagging behind their global peers when it comes to email security, which is having a direct impact on the productivity and stress levels of their employees. Be it the right tools, the right training or more, it’s clear EMEA organisations have far to go to bridge the gap and turn their employees into an effective line of defence as part of a wider holistic email protection strategy.
Link11, a leader in cloud-based anti-DDoS protection, has published its DDoS statistics for Q2 2019. The data shows that the quarter saw a massive 97% year-on-year increase in average attack bandwidth, up from 3.3Gbps in Q2 2018 to 6.6Gbps in Q2 2019.
These attacks are easily capable of overloading many companies’ broadband connections. There are several DDoS-for-hire services offering attacks between 10 and 100 Gbps for a modest fee. Currently, one DDoS provider is offering free DDoS attacks of up to 200 Mbps bandwidth for a duration of five minutes.
The maximum attack volumes seen by Link11 between April and June 2019 also increased by 25% year-on-year, to 195Gbps from 156Gbps in Q2 2018. In addition, 19 more high-volume attacks with bandwidths over 100 Gbps were registered in Q2 2019.
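As a quick sanity check, these year-on-year growth rates can be recomputed from the quoted bandwidths. Note that the rounded averages (3.3 and 6.6 Gbps) imply a 100% rise, so the reported 97% presumably reflects unrounded data; a minimal Python sketch:

```python
# Year-on-year growth, recomputed from the bandwidth figures in the article.
def yoy_growth(old, new):
    """Year-on-year growth as a percentage."""
    return (new - old) / old * 100

# Average attack bandwidth (Gbps): the rounded values imply a doubling.
print(f"average: {yoy_growth(3.3, 6.6):.0f}%")
# Peak attack volume (Gbps): matches the 25% increase reported.
print(f"peak:    {yoy_growth(156, 195):.0f}%")
```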
Rolf Gierhard, Vice President Marketing at Link11 said: "Too many companies still have the wrong idea when it comes to the threat posed by DDoS attacks. Our data shows that the gap between attack volumes, and the capability of corporate IT infrastructures to withstand them, is widening from quarter to quarter. Given the scale of the threat that organizations are facing, and the fact that the attacks are deliberately aimed at causing maximum disruption, it’s clear that businesses need to deploy advanced techniques to protect themselves against DDoS exploits."
Increasing complexity of attacks
Multi-vector attacks posed an additional threat in Q2 2019, with a significant increase in complex attack patterns. The proportion of multi-vector attacks grew from 45% in Q2 2018 to 63% in the second quarter of 2019. Attackers most frequently combined three vectors (47%), followed by two vectors (35%) and four vectors (15%). The maximum number of attack vectors seen was seven.
Further findings from Link11’s Q2 DDoS statistics include:
IBM Security has published the results of its annual study examining the financial impact of data breaches on organizations.
According to the report, the cost of a data breach has risen 12% over the past 5 years and now costs $3.92 million on average. These rising expenses are representative of the multiyear financial impact of breaches, increased regulation and the complex process of resolving criminal attacks.
The financial consequences of a data breach can be particularly acute for small and midsize businesses. In the study, companies with less than 500 employees suffered losses of more than $2.5 million on average – a potentially crippling amount for small businesses, which typically earn $50 million or less in annual revenue.
For the first time this year, the report also examined the longtail financial impact of a data breach, finding that the effects of a data breach are felt for years. While an average of 67% of data breach costs were realized within the first year after a breach, 22% accrued in the second year and another 11% accumulated more than two years after a breach. The longtail costs were higher in the second and third years for organizations in highly-regulated environments, such as healthcare, financial services, energy and pharmaceuticals.
"Cybercrime represents big money for cybercriminals, and unfortunately that equates to significant losses for businesses," said Wendi Whitmore, Global Lead for IBM X-Force Incident Response and Intelligence Services. "With organizations facing the loss or theft of over 11.7 billion records in the past 3 years alone, companies need to be aware of the full financial impact that a data breach can have on their bottom line –and focus on how they can reduce these costs."
Sponsored by IBM Security and conducted by the Ponemon Institute, the annual Cost of a Data Breach Report is based on in-depth interviews with more than 500 companies around the world that suffered a breach over the past year. The analysis takes into account hundreds of cost factors, from legal, regulatory and technical activities to loss of brand equity, customers, and employee productivity. Some of the top findings from this year's report include:
Malicious Breaches Pose a Growing Threat; Accidental Breaches Still Common
The study found that data breaches which originated from a malicious cyberattack were not only the most common root cause of a breach, but also the most expensive.
Malicious data breaches cost companies in the study $4.45 million on average – over $1 million more than those originating from accidental causes such as system glitches and human error. These breaches are a growing threat, as the percentage of malicious or criminal attacks as the root cause of data breaches in the report crept up from 42% to 51% over the past six years of the study (a 21% relative increase).
That said, inadvertent breaches from human error and system glitches were still the cause of nearly half (49%) of the data breaches in the report, costing companies $3.50 million and $3.24 million respectively. These breaches from human and machine error represent an opportunity for improvement, which can be addressed through security awareness training for staff, technology investments, and testing services to identify accidental breaches early on. One particular area of concern is the misconfiguration of cloud servers, which contributed to the exposure of 990 million records in 2018, representing 43% of all lost records for the year according to the IBM X-Force Threat Intelligence Index.
Breach Response Remains Biggest Cost Saver
For the past 14 years, the Ponemon Institute has examined factors that increase or reduce the cost of a breach and has found that the speed and efficiency at which a company responds to a breach has a significant impact on the overall cost.
This year's report found that the average lifecycle of a breach was 279 days with companies taking 206 days to first identify a breach after it occurs and an additional 73 days to contain the breach. However, companies in the study who were able to detect and contain a breach in less than 200 days spent $1.2 million less on the total cost of a breach.
A focus on incident response can help reduce the time it takes companies to respond, and the study found that these measures also had a direct correlation with overall costs. Having an incident response team in place and extensive testing of incident response plans were two of the top three greatest cost saving factors examined in the study. Companies that had both of these measures in place had $1.23 million less total costs for a data breach on average than those that had neither measure in place ($3.51 million vs. $4.74 million).
Additional factors impacting the cost of a breach for companies in the study included:
Regional and Industry Trends
The study also examined the cost of data breaches in different industries and regions, finding that data breaches in the U.S. are vastly more expensive – costing $8.19 million, or more than double the average for worldwide companies in the study. Costs for data breaches in the U.S. increased by 130% over the past 14 years of the study; up from $3.54 million in the 2006 study.
Additionally, organizations in the Middle East reported the highest average number of breached records, with nearly 40,000 breached records per incident (compared to the global average of around 25,500).
For the 9th year in a row, healthcare organizations in the study had the highest costs associated with data breaches. The average cost of a breach in the healthcare industry was nearly $6.5 million - over 60% higher than the cross-industry average.
According to a new global survey from CyberArk, 50 percent of organizations believe attackers can infiltrate their networks each time they try. As organizations increase investments in automation and agility, a general lack of awareness about the existence of privileged credentials – across DevOps, robotic process automation (RPA) and in the cloud – is compounding risk.
According to the CyberArk Global Advanced Threat Landscape 2019 Report, less than half of organizations have a privileged access security strategy in place for DevOps, IoT, RPA and other technologies that are foundational to digital initiatives. This creates a perfect opportunity for attackers to exploit legitimate privileged access to move laterally across a network to conduct reconnaissance and progress their mission.
Preventing this lateral movement is a key reason why organizations are mapping security investments against key mitigation points along the cyber kill chain, with 28 percent of total planned security spend over the next two years set to focus on stopping privilege escalation and lateral movement.
Proactive investments to reduce risk are critical given what this year’s survey respondents cite as their top threats:
Security Barriers to Digital Transformation and the Privilege Priority
The survey found that while organizations view privileged access security as a core component of an effective cybersecurity program, this understanding has not yet translated to action for protecting foundational digital transformation technologies.
“Organizations are showing increasing understanding of the importance of mitigation along the cyber kill chain and why preventing credential creep and lateral movement is critical to security,” said Adam Bosnian, executive vice president, global business development, CyberArk. “But this awareness must extend to consistently implementing proactive cybersecurity strategies across all modern infrastructure and applications, specifically reducing privilege-related risk in order to recognize tangible business value from digital transformation initiatives.”
Global Compliance Readiness
According to the survey, a surprising 41 percent of organizations would be willing to pay fines for non-compliance with major regulations, but would not change security policies even after experiencing a successful cyber attack. On the heels of more than $300M in General Data Protection Regulation (GDPR) fines being levied on global organizations for data breaches, this mindset is not sustainable.
The survey also examined the impact of major regulations around the world:
DataStax has published results from an IT Architecture Modernization Trends survey, showing that 99% of IT execs report challenges with architecture modernization and 98% report challenges with their corporate data architectures (data silos). Vendor lock-in (95%) was of particular concern among respondents.
The survey, conducted in conjunction with Dimensional Research and DataStax, takes the pulse of IT architecture modernization trends by investigating current experiences with and plans to reduce complexity and cost around architecture modernization. Respondents included more than 300 executives who work for companies of more than 5,000 employees.
The resulting report provided a number of key insights, including:
The report dives into all the key findings above, showing exactly how all respondents answered the questions and detailing the key drivers and challenges behind their architecture modernization efforts.
“What this report makes clear is that data is certainly the hardest part of architecture modernization,” said DataStax SVP and Chief Product Officer Robin Schumacher. “While the cloud makes so many things around architectures much easier, it also creates additional data-related challenges. DataStax helps enterprises face those challenges so that architecture modernization goes from a daunting task to one that makes it easier for them to out-innovate their competition.”
Organizations are concerned about their ability to keep up with a rapidly changing business landscape, driven in part by concerns about their own organizations’ lagging and misconceived digitalization strategies, according to Gartner, Inc.’s latest Emerging Risks Monitor Report.
In the second quarter of 2019, Gartner surveyed 133 senior executives across industries and geographies, and the results showed that “pace of change” had emerged as the top emerging risk in the 2Q19 Emerging Risk Monitor survey (see Table 1). Last quarter’s top emerging risk, “accelerating privacy regulation,” has now become an established risk after ranking on four previous emerging risk reports.
Closely linked to the concern around pace of change are two operational risks, including “lagging digitalization” and “digitalization misconceptions,” which Gartner experts said may be partly driving the top concern around pace of change and related threats from business model disruption.
“Among the top five emerging risks in the quarter’s survey, the linkages are clear,” said Matt Shinkman, managing vice president and risk practice leader in the Gartner audit and risk practice. “Organizations are concerned with the pace of business change and vulnerability to disruption. Part of the reason they may feel this risk so acutely is related concerns around their own operations, including digitalization strategies and an inadequate talent pipeline.”
Table 1. Top Five Risks by Overall Risk Score: 3Q18-2Q19
The risks ranked across these quarters include Accelerating Privacy Regulation, Pace of Change, Cyber Security Disclosure, and Artificial Intelligence (AI)/Robotics Skill Gap, with Pace of Change topping the 2Q19 list.
Source: Gartner (July 2019)
Seventy-one percent of respondents indicated that pace of change was a key risk facing their organizations. This risk was a consistent concern across industries, with particularly high ratings in healthcare, insurance and industrials, with executives in these industries indicating pace of change as a top emerging risk with a frequency of 70% or higher.
The concern around pace of change is driven by fears of being disrupted by nimbler competitors and a lack of clear avenues for growth. This risk can materialize through a rise in the number of new, disruptive competitors, a failure of the brand proposition to meet client needs or demands and executives not responding to macro trends and changing consumer needs.
Risk leaders have a role to play by inserting themselves early in the strategic planning process and working cross-functionally with strategy and finance teams to encourage positive risk taking, such as transformative measures for the business.
“Although the pace of business change is the top concern among organizations, we see a lack of tangible action among many organizations to address it,” said Mr. Shinkman. “Twenty-four percent of organizations report no action to address the impact of the pace of change, while only 28% are elevating this risk to the board.”
Digitalization Concerns Increase Vulnerabilities
Other emerging risks that may be contributing to executives’ concerns around pace of change are related to digitalization:
Worldwide IaaS Public Cloud Services market grew by over 30 percent in 2018
The worldwide infrastructure as a service (IaaS) market grew 31.3% in 2018 to total $32.4 billion, up from $24.7 billion in 2017, according to Gartner, Inc. Amazon was once again the No. 1 vendor in the IaaS market in 2018, followed by Microsoft, Alibaba, Google and IBM.
"Despite strong growth across the board, the cloud market’s consolidation favors the large and dominant providers, with smaller and niche providers losing share,” said Sid Nag, research vice president at Gartner. “This is an indication that scalability matters when it comes to the public cloud IaaS business. Only those providers who invest capital expenditure in building out data centers at scale across multiple regions will succeed and continue to capture market share. Offering rich feature functionality across the cloud technology stack will be the ticket to success, as well.”
In 2018, the top five IaaS providers accounted for nearly 77% of the global IaaS market, up from less than 73% in 2017. Market consolidation will continue through 2019, driven by the high rate of growth for the top providers, which experienced aggregate growth of 39% from 2017 to 2018 compared with the more modest growth of 11% for all other providers during the same period. “Consolidation will occur as organizations and developers look for standardized, broadly supported platforms for developing and hosting cloud applications,” said Mr. Nag.
Amazon continued to lead the worldwide IaaS market with an estimated $15.5 billion of revenue in 2018, up 27% from 2017 (see Table 1). The largest of the IaaS providers, Amazon accounts for nearly half of the total IaaS market. It continues to aggressively expand into new IT markets via new services, as well as acquisitions, growing its core cloud business.
Table 1. Worldwide IaaS Public Cloud Services Market Share, 2017-2018 (Millions of U.S. Dollars), showing revenue and market share (%) by vendor for each year
Source: Gartner (July 2019)
Microsoft secured the No. 2 position in the IaaS market with revenue surpassing $5 billion in 2018, up from $3.1 billion in 2017. Microsoft delivers its IaaS capabilities through its innovative and open Microsoft Azure offering, which continues to solidify its position as a leading IaaS provider.
The dominant IaaS provider in China, Alibaba Cloud, experienced the strongest growth among the leading vendors, growing 92.6% in 2018. The company has built an ecosystem consisting of managed service providers (MSPs) and independent software vendors (ISVs). Its success last year was driven by aggressive R&D investment in its portfolio of offerings, especially compared with its hyperscale provider counterparts. Alibaba has the financial capability to continue this trend and invest in global expansion.
Google came in at the No. 4 spot, growing 60.2% in revenue from 2017. “Google’s cloud offering is something to keep an eye on with its new leadership focus on customers and shift toward becoming a more enterprise-geared offering,” said Mr. Nag.
“As the cloud business continues to gather momentum and hyperscale cloud providers consolidate the market, product managers at cloud MSPs must look at other ways to differentiate, such as focusing on vertical industries and getting certified in the hyperscale cloud provider partner programs in order to drive revenue,” said Mr. Nag.
Wireless technology plays a key role in today’s communications, and new forms of it will become central to emerging technologies including robots, drones, self-driving vehicles and new medical devices over the next five years. Gartner, Inc. has identified the top 10 wireless technology trends for enterprise architecture (EA) and technology innovation leaders.
“Business and IT leaders need to be aware of these technologies and trends now,” said Nick Jones, distinguished research vice president at Gartner. “Many areas of wireless innovation will involve immature technologies, such as 5G and millimeter wave, and may require skills that organizations currently don’t possess. EA and technology innovation leaders seeking to drive innovation and technology transformation should identify and pilot innovative and emerging wireless technologies to determine their potential and create an adoption roadmap.”
The top 10 wireless technology trends are:
1. Wi-Fi
Wi-Fi has been around a long time and will remain the primary high-performance networking technology for homes and offices through 2024. Beyond simple communications, Wi-Fi will find new roles — for example, in radar systems or as a component in two-factor authentication systems.
2. 5G Cellular
5G cellular systems are starting to be deployed in 2019 and 2020. The complete rollout will take five to eight years. In some cases, the technology may supplement Wi-Fi, as it is more cost-effective for high-speed data networking in large sites, such as ports, airports and factories. “5G is still immature, and initially, most network operators will focus on selling high-speed broadband. However, the 5G standard is evolving and future iterations will improve 5G in areas such as the Internet of Things (IoT) and low-latency applications,” Mr. Jones added.
3. Vehicle-to-Everything (V2X) Wireless
Both conventional and self-driving cars will need to communicate with each other, as well as with road infrastructure. This will be enabled by V2X wireless systems. In addition to exchanging information and status data, V2X can provide a multitude of other services, such as safety capabilities, navigation support and infotainment.
“V2X will eventually become a legal requirement for all new vehicles. But even before this happens, we expect to see some vehicles incorporating the necessary protocols,” said Mr. Jones. “However, those V2X systems that use cellular will need a 5G network to achieve their full potential.”
4. Long-Range Wireless Power
First-generation wireless power systems have not delivered the revolutionary user experience that manufacturers had hoped for. In terms of the user experience, the need to place devices on a specific charger point is only slightly better than charging via cable. However, several new technologies can charge devices at ranges of up to one meter or over a table or desk surface.
“Long-range wireless power could eventually eliminate power cables from desktop devices such as laptops, monitors and even kitchen appliances. This will allow for completely new designs of work and living spaces,” Mr. Jones said.
5. Low-Power Wide-Area (LPWA) Networks
LPWA networks provide low-bandwidth connectivity for IoT applications in a power-efficient way to support things that need a long battery life. They typically cover very large areas, such as cities or even entire countries. Current LPWA technologies include Narrowband IoT (NB-IoT), Long Term Evolution for Machines (LTE-M), LoRa and Sigfox. The modules are relatively inexpensive, so IoT manufacturers can use them to enable small, low-cost, battery-powered devices such as sensors and trackers.
6. Wireless Sensing
The absorption and reflection of wireless signals can be used for sensing purposes. Wireless sensing technology can be used, for example, as an indoor radar system for robots and drones. Virtual assistants can also use radar tracking to improve their performance when multiple people are speaking in the same room.
“Sensor data is the fuel of the IoT. Accordingly, new sensor technologies enable innovative types of applications and services,” Mr. Jones said. “Systems including wireless sensing will be integrated in a multitude of use cases, ranging from medical diagnostics to object recognition and smart home interaction.”
7. Enhanced Wireless Location Tracking
A key trend in the wireless domain is for wireless communication systems to sense the locations of devices connected to them. High-precision tracking to around one-meter accuracy will be enabled by the forthcoming IEEE 802.11az standard and is intended to be a feature of future 5G standards.
“Location is a key data point needed in various business areas, such as consumer marketing, supply chain and the IoT. For example, high-precision location tracking is essential for applications involving indoor robots and drones,” said Mr. Jones.
8. Millimeter Wave Wireless
Millimeter wave wireless technology operates at frequencies in the range of 30 to 300 gigahertz, with wavelengths in the range of 1 to 10 millimeters. The technology can be used by wireless systems such as Wi-Fi and 5G for short-range, high-bandwidth communications (for example, 4K and 8K video streaming).
9. Backscatter Networking
Backscatter networking technology can send data with very low power consumption. This feature makes it ideal for small networked devices. It will be particularly important in applications where an area is already saturated with wireless signals and there is a need for relatively simple IoT devices, such as sensors in smart homes and offices.
10. Software-Defined Radio (SDR)
SDR shifts the majority of the signal processing in a radio system away from chips and into software. This enables the radio to support more frequencies and protocols. The technology has been available for many years, but has never taken off as it is more expensive than dedicated chips. However, Gartner expects SDR to grow in popularity as new protocols emerge. As older protocols are rarely retired, SDR will enable a device to support legacy protocols, with new protocols simply being enabled via software upgrade.
AI projects set to double
Organizations that are working with artificial intelligence (AI) or machine learning (ML) have, on average, four AI/ML projects in place, according to a recent survey by Gartner, Inc. Of all respondents, 59% said that they have AI deployed today.
The Gartner “AI and ML Development Strategies” study was conducted via an online survey in December 2018 with 106 Gartner Research Circle Members – a Gartner-managed panel composed of IT and IT/business professionals. Participants were required to be knowledgeable about the business and technology aspects of ML or AI either currently deployed or in planning at their organizations.
“We see a substantial acceleration in AI adoption this year,” said Jim Hare, research vice president at Gartner. “The rising number of AI projects means that organizations may need to reorganize internally to make sure that AI projects are properly staffed and funded. It is a best practice to establish an AI Center of Excellence to distribute skills, obtain funding, set priorities and share best practices in the best possible way.”
Today, the average number of AI projects in place is four, but respondents expect to add six more projects in the next 12 months, and another 15 within the next three years (see Figure 1). This means that in 2022, those organizations expect to have an average of 35 AI or ML projects in place.
Source: Gartner (July 2019)
Forty percent of organizations named customer experience (CX) as their top motivator to use AI technology. While technologies such as chatbots or virtual personal assistants can be used to serve external clients, most organizations (56%) today use AI internally to support decision making and give recommendations to employees. “It is less about replacing human workers and more about augmenting and enabling them to make better decisions faster,” Mr. Hare said.
Automating tasks is the second most important project type — named by 20% of respondents as their top motivator. Examples of automation include tasks such as invoicing and contract validation in finance or automated screening and robotic interviews in HR.
The top challenges to adopting AI for respondents were a lack of skills (56%), understanding AI use cases (42%), and concerns with data scope or quality (34%). “Finding the right staff skills is a major concern whenever advanced technologies are involved,” said Mr. Hare. “Skill gaps can be addressed using service providers, partnering with universities, and establishing training programs for existing employees. However, establishing a solid data management foundation is not something that you can improvise. Reliable data quality is critical for delivering accurate insights, building trust and reducing bias. Data readiness must be a top concern for all AI projects.”
Measuring the Success of AI Projects
The survey showed that many organizations use efficiency as a target success measurement when they seek to measure a project’s merit. “Using efficiency targets as a way of showing value is more prevalent in organizations who say they are conservative or mainstream in their adoption profiles. Companies who say they’re aggressive in adoption strategies were much more likely instead to say they were seeking improvements in customer engagement,” said Whit Andrews, distinguished vice president, analyst at Gartner.
Worldwide IT spending is projected to total $3.74 trillion in 2019, an increase of 0.6% from 2018, according to the latest forecast by Gartner, Inc. This is slightly down from the previous quarter’s forecast of 1.1% growth.
“Despite uncertainty fueled by recession rumors, Brexit, trade wars and tariffs, we expect IT spending to remain flat in 2019,” said John-David Lovelock, research vice president at Gartner. “While there is great variation in growth rates at the country level, virtually all countries tracked by Gartner will see growth in 2019. Despite the ongoing tariff war, North America IT spending is forecast to grow 3.7% in 2019 and IT spending in China is expected to grow 2.8%.”
“Although an economic downturn is not the likely scenario for either 2019 or 2020, the risk is currently high enough to warrant preparation and planning. Technology general managers and product managers should plan out product mix and operational models that will optimally position product portfolios in a downturn should one occur,” said Mr. Lovelock.
The enterprise software market will experience the strongest growth in 2019, reaching $457 billion, up 9% from $419 billion in 2018 (see Table 1). CIOs are continuing to rebalance their technology portfolios, shifting investments from on-premises to off-premises capabilities.
Table 1. Worldwide IT Spending Forecast (Billions of U.S. Dollars), by segment, including Data Center Systems
Source: Gartner (July 2019)
As cloud becomes increasingly mainstream over the next few years, it will influence ever-greater portions of enterprise IT decisions, in particular system infrastructure. Prior to 2018, more of the cloud opportunity had been in application software and business process outsourcing. Over this forecast period it will expand to cover additional application software segments, including office suites, content services and collaboration services. “Spending in old technology segments, like data center, will only continue to be dropped,” said Mr. Lovelock.
Globally, consumer spending as a percentage of total spend is dropping every year in every region due to saturation and commoditization, especially with PCs, laptops and tablets. Cloud applications allow these devices to have an extended life, as less powerful equipment is needed to run new software. This is why the devices market will experience the steepest decline in 2019, down 4.3% to $682 billion.
“There are hardly any ‘new’ buyers in the devices market, meaning that the market is now being driven by replacements and upgrades,” said Mr. Lovelock. “Add in their extended lifetimes along with the introduction of smart home technologies and IoT, and consumer technology spending only continues to drop.”
A recent International Data Corporation (IDC) survey of global organizations that are already using artificial intelligence (AI) solutions found only 25% have developed an enterprise-wide AI strategy. At the same time, half the organizations surveyed see AI as a priority and two thirds are emphasizing an "AI First" culture.
"Organizations that embrace AI will drive better customer engagements and have accelerated rates of innovation, higher competitiveness, higher margins, and productive employees. Organizations worldwide must evaluate their vision and transform their people, processes, technology, and data readiness to unleash the power of AI and thrive in the digital era," said Ritu Jyoti, program vice president, Artificial Intelligence Strategies.
The primary drivers behind these organizations' AI initiatives were to improve productivity, business agility, and customer satisfaction via automation. Faster time to market with new products and services was another leading reason for implementing AI. The cost of AI solutions, a lack of skilled personnel, and bias in the data were identified as the primary factors holding back the implementation of AI technology in these organizations.
"For many organizations, the rapid rise of digital transformation has pushed AI to the top of the corporate agenda. However, as AI accelerates toward the mainstream, organizations will need to have an effective AI strategy aligned with business goals and innovative business models to thrive in the digital era," noted Jyoti.
Spending on customer experience (CX) was reported at $97 billion in 2018 and is expected to increase to $128 billion by 2022, growing at a healthy 7% five-year CAGR, according to the International Data Corporation (IDC) Worldwide Semiannual Customer Experience Spending Guide. The European industries spending the most on CX in 2019 will be banking, retail, and discrete manufacturing. Together, these verticals will absorb 33% of the European CX spend this year. Retail will also have the fastest growing spend on CX throughout 2022, outgrowing banking by 2021.
Customer care and support, digital marketing, and order fulfillment are the use cases with the highest CX spending today and will remain strong investment areas through 2022. Investing in CX represents a clear opportunity for industries to differentiate, implementing these use cases to mold public brand perception around the customer and to improve websites, social media interactions, and product and service promotions. Looking at longer-term opportunities, omni-channel content will be the fastest-growing CX use case by 2022, with European companies focusing on this space to build experience-delivery competency, leveraging investments in content and experience design to lower the cost of supporting new channels and ensure brand consistency. Omni-channel content reflects the core foundation of the future of CX: optimizing content across channels at every point in the customer journey to create a non-linear experience around the user.
Emerging technologies (such as AI, IoT, and AR/VR) and hyper-micro personalization are fueling investments in CX, together with rising customer expectations, intensified competition, ever-changing customer behaviors, and stronger demand for personalization. The innovations in CX are about centering the experience of a product or service around the user, approaching each customer as an individual in real time and moving away from segment-based approaches to customer engagement.
"Customer Experience is the top business priority for European companies in 2019," said Andrea Minonne, senior research analyst, IDC Customer Insight & Analysis in Europe. "Businesses are moving from traditional ways of reaching out to customers and are embracing more digitized and personalized approaches to delivering empathy where the focus is on constantly learning from customers. As a customer-facing industry, retail spend on CX is moving fast as retailers have fully understood how important it is to embed CX in their business strategy."
Worldwide spending on public cloud services and infrastructure will more than double over the 2019-2023 forecast period, according to the latest update to the International Data Corporation (IDC) Worldwide Semiannual Public Cloud Services Spending Guide. With a five-year compound annual growth rate (CAGR) of 22.3%, public cloud spending will grow from $229 billion in 2019 to nearly $500 billion in 2023.
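The growth arithmetic behind these headline figures is easy to check. As a quick sketch (the figures are IDC's, rounded; the small discrepancy below comes from that rounding):

```python
# CAGR sanity check for the public cloud forecast (figures rounded from the text).
# CAGR = (end / start) ** (1 / years) - 1, with 2019 -> 2023 giving 4 compounding years.
start, end, years = 229.0, 500.0, 4  # $bn

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~21.6%, close to the reported 22.3%

# Compounding the reported 22.3% forward from $229bn instead:
projected = start * (1 + 0.223) ** years
print(f"Projected 2023 spend: ${projected:.0f}bn")  # ~$512bn, i.e. "nearly $500 billion"
```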
"Adoption of public (shared) cloud services continues to grow rapidly as enterprises, especially in professional services, telecommunications, and retail, continue to shift from traditional application software to software as a service (SaaS) and from traditional infrastructure to infrastructure as a service (IaaS) to empower customer experience and operational-led digital transformation (DX) initiatives," said Eileen Smith, program director, Customer Insights and Analysis.
Software as a Service (SaaS) will be the largest category of cloud computing, capturing more than half of all public cloud spending throughout the forecast. SaaS spending, which comprises applications and system infrastructure software (SIS), will be dominated by applications purchases. The leading SaaS applications will be customer relationship management (CRM) and enterprise resource management (ERM). SIS spending will be led by purchases of security software and system and service management software.
Infrastructure as a Service (IaaS) will be the second largest category of public cloud spending throughout the forecast, followed by Platform as a Service (PaaS). IaaS spending, comprised of servers and storage devices, will also be the fastest growing category of cloud spending with a five-year CAGR of 32.0%. PaaS spending will grow nearly as fast (29.9% CAGR) led by purchases of data management software, application platforms, and integration and orchestration middleware.
Three industries – professional services, discrete manufacturing, and banking – will account for more than one third of all public cloud services spending throughout the forecast. While SaaS will be the leading category of investment for all industries, IaaS will see its share of spending increase significantly for industries that are building data and compute intensive services. For example, IaaS spending will represent more than 40% of public cloud services spending by the professional services industry in 2023 compared to less than 30% for most other industries. Professional services will also see the fastest growth in public cloud spending with a five-year CAGR of 25.6%.
On a geographic basis, the United States will be the largest public cloud services market, accounting for more than half the worldwide total through 2023. Western Europe will be the second largest market with nearly 20% of the worldwide total. China will experience the fastest growth in public cloud services spending over the five-year forecast period with a 49.1% CAGR. Latin America will also deliver strong public cloud spending growth with a 38.3% CAGR.
Very large businesses (more than 1000 employees) will account for more than half of all public cloud spending throughout the forecast, while medium-size businesses (100-499 employees) will deliver around 16% of the worldwide total. Small businesses (10-99 employees) will trail large businesses (500-999 employees) by a few percentage points while the spending share from small offices (1-9 employees) will be in the low single digits. All the company size categories except for very large businesses will experience spending growth greater than the overall market.
SD-WAN market to reach $5.25 billion in 2023
International Data Corporation (IDC) has published two new research reports on the fast-growing Software Defined Wide Area Network (SD-WAN) infrastructure market. This important segment of the enterprise networking market will grow at a 30.8% compound annual growth rate (CAGR) from 2018 to 2023 to reach $5.25 billion, according to IDC's SD-WAN Infrastructure Forecast. The IDC Market Shares report includes 2017 and 2018 revenues by vendor for SD-WAN infrastructure.
"SD-WAN continues to be one of the fastest-growing segments of the network infrastructure market, driven by a variety of factors. First, traditional enterprise WANs are increasingly not meeting the needs of today's modern digital businesses, especially as it relates to supporting SaaS apps and multi- and hybrid-cloud usage. Second, enterprises are interested in easier management of multiple connection types across their WAN to improve application performance and end-user experience," said Rohit Mehra, vice president, Network Infrastructure. "Combined with the rapid embrace of SD-WAN by leading communications service providers globally, these trends continue to drive deployments of SD-WAN, providing enterprises with dynamic management of hybrid WAN connections and the ability to guarantee high levels of quality of service on a per-application basis."
The SD-WAN infrastructure market continues to be highly competitive with sales increasing 64.9% in 2018 to $1.37 billion. Incumbent networking vendors have leveraged their technological strengths and installed bases in routing and WAN optimization sales to lead the market, while numerous start-ups remain active. IDC finds that Cisco holds the largest share of the SD-WAN infrastructure market, fueled by its extensive routing portfolio that is used in SD-WAN deployments, as well as its Meraki portfolio and its SD-WAN management platform powered by technology it acquired from Viptela in August 2017. VMware, with its SD-WAN service powered by VeloCloud (which VMware acquired in December 2017), holds the second largest market share in the SD-WAN infrastructure market, followed by Silver Peak, Nokia-Nuage, and Riverbed.
(This issue of Digitalisation World includes a major focus on Software-Defined-Networking).
The value of Europe's managed services contracts exceeded €2.7bn (£2.4bn) for the second quarter running, indicating a potential return to pre-2015 spending levels, according to the latest state-of-the-industry report from US-based researcher Information Services Group (ISG).
The research is based on the EMEA ISG Index, which measures commercial outsourcing contracts with annual contract value (ACV) of €4m or more. It shows the region's combined first-half ACV, including both managed services and as-a-service contracts, was up 12%, to €8.9bn.
Managed services, with two straight quarters exceeding €2.7bn in ACV, reached €5.7bn in the first half, up 10% against the softer 2018 period. Within managed services, IT outsourcing (ITO) rose 12%, to €4.7bn, while business process outsourcing (BPO) was up 1%, to €1.0bn. The second quarter was the third quarter in the last five in which managed services ACV exceeded €2.7bn.
With continuing strong demand for cloud-based solutions, as-a-service ACV climbed 17%, to €3.2bn, in the half. Infrastructure-as-a-Service (IaaS) surged 19%, to €2.4bn, while Software-as-a-Service (SaaS) rose 9%, to €846m.
Managed services ACV, meanwhile, dropped 1% in the second quarter, to €2.9bn, but the number of contract awards reached 204, up 10% on the prior year. This may indicate that the managed services market is reaching further into the SMB sector, with consequently smaller deals.
Steve Hall, partner and president of ISG, said: "For the past couple of quarters, we've been bracing for the impact of some of the macro risks that could affect the global economy — Brexit, tariffs and trade wars. But the talk about overall market growth has turned surprisingly positive. There continue to be recession concerns in European markets, especially the UK and Germany, but overall technology spend remains robust. With the Brexit deadline pushed farther out once again, uncertainty has become the new normal for many businesses in the UK, and companies seem to be making strategic adjustments."
Globally, managed services ACV was down 3% in the second quarter against a very strong 2018 period, so in uncertain times, managed services may be being seen as giving an element of control, with the ability to scale, both up and down.
ISG finds UK companies are continuing to exercise caution in their managed services investments, instead focusing spending on new technologies that will increase their agility and efficiency.
"Technology spending remains robust, and technology providers remain positive about tech spend in the near term," said Hall. "We are projecting 22% year-on-year revenue growth for the remainder of 2019 in the global as-a-service market. This takes into account a slightly more optimistic view of the SaaS segment and factors in some uncertainty in IaaS. In the managed services market, we also have an optimistic perspective and are raising our growth forecast to 3.5% through to the end of the year."
The Managed Services & Hosting Summits are firmly established as the leading managed services events for the channel. Now in its ninth year, the London Managed Services & Hosting Summit 2019, on September 19, aims to provide insights into how managed services continue to grow and change, as customer demands push suppliers into a strategic advisory role and the pressures for compliance and resilience impact the business model at a time of limited resources. Managed service providers, other channels and their suppliers can evolve new business models and relationships, but are looking for advice and support as well as strategic business guidance.
The Managed Services & Hosting Summits feature conference session presentations by major industry speakers and a range of sessions exploring both technical and sales/business issues.
Reflecting the transformational nature of the enterprise technology world which it serves, this year’s 10th edition of Angel Business Communications’ premier IT awards has a new name. The SVC Awards have become... the SDC Awards!
Ten years ago, SVC stood for Storage, Virtualisation and Channel – and the SVC Awards focused on these important pillars of the overall IT industry. Fast forward to 2019, and virtualisation has given way to software-defined, which, in turn, has become an important subset of digital transformation. Storage remains important, and the cloud has emerged as a major new approach to the creation and supply of IT products and services. Hence the decision to change one small letter in our awards; in doing so, we believe we've created a set of awards of much bigger significance to the IT industry.
The SDC (Storage, Digitalisation + Cloud) Awards – the new name for Angel Business Communications’ IT awards – are now firmly focused on recognising and rewarding success in the products and services that are the foundation for digital transformation!
The SDC Awards 2019 feature a number of categories, providing a wide range of options for organisations and individuals involved in the IT industry to participate.
Our editorial staff will validate entries only to ensure they have met the entry criteria outlined for each category. We will then announce the ‘shortlist’ to be voted on by the readers of the Digitalisation World stable of titles. Voting takes place in October and November. The selection of winners is based solely on the public votes received. The winners will be announced at a gala evening event at London’s Millennium Gloucester Hotel on 27 November 2019.
If you’d like to enter your company for one or more categories, you can do so at:
DW talks to Gunter Reiss, VP of Strategy at A10 Networks, about the promise of 5G – when and where it will be available, the application sweet spots and, as with all things IT-related, the security aspect.
1. Please can you provide some brief background on A10 Networks?
Founded in 2004, A10 Networks provides Reliable Security Always™, with a range of high-performance, hyperscale and AI-automated cyber security solutions that help organisations ensure that their business-critical applications and networks remain highly available, accelerated and secure.
CEO and co-founder Lee Chen’s aim when starting A10 Networks was to build the fastest, highest-performing and most scalable load-balancer in the industry, based on x86 Intel off-the-shelf hardware and a shared memory architecture. That was achieved with an Application Delivery Controller (ADC) providing Layer 4-7 load balancing for the largest telcos, cloud providers and enterprises across the globe.
2. What have been some of the company’s recent milestones?
A10 Networks now has more than 7,000 customers in 80 countries, including 10 of the top-10 telecom operators, 2 of the top-3 cloud providers, 5 of the top-10 media companies and 8 of the top-12 gaming companies. It works with more than 50 of the world’s top technology partners to deliver the best in application delivery, security, intelligent automation and analytics, all at hyperscale.
The company is focused on the performance, scalability and agility for the largest multi-cloud and 4G/5G networks leveraging built-in connected intelligence and analytics for smarter cybersecurity protection throughout mission-critical networks.
A10 Networks serves customers in both the service provider and enterprise space with security solutions including: the Thunder Convergent Firewall, which has won seven 5G security design wins with major service providers; high-performance, ML-based DDoS detection and mitigation with the Thunder Threat Protection System (TPS); SSL Insight SSL/TLS decryption; and a high-performance application delivery controller with deep insights and analytics via the A10 Harmony Controller.
Recent product introductions include:
· Industry-leading 500 Gbps Thunder DDoS Defense System
· A10 Networks shipped the leading 100 Gbps NFV solution for 5G application services and security
· Thunder ADC and Harmony Controller are now available in the Oracle Cloud Marketplace
3. And how does A10 Networks distinguish itself in what’s quite a busy market?
A10 Networks’ solutions are proven. The company leads the industry by offering AI-driven application and network security for the largest multi-cloud enterprises and 4G/5G networks, all at hyperscale. With built-in connected intelligence and analytics across its solutions, customers gain access to smarter and automated security protection throughout their mission-critical networks. Applied artificial intelligence, machine learning and predictive analytics ensure that applications and networks are protected against today’s threats and are well prepared for tomorrow’s threats.
A10 Networks provides a myriad of deployment options to fit IT teams’ preferences and performance requirements, ranging from appliances, virtual machines and bare metal to highly scalable Kubernetes containers, meeting the needs of any enterprise operating in a complex multi-cloud environment, or of service providers as they transition to 5G networks. With flexible licensing, customers pay only for the capacity of services they are using across their entire distributed data center and multi-cloud architecture.
4. Finally, can you give us a brief insight into the company’s product and technology portfolio?
A10 Networks provides a portfolio of security and application services that are essential to digital business today. Key to the product portfolio is our drive to increase security efficacy, automate complex processes and provide intelligence, all at hyperscale, ensuring businesses can be agile and operate efficiently. Cutting-edge technologies are key for our differentiation and often the reason we are chosen over traditional vendors.
For example, the A10 Networks Harmony Controller provides the intelligent automation, multi-cloud management and predictive analytics for our products, simplifying and improving operations. On the security side our Thunder TPS product line showcases our artificial intelligence via its machine learning capabilities that thwart known and unknown attack vectors automatically.
On the Kubernetes containers and micro-services side, our Secure Service Mesh solution is showcasing the move to agile, software-based solutions. In addition to these cutting-edge solutions, we have thousands of customers utilising our other core products: our flagship Thunder ADC (application delivery controller) for multi-cloud load balancing, Thunder CGN (carrier-grade networking) for IPv4 preservation and IPv6 migration, Thunder SSLi (SSL Insight) for outbound TLS/SSL decryption visibility, and our Thunder CFW (Convergent Firewall), which includes a secure web gateway and our 5G security solutions.
The 5G solutions are picking up momentum as we see mobile service providers increase their Gi-LAN and mobile-edge security and scale requirements in anticipation of the exponential increase in IoT and 4G and 5G devices.
5. Moving on to the promise of 5G, can you explain what the technology is, and what are the benefits it offers to users?
The established goal of 5G is a focus on enhanced mobile broadband, machine to machine type communication and ultra-reliable, low-latency communication – far beyond the current levels of performance and bandwidth available with 4G/LTE networks. The transition to 5G will bring about a whole new set of possible and exciting use cases for consumers and enterprises. In a recent survey commissioned by A10 Networks, smart cities, industrial automation, ultra-high-speed connectivity and connected vehicles were all cited as key use-case drivers for 5G.
6. And how does 5G differ from 4G and LTE etc.?
5G standards differ from 4G/LTE networks in latency, performance and bandwidth requirements. For instance, 4G today supports mobile broadband connectivity in the hundreds of Mbps, latencies of 60 – 90 milliseconds and tens of thousands of IoT devices per square kilometre. With 5G, we will reach speeds of up to 10 Gbps, latencies of 1 – 3 milliseconds and connect millions of IoT endpoints within a square kilometre. It is a massive improvement over 4G and will enable many new use cases for consumers and enterprises that we cannot even imagine at this point.
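To put those peak rates in perspective, here is a back-of-the-envelope sketch (a hypothetical 5 GB file; these are theoretical peaks, so real-world times will be longer):

```python
# Back-of-the-envelope download times at the peak rates quoted above.
# Hypothetical 5 GB file; theoretical peak rates, not real-world throughput.

def download_seconds(size_gb: float, rate_gbps: float) -> float:
    """Seconds to transfer size_gb gigabytes at rate_gbps gigabits per second."""
    return size_gb * 8 / rate_gbps  # 8 bits per byte

t_4g = download_seconds(5.0, 0.1)   # 4G peak: ~100 Mbps
t_5g = download_seconds(5.0, 10.0)  # 5G peak: ~10 Gbps

print(f"4G: {t_4g:.0f} s, 5G: {t_5g:.0f} s")  # 4G: 400 s, 5G: 4 s
```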
At the same time, 5G will also require more sophisticated security solutions, given the increased scale and latency requirements. As more and more devices, including IoT devices, access 5G networks, there’s greater potential for larger and more frequent DDoS attacks, for example.
7. 5G has been talked about for a while, and there are a handful of pilot projects, but when will the technology go mainstream?
5G will be more of a journey than a destination. Major commercial 5G projects have already been rolled out in more than 20 countries worldwide, from Korea, Japan and Australia to the Middle East and the US. South Korea was the first to pilot 5G, at the Winter Olympics in Pyeongchang in February 2018. Since then, LGU+, KT and SKT have commercially launched 5G services and are trailblazers in the industry. Korea Telecom (KT) has developed new consumer, industrial and enterprise 5G use cases such as mobile live broadcasting, virtual private cloud services for the enterprise, cloud-based VR and AR gaming, autonomously driven trucks and, in collaboration with Hyundai Heavy Industries, the world’s first 5G shipyard.
Having said that, many countries are still in the planning phase and are targeting commercial launches later this year and in 2020. The majority of networks will be a combination of 4G/LTE and 5G for the initial years, and it’s anticipated that 5G will become more ubiquitous in the next 4 to 6 years.
8. And what companies will be the driving force behind it – the telcos/mobile operators, the hyperscalers, general IT vendors?
Different from 4G, where technology providers worked closely with mobile operators, 5G is driven by an ecosystem of mobile operators, vertical industries, cloud providers, IoT providers and even governments and the standardisation bodies. It will take an entire ecosystem of partners to make 5G networks a success for operators, enterprises and subscribers alike.
9. Does this mean that 5G will mainly be developed, at least initially, for sweet spots, such as metropolitan areas, inside buildings and factories, rather than across a whole country?
According to our recent survey of operators, smart cities, industrial automation, ultra-high-speed connectivity and connected vehicles were all cited as key use-case drivers for 5G. Certainly, major operators are rolling out 5G services in cities to maximise the return on their investments and offer new services to various vertical industries and a larger number of consumers.
However, and as we see in the US, the new 5G fixed wireless home service is being deployed right now in urban and some rural areas. The cost advantages over laying fiber are enormous. Providing connectivity across rural communities that are currently underserved by broadband services will be a high priority for US operators. European countries where fiber penetration is low today will see a similar trend.
10. Talking of sweet spots, smart cities are an obvious beneficiary of 5G?
Smart cities have the potential to transform the consumer experience of things like driving, purchasing goods and getting consistent, reliable connectivity. Imagine safer driving as your connected car taps into traffic patterns and anticipates stopped or slowed traffic, allowing you to stop in time or change your route; or walking into the new Amazon store and making purchases without ever waiting in a line for a cashier.
Envision 5G-connected streetlights with CCTV cameras that will send live video streams to law enforcement and city control centers and leverage AI and cloud services to respond swiftly to public incidents. 5G will be instrumental in creating a safer urban way of living in the future.
11. And IoT applications will benefit significantly from 5G availability?
IoT devices are central to industrial, retail and agricultural automation. 5G networks will only expand the use of IoT devices from current levels. In particular, 5G will support millions of IoT devices within a square kilometre, not just a few thousand like 4G. And with this explosion of new devices accessing the network, the attack surface expands exponentially.
Mobile operators will have to upgrade their networks with the latest security tools like firewalls and distributed denial of service (DDoS) protection to ensure the network is protected from malware, intrusions and DDoS attacks so that 5G services can be delivered with zero interruption.
12. And connected vehicles are another 5G sweet spot?
Car manufacturers like BMW, Volvo and Mercedes have been exploring and testing new capabilities on 5G networks in collaboration with technology providers for several years. According to an independent 5G survey sponsored by A10 Networks in 2019, mobile operators expect the automotive industry to be the top industry disrupted by 5G. For example, rather than having a DVD player installed in the car as is currently the case, passengers can view video streaming services brought to them over a 5G network. Connected cars could also lead to improved traffic management and safety features.
Furthermore, driverless trucks have been tested by manufacturers in Korea and Sweden over the past few months and we will see commercial roll-outs of such service over the next few years in various parts of the world.
13. How will smart manufacturing benefit from 5G?
As noted above, the growing use of IoT devices combined with the 5G network will help drive new applications and uses in smart manufacturing, and this has the potential to enable manufacturers to drastically improve supply chain efficiencies, increase profits and evaluate value propositions, according to a report by Deloitte. China’s “Made in China 2025” (MIC 2025) policy commits huge investments to drive automation and intelligent manufacturing, and this initiative will coincide with, and will most certainly take advantage of, the transition to 5G. A recent study from Ericsson estimated that 5G-powered smart factories will create an £87bn market for telcos by 2026.
In other parts of the world, companies are already testing smart factories. In Nokia’s “factory of the future” in Oulu, Finland, it is manufacturing 5G base stations. Nokia calls it a “conscious factory”, using data analytics and a high level of automation to make the whole supply chain flexible and more easily adaptable to new product lines. Connectivity for this factory, interestingly, currently uses 4G/LTE, demonstrating that the potential for smart manufacturing with the conversion to 5G is massive.
In the UK, Worcester Bosch has created a 5G test bed to evaluate how 5G can help improve factory output, using IoT sensors and data analytics to predict equipment failures and prevent downtime. Manufacturer Yamazaki Mazak is also using 5G tests to evaluate scenarios where, for example, senior engineers can remotely guide onsite engineers through machine maintenance.
14. Any other obvious 5G winners in the early days – cloud applications, for example?
There is no doubt that cloud services players like Microsoft, Google, Apple, Netflix, Uber and others continue to test new applications and services leveraging the ultra-high-speed connectivity and low-latency advantages of 5G. With mobile operators starting to offer new cloud, media streaming and IoT services as well, it is quite obvious that cloud apps and services will be a highly competitive environment with plenty of room for innovation over the next decade.
Another vertical industry which will take advantage of 5G is retail. Wi-Fi and 4G never lived up to the promise of a more virtual retail experience. In contrast, combining 5G with augmented reality, virtual reality, artificial intelligence and new cloud services will change the way we purchase fashion and consumer goods or even cars forever.
15. Any thoughts on when 5G will become pervasive – i.e. right now there are plenty of areas in the UK and many other countries where getting 4G is something of a struggle?
It will take the next 5 to 8 years for 5G to become more pervasive around the globe. The roll-out will be faster than 4G’s, since the opportunity for mobile operators to build new revenue-generating services for a plethora of vertical industries, beyond ultra-high-speed broadband for consumers, is tremendous. The commercial roll-outs and plans for full coverage differ by country and continent. South Korea was one of the first countries to launch a commercial 5G network and has covered more than 85 cities today; full coverage is anticipated by 2022.
Vodafone has just announced that its first 5G services will be available in seven cities in early July, and covering the entire UK is expected to take at least another 4 to 5 years. In the US, the FCC chairman has just announced that he will support the merger of Sprint and T-Mobile US as long as the new mobile operator commits to providing 5G coverage to 99 percent of the US within the next six years.
16. What impact will 5G have on other aspects of IT and telecoms infrastructure – for example, 5G requires network virtualisation (NFV and SD-WAN)?
The transition to 5G requires that mobile operators move to a distributed telco cloud and network function virtualisation. All network functions will run on top of a software-defined infrastructure with zero-touch and end-to-end automation and service assurance. Essentially, mobile operators are flattening the infrastructure to build a more agile and elastic cloud network like Microsoft, Google or Facebook. This is the only way to reduce infrastructure and operational cost and support low-latency requirements of one to three milliseconds vital for use cases like autonomous-driven cars or remote surgeries. In addition, since 5G will play a key role in basically every single vertical industry, we will see enterprises building new use cases to gain an advantage over their competitors.
17. Presumably, 5G changes the rules when it comes to IT security?
Yes, as noted above, the transition to 5G, with its requirements for high bandwidth, low latency and virtualised infrastructure, raises the bar substantially: operators must fortify their networks against a massive proliferation of DDoS attacks entering the network through multiple entry points. Detection and mitigation of these attacks in less than one to three milliseconds is an absolute requirement, whereas today’s DDoS protection solutions detect and mitigate attacks within seconds or minutes. Hence, a DDoS attack on a 5G network is not just a nuisance but may, in fact, be a matter of life and death once life-critical services like remote medicine and driverless and connected cars are deployed.
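As an illustration only (this is not A10’s implementation; production systems use ML-based baselining at line rate in hardware, not Python), the basic idea of flagging a traffic source whose packet rate exceeds a threshold within a short detection window can be sketched as:

```python
# Minimal sketch of threshold-based DDoS rate detection. Illustrative only:
# real systems baseline traffic with machine learning and run at line rate.
from collections import deque

class RateDetector:
    """Flag a source whose packets-per-window count exceeds a threshold."""

    def __init__(self, window_ms: float = 3.0, threshold: int = 100):
        self.window_ms = window_ms  # detection budget, e.g. 5G's 1-3 ms
        self.threshold = threshold  # packets allowed per window
        self.timestamps = deque()   # arrival times (ms) within the window

    def packet(self, now_ms: float) -> bool:
        """Record one packet arrival; return True if the rate is anomalous."""
        self.timestamps.append(now_ms)
        # Evict arrivals that have aged out of the sliding window.
        while self.timestamps and now_ms - self.timestamps[0] > self.window_ms:
            self.timestamps.popleft()
        return len(self.timestamps) > self.threshold

# A burst of 200 packets inside 1 ms trips the detector.
det = RateDetector(window_ms=3.0, threshold=100)
flagged = any(det.packet(now_ms=i * 0.005) for i in range(200))
print(flagged)  # True
```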
18. What impact will this security aspect have, and how can end users prepare for this?
While operators need to protect their infrastructures from cyberattacks to ensure the availability and reliability of services, end users need to continue to use strong security practices to ensure their devices are not used to spread malicious attacks such as malware, and do not become weaponised for a DDoS attack.
19. 4G never quite lived up to the hype; why should 5G be any different?
4G was all about delivering mobile broadband at higher speeds – smartphones and apps, and video and audio streaming demanded faster speeds and better user experience for the consumer and enabled BYOD in the enterprise. From that perspective, it has served its purpose. But, as noted in a study done by Ericsson and Arthur D. Little in 2018, operators quickly realised that while usage increased dramatically, revenue per user did not – the overall CAGR was only 1.5 percent.
As mentioned above, 5G will not only be about mobile operators providing more broadband at higher speeds. It will drive whole new business models and industries to sell new services to the enterprise and to collaborate with vertical industries, cloud providers, IoT providers and even governments. It’s predicted that there is a $1.3 trillion global market opportunity for information, communications and technology providers by 2026 with a CAGR of 13.3 percent as noted in the same study done by Ericsson and Arthur D. Little in 2018.
20. Finally, what is A10 Networks’ role in the 5G space right now?
Capacity demands rise significantly in 5G networks due to substantial increases in concurrent sessions, extreme reliability requirements and higher connections per second. As a result, legacy Gi-LAN firewalls cannot handle the exponential increase in traffic. A10 Networks is uniquely positioned to help mobile operators fortify their networks against cyberattacks, including DDoS attacks, with its high-performance Thunder Convergent Firewall and DDoS protection solution, available as an appliance, virtual machine, bare metal, or running in Kubernetes containers. The solution outperforms competitors and helps customers deploy security at hyperscale within their new distributed telco cloud environments.
Data centres comprise a complex and changing collection of IT and communications equipment housed within an environment that’s also multifaceted and challenging. Yet a host enterprise’s viability often depends critically on its data centre’s uninterrupted availability as a resource. Accordingly, the centres’ managers must have sophisticated tools to give them visibility and control of equipment and environment status at all times, even when they’re remote from the equipment.
By Alex Emms, Operations Director at Kohler Uninterruptible Power
However, while these tools may be sophisticated, it’s not always necessary or even desirable to set up an entirely automated closed loop control environment. In Kohler Uninterruptible Power’s experience with UPS installations, for example, an element of manual intervention can be desirable if not essential.
In this article, Alex Emms, Operations Director at Kohler Uninterruptible Power, discusses the role of data centre systems management, and explains why one-way electronic communication is sometimes the best option.
What do we need to monitor?
To optimise a remote monitoring and control management strategy, we first need to identify the tactical and strategic benefits that we want. Examples of tactical information include reported battery temperature, voltage and resistance values; any excessive levels warn users of impending fault conditions, allowing corrective action to be taken. More strategic information would include reports on power consumption, where power is being consumed, and variations in processing load. This informs longer-term planning for the allocation and distribution of power within the data centre, and can provide opportunities for reducing wasted capacity.
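The tactical side of this can be sketched as a simple threshold check: a monitoring agent compares each reported battery value against an acceptable band and raises a warning when it drifts outside. The limit values below are hypothetical illustrations, not vendor specifications — real limits come from the battery and UPS manufacturer’s data.

```python
# Hypothetical acceptance bands for the tactical battery metrics
# discussed above; the numbers are assumed for illustration only.
LIMITS = {
    "temperature_c":   (15.0, 30.0),  # assumed VRLA comfort band
    "voltage_v":       (12.9, 13.8),  # assumed per-block float voltage
    "resistance_mohm": (0.0, 5.0),    # rising internal resistance signals ageing
}

def check_battery(readings):
    """Return a warning string for every reading outside its band."""
    warnings = []
    for metric, value in readings.items():
        lo, hi = LIMITS[metric]
        if not lo <= value <= hi:
            warnings.append(f"{metric}={value} outside [{lo}, {hi}]")
    return warnings

healthy = check_battery({"temperature_c": 22.0, "voltage_v": 13.5,
                         "resistance_mohm": 3.1})
failing = check_battery({"temperature_c": 41.0, "voltage_v": 13.5,
                         "resistance_mohm": 6.2})
```

In practice the same pattern extends to the strategic metrics too — the difference is only that power-consumption trends are evaluated over weeks rather than against an instantaneous band.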
Monitoring of environmental variables, particularly temperature and humidity, is essential to ensure that cooling strategies remain effective. It’s also important to monitor less obviously process-related factors, such as security and access, to protect the well-being of the UPS and other data centre equipment.
Why the data centre environment is so challenging
While reasons such as these make a management strategy desirable, achieving it isn’t necessarily straightforward; there isn’t always knowledge of the equipment currently deployed, or of its status. The data centre’s equipment population is usually the result of ad-hoc growth to meet developing demand, with older equipment being exchanged for newer, more powerful upgrades. Poorly-managed additions and normal employee turnover can erode knowledge of the equipment that’s installed; there can be ‘zombie servers’ that consume power and space while contributing little or nothing to the data centre’s productivity.
With the advent of virtualisation, processing loads as well as hardware become highly variable. They can rapidly change and switch location within the data centre as virtual machines are deployed to meet variations in user demand.
The role of DCIM
While these issues create barriers to understanding the status of a data centre and its equipment, overcoming them is essential. The consequences of an IT failure can be extremely serious, with interruptions to service, possible hardware damage, loss of data and impact on reputation. Apart from the risk of failures, a lack of visibility denies users the opportunity to improve power efficiency – an increasingly critical requirement for both commercial and social responsibility reasons.
Fortunately, this situation, though challenging, is widely recognised, and solutions, in the form of data centre infrastructure management (DCIM) systems, exist. These provide access to accurate, actionable data about a data centre’s current state and future needs; critically, they can also exchange information with building management systems to provide a more comprehensive, higher-level overview of data centre status.
Operational sustainability and its impact on data centre availability
The Uptime Institute (UI) is best-known for its development of the Tier Classification system, which allows stakeholders to efficiently and accurately align their data centre’s availability level with its business requirements. However, the UI recognises that the long term availability of a data centre infrastructure is not guaranteed by Tier level alone; it is also based on its operational sustainability. Like DCIM as described above, their operational sustainability concept extends to the data centre building as well as its ICT equipment, and defines the behaviours and risks, beyond Tier classification, that affect data centre uptime.
According to the UI, the three elements of operational sustainability, in order of decreasing impact on operations, are Management & Operations, Building Characteristics, and Site Location. Their Abnormal Incident Reports (AIR) database reveals that the leading causes of reported data centre outages are directly attributable to shortfalls in management, staff activities, and operations procedures.
This poses the question: how can you set up a remote management system to eliminate or minimise these problems of human behaviour? As we shall see, in Kohler Uninterruptible Power’s experience, the answer doesn’t necessarily lie in higher levels of automation.
Impact on UPS remote monitoring and management strategies
Monitoring and control of UPSs must be part of any DCIM strategy. This increasingly involves an element of remote communications, as many organisations’ data infrastructure now includes ‘edge’ micro data centres, so-called because they are located out at the edge of an enterprise, geographically distant from any operations centre.
These locations may be chosen to place the data centres close to the point at which data is generated, avoiding the need to send large volumes of raw data over long distances. However, sustainability and green energy can also be factors. The WindCORES project, for example, has deployed data centres inside wind turbine structures; these data centres are powered more than 90 percent by wind. With plenty of space inside many wind turbine towers for IT and infrastructure equipment, this sets a path for the low-emissions distributed data centres of the future.
Irrespective of the reason, the result is a proliferation of widely-distributed small or micro data centres, many of which are unmanned. A remote monitoring and control management system, which improves data centre reliability and efficiency through a comprehensive DCIM as described, may appear as an ideal solution in such circumstances. This also applies where a remote support resource is being used to monitor a larger data centre hub.
However, in Kohler Uninterruptible Power’s experience, this shouldn’t necessarily include automating the control aspect. Firstly, there can be a mistrust of two-way communications systems, and they are seen as a security risk in some organisations. There has been more than one instance of communications equipment manufacturers being blacklisted over concerns about data misuse and associated security risks.
Even without this concern, KUP’s experience has shown that if remote access and control of a UPS is too generally available, damage can be inflicted either through carelessness or malign intent. Better security can be achieved by using one-way communications solutions. The UPSs should remain closely monitored, but the reaction to a fault should be a phone call or email to alert an authorised technician located near the UPS. This makes it easier to limit access to the appropriate personnel.
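A minimal sketch of that one-way pattern: the monitor composes an outbound notification, and there is simply no inbound command path to attack. The site name, UPS identifier and recipient address below are illustrative, not taken from any real deployment.

```python
from email.message import EmailMessage

def build_fault_alert(site, ups_id, fault, recipient="oncall@example.com"):
    """Compose an outbound-only fault notification. Sending it (e.g. via
    smtplib through an outbound-only firewall rule) is the sole network
    action; no remote-control channel exists in this design."""
    msg = EmailMessage()
    msg["Subject"] = f"[UPS ALERT] {site}/{ups_id}: {fault}"
    msg["To"] = recipient
    msg.set_content(
        f"Fault '{fault}' reported by UPS {ups_id} at site {site}.\n"
        "Please dispatch an authorised technician; "
        "no remote control is available."
    )
    return msg

alert = build_fault_alert("Edge-07", "UPS-2", "battery over-temperature")
```

The security benefit falls out of the topology rather than the code: an attacker who compromises the mail path can at worst spoof or suppress alerts, not command the UPS.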
Nevertheless, once such a warning has been flagged, an appropriate response is essential. Technicians need to arrive on site, even if remote, within an agreed timeframe, and equipped with the training, documentation, equipment and parts needed to effect any repairs.
This means that, when evaluating potential UPS vendors, it’s essential not to look only at the system’s functionality, performance and reliability. A review of their service team is equally important. Does it have the geographical coverage needed, and can it meet the criteria given above?
Although not strictly part of a remote monitoring or control strategy, provision of an effective scheduled maintenance regime is also essential. By ensuring that the batteries and other UPS components are in top condition, UPS life will be extended. Additionally, dependence on any remote control strategy – however it’s implemented – will be reduced, along with exposure to failure.
In this article, we have seen how gaining accurate visibility and control of your data centre equipment can be challenging. This is because larger, hub data centres may have large quantities of equipment that’s poorly understood for one reason or another, while smaller, edge facilities are typically remotely located and unstaffed. Nevertheless, achieving visibility and control is essential, to spot latent problems, prevent disastrous failures, and optimise energy efficiency.
A popular answer is to deploy data centre infrastructure management (DCIM) systems, as these are designed to deliver the information that’s needed, in time to allow effective responses. By linking to building management systems (BMSs), users can obtain a holistic view of how the entire data centre is performing, to allow better-informed responses and strategic planning.
Kohler Uninterruptible Power recognises the critical requirement for a level of monitoring and control, whether it’s a full DCIM system or something simpler. However, based on their experience, they sound one note of caution; running a full monitoring and control system with two-way communications can pose a security risk. It may be better to deliberately build manual intervention into the control loop, to mitigate this issue.
The data centres that fulfil today’s digital demands can, in many instances, be placed into one of three categories: large hyperscale facilities at the centre of the network, where high data volumes and low cost of storage are provided at the expense of some loss in access speed and latency; regional data centres, generally speaking smaller than the former but with better latency figures; and finally, at the edge, smaller hyper-converged or micro data centres, closest to where data is generated, consumed and processed.
By Steven Carlini, VP, Innovation and Data Center, Secure Power Division, Schneider Electric.
At the edge of the network, hosting applications where latency is a critical factor, or where speed of deployment in response to changing business demand is essential, requires careful consideration. Inevitably, as technology improves and innovative use cases place more strain on the critical infrastructure, the speed, efficiency, reliability and security of applications at the edge must match those of the largest hyperscale facilities.
The move to the edge
Today there are many instances of edge use cases; they include healthcare providers, retail businesses and industrial applications. Several factors are, of course, driving the adoption of edge computing solutions: the Internet of Things (IoT), which is attaching network connectivity to more and more everyday items, will unleash a torrent of additional data traffic onto an already clogged network.
Another widely predicted phenomenon is the advent of the driverless car. If such a vision is to be realised it will require ultra-fast, ultra-reliable data centres throughout the road network. For safety reasons, if for nothing else, decisions taken at these edge data centres will have to be taken so quickly that the time taken to traverse the network even as far as a metropolitan or regional data centre will be too much.
Finally, edge will be the key enabler of 5G, which means the closer the loop between data generated by an IoT device and the data centre where it is processed, the better for traffic management - whether on the network or on the road.
However, a host of underlying technology developments is necessary to facilitate the growth of edge computing. Hardware components have to become more modular and standards based so that capacity can be added in the desired increments, and products from different vendors must work together seamlessly.
The ability to manage this growing number of facilities means that cloud-based Data Centre Infrastructure Management (DCIM) software must be used to monitor these distributed environments and alert service personnel quickly, should an intervention be necessary. Furthermore, the cost of operation must be kept to a minimum, meaning that data centres, and the systems contained within them, must be as efficient as possible in terms of power consumption.
The role of Liquid Cooling
Other than the server, storage and networking equipment, the biggest concern for any data centre is the cooling effort required to keep the aforementioned equipment functioning. Modern facilities have paid particular attention to controlling airflows, whilst arranging aisle configurations and containment structures to maximise the efficiency of the cooling cycle. By eliminating hot spots and ensuring a separation between hot and cool air streams, it is possible to permit the ambient temperature of a data centre to rise, thereby reducing the general cooling requirement and allowing for more efficient and lower cost of operation.
Recent requirements for advanced processing power and server density, however, have seen the re-emergence of liquid cooling, driven in part by the increasing processing demands of today’s compute-intensive applications.
Artificial Intelligence (AI) is being deployed in support of real business and consumer applications, but because AI is so compute-heavy, many IT hardware architects are using graphical processing units (GPUs) as core processors or as supplemental processing. The heat profile of many GPU-based servers is double that of traditional servers, with a Thermal Design Point (TDP) of 300W versus 150W, which, in turn, is bringing about a renaissance of sorts.
Liquid Cooling approaches
By today’s standards there are two basic approaches to liquid cooling: direct liquid cooling (DLC) and total liquid cooling (TLC). The first involves placing a small, fully sealed heat sink comprising a metal plate full of cool liquid on top of the server board or chip that needs cooling. As the liquid absorbs heat from the processing element, tubes connected to the plate transfer the liquid outside to a cooler that rejects the heat outdoors and routes the cooled fluid back to the heat sink. With this approach, it’s possible to absorb about 40% to 60% of the heat generated by a server.
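The plumbing implied by those percentages is surprisingly modest. As a back-of-envelope check — all figures below are assumptions for illustration, not vendor data — the coolant flow needed to carry away half the heat of a 300 W server at a 10 K temperature rise follows from Q = ṁ·c_p·ΔT:

```python
def required_flow_l_per_min(heat_w, delta_t_k, cp_j_per_kg_k=4186.0,
                            density_kg_per_m3=1000.0):
    """Coolant flow from Q = m_dot * c_p * dT, converted to litres/minute.
    Defaults approximate water; all inputs here are illustrative."""
    m_dot_kg_s = heat_w / (cp_j_per_kg_k * delta_t_k)   # mass flow, kg/s
    return m_dot_kg_s / density_kg_per_m3 * 1000.0 * 60.0

# Assume 50% of a 300 W device is captured by the cold plate,
# with a 10 K coolant temperature rise across it
flow = required_flow_l_per_min(heat_w=150.0, delta_t_k=10.0)
```

The result is a fraction of a litre per minute — a trickle, which is why DLC tubing can be thin and why water’s high specific heat makes it so effective compared with moving air.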
The TLC approach involves no air-cooled components. Instead, the server is completely immersed in a dielectric fluid or mineral oil solution that absorbs heat. As much as an entire IT rack of servers can be sealed and contained in a tub full of fluid with network and power cabling hanging from rails above. All the heat generated by the servers is absorbed into the fluid and, once again, the fluid is continually pumped away to be cooled and returned.
Could such technologies be deployed at the edge? There is no doubt that some AI-based edge applications will require the same intensive computing power as that delivered by GPUs, which might make the adoption of liquid cooling attractive. However, the immersion approach in particular complicates maintenance tasks, as boards must be removed from the fluid and left to dry before service or replacement, which might militate against its use in data centres without permanent on-site personnel.
On the other hand, both TLC and DLC techniques make the compute environment far less susceptible to fluctuations in humidity and air quality, such as dust and particles. The sealed immersion technique is especially well suited to ruggedised applications such as harsh or outdoor environments: because the compute environment is fully sealed, there’s no concern about dust, sand or other contaminants getting in.
Another advantage is that because of the reduced need for fans or air cooling, a liquid cooled server could survive far longer in the event of a power outage. Even without replenishing the liquid supply, servers could expel heat for up to an hour or so before the liquid would become too hot to cool the load. That is typically plenty of time to restore main power, shift to backup or perform a controlled shut down of the IT equipment.
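That "hour or so" can be sanity-checked with the same thermal arithmetic, this time using the fluid’s mass as a heat store. The tub volume, rack load and allowable temperature rise below are assumptions chosen for illustration, with typical mineral-oil properties as defaults:

```python
def ride_through_minutes(volume_l, load_w, delta_t_k,
                         cp_j_per_kg_k=1900.0, density_kg_per_m3=850.0):
    """Minutes until the fluid warms by delta_t_k while absorbing load_w.
    Defaults approximate mineral oil; all inputs are illustrative."""
    mass_kg = volume_l / 1000.0 * density_kg_per_m3
    seconds = mass_kg * cp_j_per_kg_k * delta_t_k / load_w
    return seconds / 60.0

# A 500-litre immersion tub absorbing a 10 kW rack load,
# allowed to warm by 30 K before cooling capacity is lost
t_min = ride_through_minutes(volume_l=500.0, load_w=10_000.0, delta_t_k=30.0)
```

Under these assumptions the tub rides through roughly 40 minutes without any pumping — comfortably in line with the article’s claim, and ample time to restore power or shut down cleanly.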
For unmanned edge data centres servicing autonomous cars, 5G or compute intensive AI software, the environmental advantages of liquid cooling are obvious and may well be something we see become more prominent in the critical infrastructure applications of the future.
DW talks to Matt Pullen, Managing Director, Europe at CyrusOne, about the company’s rapidly increasing data centre footprint across the globe, environmental issues, data centre design trends, intelligent automation. In fact, it seems we covered everything you’d ever want to know about trends within the data centre market!
1. Please can you provide some brief background on the company?
Founded in 2001, CyrusOne (NASDAQ: CONE) is a high-growth real estate investment trust (REIT) specialising in highly reliable enterprise-class, carrier-neutral data center properties. CyrusOne provides mission-critical data center facilities that protect and ensure the continued operation of IT infrastructure for approximately 1,000 customers, including more than 210 Fortune 1000 companies.
Headquartered in Dallas, Texas, CyrusOne operates more than 45 data center facilities across the United States, Europe, and Asia to provide customers with the flexibility and scale to match their specific IT growth needs. The company has sites in process across London, Frankfurt, Dublin and Amsterdam for a total prospective European footprint of nearly 500 megawatts.
2. And what have been the major company milestones to date?
In the past two years we have rapidly expanded our global footprint through a mixture of acquisition and strategic partnerships in key territories.
In October 2017, we formed a strategic partnership with GDS Holdings in China, which enables us to offer customers data center options in China’s top markets, including Beijing, Shanghai, Shenzhen, Guangzhou, and Chengdu.
We also signed an agreement with Brazilian data centre company ODATA in October 2018 which gave us a footprint in the Latin American market.
From a European perspective, the acquisition of Zenium Data Centers, which completed in August 2018, gave us an immediate footprint in Europe, with an existing data centre portfolio comprising four operating properties in London and Frankfurt.
We currently have development sites in process across London, Dublin, Amsterdam and Frankfurt; when combined with the existing Zenium portfolio, we anticipate a total prospective European footprint of nearly 250 MW by 2020. We also recently entered into a strategic agreement with Agriport in the Netherlands. Under the terms of the agreement, CyrusOne will have the option to purchase up to 33 hectares of land, in parcels as demand dictates, for a 270 MW master-planned multi-data center campus.
3. Briefly, can you summarise how CyrusOne aims to distinguish itself in what’s quite a crowded data centre colocation space?
We can build data centers faster and at a lower cost than anyone else, including all of the large cloud companies, as proven by our six-month 30 MW build in Northern Virginia.
We have 9 of the 10 largest cloud companies as customers. They are all challenged trying to forecast customer demand and in turn their capacity requirements. Our ability to deliver large amounts of capacity very quickly solves these issues for them.
4. Recently, CyrusOne announced that both its London data centres would be operating on 100% renewable energy. What’s the decision behind this?
Sustainability is high on everyone’s agenda, both within business and in society. As one of the largest energy users in the UK, the data centre industry is in a strong position to effect change in terms of how green energy is priced and made available. Our customers are expecting more and we’re in a great position to do something about it.
Anything we can do to maximise efficiency and resource usage and be a little kinder to the environment in the process is important to us and we’ll be looking to implement the renewable energy tariffs more broadly across our portfolio in the future.
5. And can we expect the ‘green’ data centre message to be taken more seriously into the future?
We’re way past the point where CSR was a tick-box exercise in an annual report. Most organisations are now expected, and in some cases legally required, to make efficiency and carbon reduction a priority within every aspect of their operations.
Within the data centre industry specifically, green energy tariffs have traditionally been priced at a premium, but as more users demand their energy from renewable sources, the rates we can negotiate are decreasing rapidly, making it easier to implement these changes.
New cooling technology is also helping the cause, with the indirect adiabatic air cooling we’ve deployed at several of our data centre facilities in Europe ensuring a low PUE and helping keep costs and consumption down.
Ultimately it is in our best interests to be as efficient as possible in our consumption of resources, since we can then pass these savings along to our customers.
6. A recent company hire suggests that CyrusOne is keen to grow its profile in the hyperscale and cloud markets?
It’s true that we have made a number of strategic hires as we continue to ramp up our expansion plans in key territories.
In the US, we recently announced the appointment of John McCloskey as our new Vice President of Hyperscale and Cloud Sales for CyrusOne. Based in Seattle, a key region for us, John will help to lead our growing portfolio of hyperscale cloud customers.
In Europe, we’ve also announced three new hires to add to our growing team, including a new area vice president, engineering solutions director, and business development manager. We’re moving at lightning speed in Europe, and the growth of our hyperscale and cloud business is a big part of that. Having the right local experience is a key part of our strategy to accelerate our growth further.
7. The company has also acquired more land in Silicon Valley?
We purchased eight acres of land to expand our data center footprint in Silicon Valley, which will be used to build our second facility in Santa Clara. When combined with the first Santa Clara facility, the campus will be capable of delivering more than 100MW of power capacity, which aligns with our overarching growth targets both across the US and in Europe.
8. Presumably, we can expect more real estate investments in the near future?
We do not specifically comment on future investments, but it is safe to say that we have no intention of standing still and watching the grass grow under our feet.
With data demands rapidly growing, and hyperscalers' appetites increasingly insatiable, our customers want partners who can match that pace.
If you look at how fast these web-based revenue generating companies are going, they absolutely want the product faster.
9. And CyrusOne data centres now offer customers an IBM Cloud Direct link?
We announced availability of IBM Cloud Direct Link within our Carrollton, Texas data center in January to support customers in the U.S. via CyrusOne’s National IX. This service provides customers with a secured, dedicated network connection from their own IT infrastructure to the IBM Cloud.
It’s also part of our ongoing approach to offer customers flexibility and the option to have an open, hybrid approach to develop, run and deploy applications across multicloud environments, and optimise their strategy, whatever their needs may be.
10. How important is it to have, and further develop, an ecosystem of connectivity and other technology partners?
A robust ecosystem is an important advantage for Enterprise organizations. As an example, our Aurora, Illinois data center campus is the financial industry’s preeminent data center campus in the world. CyrusOne’s Aurora campus is home to an ecosystem of financial service companies, asset managers, and traders all with the knowledge and reassurance that their daily operations are flowing through the nerve center of the financial markets. The combination of major trading platforms, multiple public cloud providers, traditional interconnection options, our new wireless tower, and significant onsite power and compute capacity has created a rapidly growing financial services data center hub in the region.
11. Focusing on data centre design trends, can you tell us a little bit about the importance of data centre location – right now and into the future?
You could look at this in a couple of different ways. If your basic premise is that you want very efficient data centres, then the logic is that you would want to put them somewhere where there is a cold climate and you can limit the amount of cooling. For certain applications like storage, it makes complete sense to have capacity in cooler climates.
In reality however, rules around data protection and latency issues will often play a governing role in determining the location of a data centre. When you look at how the hyperscalers work, they have to grow capacity in close proximity to the nodes from which they serve certain markets. They have specific parameters around this, which we have to adhere to in terms of where we offer capacity.
Our expansion into different territories is closely tied to the demands of our hyperscale customers who require partners capable of matching their growing capacity demands.
12. For example, is there ever a perfect data centre location, or is it always something of a compromise between the availability of land, power and a customer base?
The costs of building a data centre are highly significant, so you have to see a runway to revenue. Everything starts with the customer and their demand. You then have to strike the right balance in order to satisfy that demand by acquiring land and power, as well as building capacity within the parameters of the customers’ timeline requirements.
When a cloud provider enters a market like Frankfurt for example, they want to be able to operate quickly at scale. The overall trend we’re seeing is towards larger data centres.
The economics of building a data centre become far better the more you can continue to build out capacity over time. So the ideal data centre location typically comes down to being able to purchase land in an area where you are able to increase power delivery in order to fulfil medium and long-term requirements.
Enterprise cloud demand growth is fuelling the next phase of hyperscale growth, therefore the route of travel for data centre providers like CyrusOne is to acquire more land in countries where outsourcing by enterprises into the cloud is accelerating. From a European perspective right now, that means acquiring more land in Frankfurt, more in London, and more in North-Western Europe as a whole. Paris is interesting because it has been slower to catch on to outsourcing in the same way as Frankfurt or London. Only now are we starting to see that momentum shift in Paris.
13. In terms of data centre construction, modular seems to be the favourite approach right now?
If you look at the speed at which the big cloud companies and hyperscalers are growing, it’s no exaggeration to say that we simply can’t build these things fast enough. Every day, every hour matters, and the quicker we can get the product into the hands of our customers, the more revenue it means for everyone concerned.
We’ve adopted what we call a ‘Massively Modular’ approach to data centre construction, which enables us to commission a data centre in 12-16 weeks. That’s pretty much an industry record.
Somewhere between 70 and 75 percent of the design we keep as standard, with the rest left for innovations learned from previous constructions, acquisitions, or customer requirements.
14. But modular can mean one large building, with modular rooms built inside, or building one data centre at a time on a large site. Any thoughts on these different approaches?
Typically, what we do is build one large structure, and then we build out pods within that structure. Then we use the same generators, the same UPS systems, the same air handlers, the same PDUs across our portfolio. Having that rinse-and-repeat process is elegant.
15. And we can’t ignore the growing focus on the edge – how will that impact data centre location and design?
Edge computing will bring together data democratization and data gravity. Billions of end users can now freely access massive amounts of digital data to learn, invent and profit. This will require larger data centers and fast speed-to-market so the world’s leading organizations can keep up with customer demand.
Our critical differentiator – CyrusOne can provide large campus-style environments to house hyperscale compute nodes and AI compute nodes, offering the capacity to bring the data into a location that is near the compute.
16. Do you think colos will be able to own everything from large, central data centres through regional facilities down to micro, edge data centres, or will partnering need to happen?
Every region in the world presents different circumstances. Our mission is to surpass the aggressive speed-to-market demands of hyperscale cloud providers, as well as the expanding IT infrastructure requirements of the enterprise. CyrusOne provides the flexibility, reliability, security, and connectivity that foster business growth. Ultimately it is about offering a tailored, customer service-focused platform that adapts to the technologies of today and tomorrow.
17. Any thoughts on future power developments for the data centre – e.g. lithium-ion batteries?
With so much demand coming from cloud companies, who tend to use the capacity that the data centres offer, the cost of power has risen in importance.
Everyone is looking to reduce the cost of power and have sustainable power sources on tap. Solar is impractical because you would need acres and acres to satisfy demand. Battery power is certainly in the mix, but a viable solution seems a long way off since the costs are prohibitive.
Grid power is becoming an issue in a number of markets, so the case for gas generation is becoming more attractive. There are models which make this approach on a par with, or even cheaper than, what the grid can offer. And if you can have redundancy centralised on a gas generation power station, that means you don’t have to have generators on site.
The problem remains that cost savings are only possible if you have a customer with a very defined and substantial power need.
At some point the hyperscalers are going to need to build out capacity in more decentralised locations in certain countries. There is, after all, only so much land available in the major conurbations. Certainly they will have to address latency issues to achieve this, but this is not beyond their capabilities. As data centres move to these locations, I can see a case for data centres gathering more power through sources like waste-to-energy.
18. Similarly, any thoughts on the cooling technology front – liquid cooling seems to be making plenty of noise right now?
The cloud companies in particular are drawing significant amounts of power at much higher density than anything we’ve seen in the enterprise market. This in turn is fuelling a drive to make data centres significantly more efficient through simpler, more refined cooling system technology that ensures the power utilisation of these buildings does not become cost-prohibitive.
Right now, there is a huge and unsettled debate over the best technologies to deliver this level of efficiency. We deploy indirect air adiabatic cooling in some of our data centres.
Others are choosing to address this challenge through free-air cooling chillers, while others still are using non-water glycol-based systems. I’m expecting to see a further refinement of these technologies in 2019, as well as the beginnings of a convergence around a few preferred cooling options.
Water conservation has been a significant factor in how we design and build new data centres. Our data centres in the US use an air-cooled chiller technology with an integrated compressor and condenser that cool the closed loop of water. The only water consumed is the single initial fill of the pipework: the chilled water loop for a 4.5 MW data centre is filled once with less than 8,000 gallons, meaning the permanent water supply can be provided by a single tanker truck.
19. Higher density data centres, and taller data centres on small(er) footprints – likely to happen?
It’s not a question of whether or not this is going to happen; it is already happening. It’s simple economics: when there is high demand for land and limited supply, you are going to go up. Our Frankfurt II data centre facility is five storeys and offers over 2,200 watts of capacity per square metre.
Increasing server operating temperatures and higher power density requirements reduce the amount of energy needed to cool equipment, making facilities more energy- and cost-efficient.
20. How do you see intelligent automation impacting on data centre design – ie how does the data centre have to change to cope with the IT hardware required to support AI, IoT etc?
The ability to build faster, cheaper and higher quality than its peers in the data center industry puts CyrusOne in the driver’s seat. Its highly optimized supply chain enables it to get equipment, generators and building materials faster than the competition. Its modular design and tilt-up building process means it can replicate the building process in any geography. Because its optimized supply chain lowers building costs, CyrusOne can pass along these benefits to its customers.
21. And how do you see intelligent automation impacting on data centre operations and management?
Large global cloud players drive AI investment and innovation. Put simply, today’s cloud companies are the biggest investors in AI. Enterprises will move into the best locations to build hybrid-cloud and multi-cloud environments to serve AI’s demands for growth. Cost, complexity and capabilities will accelerate the transition, as enterprises search for power efficiency and scalability. Already today, industries make heavy investments in AI and deploy it in areas with massive data collections. CyrusOne sits at the center of this growth market.
23. We always ask our data centre interviewees – are you a glass half full or half empty character when it comes to power availability into the future?!
As an organisation, we definitely default to a glass half full outlook. And that extends to our ability to source power at appropriate levels to satisfy our growth ambitions.
24. Finally, what can we expect from CyrusOne over the next year or so?
We certainly don’t have any plans to slow down! If you look at where we are in terms of the data centre market, we are really only just beginning to scratch the surface of future demand for capacity.
Just look at the enterprise market in London: there’s approximately 260 million square feet of office space in central London. Typically, around 5 percent of that office footprint is taken up by servers and IT infrastructure. So that’s 13 million square feet of data centre space which will move to the cloud.
When you consider that the London data centre market currently occupies just 500,000 square feet, you start to appreciate why there’s room in the market for a new entrant that can build data centres very quickly.
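That back-of-the-envelope arithmetic is easy to check in a few lines (a quick sketch using only the figures quoted above):

```python
# Rough check of the London enterprise-market estimate quoted above.
office_space_sqft = 260_000_000       # central London office space
server_share = 0.05                   # ~5% of that footprint is servers/IT

cloud_addressable_sqft = office_space_sqft * server_share
print(f"{cloud_addressable_sqft / 1e6:.0f} million sq ft")

# Compare with the current London data centre market.
current_market_sqft = 500_000
print(f"{cloud_addressable_sqft / current_market_sqft:.0f}x today's market")
```

On these rough numbers, the addressable space is some 26 times the existing market, which is exactly the point being made.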
Digital transformation encompasses so much more than adopting technologies such as cloud and the Internet of Things. It now has the potential to reach across an enterprise organisation to its foundation: IT operations (ITOps). That’s why in 2019, artificial intelligence for IT operations (AIOps) is THE digital evolution for ITOps teams looking to reduce manual efforts, streamline operations, and contribute more strategic innovation to the business.
By Paul Cant, Vice President EMEA, BMC.
As more businesses embrace hybrid cloud environments and adopt leading-edge technologies, ITOps teams can be challenged to keep pace with the complexity and data generated across systems. With AIOps, ITOps becomes a strategic player for the business, freeing up skilled staffers to work on innovative projects while smart software keeps the alerts, logs, and data in check and working for the business.
How digital transformation is improving IT operations
What was once considered just a back-office fundamental now has the opportunity to shine in the business spotlight by shifting IT’s focus from chasing alerts to intelligently solving problems. Essentially, AIOps is the advanced approach to optimising the performance of infrastructure, applications, and services by applying artificial intelligence and machine learning to the volumes of data generated by everyday systems across on-premises and cloud environments.
More specifically, AIOps helps IT departments support the innovative digital experiences the business has created for customers and end users by reducing event noise, preventing problems from impacting customers, identifying and solving incidents more quickly, and accurately allocating the infrastructure resources needed to meet business demand. Digital transformation has improved IT operations and will continue to do so for the foreseeable future.
Today’s digital businesses require IT to keep pace with the technology needed to support ever-increasing customer demand, and to do that, enterprise IT organisations must embrace the promise of AIOps to help create systems that simplify processes. It’s important to remember, however, that businesses cannot deliver digital experiences on the front end without also putting the right tools and processes in place to digitally transform the back end.
The benefits of successfully applying AIOps
Experienced IT experts understand the time and energy spent on root cause analysis, and how long it can take to parse through logs and events to understand why an issue occurred in the first place. AIOps can cut the time it takes to identify the source of issues by 60% through event correlation and log analytics capabilities. When an alert occurs, AIOps will rank the events by their relationship to the initial alert, the timeline in question, and any anomalies captured by behavioural learning. By applying advanced analysis to operational metrics across infrastructure and applications, AIOps focuses on the true problem, saving IT teams time and energy better spent elsewhere.
If IT is to support today’s digital business, it must understand resource consumption on premises and in the cloud. Capacity management can be considered an art that only the uniquely talented can master. AIOps changes this. With the collected data, AIOps can use its behavioural learning, advanced analytics, and more to understand what resources are being used and when and, perhaps more importantly, what resources will be needed to support the apps and services most in demand by customers.
By using the intelligence embedded in AIOps, IT can more easily plan for future needs using correlation analysis between business drivers and resource utilisation metrics. The insights can also enable IT to allocate and schedule the resources needed to support new apps. IT can gain the intelligence to right-size resources, keeping costs down and applications performing well.
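The correlation analysis mentioned above can be illustrated in miniature (hypothetical figures; `pearson` here is a hand-rolled helper, not part of any AIOps product):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly figures: a business driver vs. resource utilisation.
orders_per_week = [1200, 1450, 1700, 2100, 2600]
cpu_utilisation = [38, 44, 52, 63, 77]          # percent

r = pearson(orders_per_week, cpu_utilisation)
print(f"r = {r:.2f}")
# A strong correlation (r close to 1) lets IT forecast capacity needs
# directly from the business forecast, and right-size resources early.
```

In practice an AIOps platform runs this kind of analysis continuously across many drivers and metrics, but the underlying statistic is no more exotic than this.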
AIOps will enable this digital evolution of ITOps teams: from being, at times, at the mercy of a complex distributed environment, to intelligently orchestrating infrastructure, applications, and services across hybrid cloud environments, aligning with the business and addressing customer needs on demand.
Savvy CIOs today recognise that they must apply AIOps to digitally transform their entire IT environment, supporting a smart enterprise that is ready to meet the needs of an ever more demanding digital market.
Gartner predicts that 25 billion connected things will be in use by 2021. This presents a huge opportunity for businesses to harness data, optimise their operations and deliver more personalised experiences to users.
By Martin Ewings, Director of Specialist Markets, Experis.
However, this new technology also opens up more vulnerabilities to potential cyber threats. According to Deloitte, 40% of professionals believe the greatest cyber security challenge their organisation faces is managing increasing amounts of data and connected devices. As a result, businesses need to be able to capitalise on the promise of the Internet of Things (IoT) without exposing themselves to dangers.
However, UK businesses are struggling to do this, due to the crippling cyber security skills shortage. This is according to our Industry Insiders report, which examines how the growth of IoT is impacting the cyber security jobs market.
Increased demand for cyber security skills
Our research shows that there were 13,214 cyber security roles advertised in Q4 2018 – up 10% year-on-year and 16.6% from the previous quarter. Average permanent salaries for cyber security specialists dipped 2% year-on-year to £58,557; yet the picture was much better for contractors, with average day rates climbing 19.6% from the previous year, to an average of £505. This suggests that businesses are prioritising short term fixes via contractors, over long-term solutions to their talent needs.
Demand for IoT talent set to soar
IoT is currently a much smaller jobs market, but demand for these roles rose 48.8% to 4,968 in Q4 2018 – up from 3,338 in the previous quarter. Both permanent salaries and contractor day rates increased year-on-year as well, by 1.5% and 4% respectively. The demand for IoT technology skills is building, and could be set to soar, as business demand for these technologies grows over the next few years. With the cyber security talent pool already stretched, businesses will need to explore creative solutions to attracting both skillsets, if they are to harness the power of IoT technology securely.
Job roles most needed
When it comes to the type of roles required, an analysis of job titles reveals that it is front line workers that are most in demand. In cyber security, there are more open vacancies for security engineers, consultants, architects and analysts than any other position. For IoT-related job postings, software engineers, technical architects, managers and testers are most sought after. This highlights how much emphasis is being put on actually being able to build and analyse, in both areas of technology.
For employers looking to be ready to adopt IoT technologies and ensure that they have the right blend of skills in their business, here are five tips to consider:
1. Get ahead of the curve
Whether it’s implementing IoT technologies or shoring up security, it’s imperative that organisations move quickly and decisively to get the skills they need in place. Hackers will exploit loopholes in the blink of an eye, while the market will leave dilatory businesses behind. It’s an unenviable position to be in, but having a flexible approach to acquiring the skills required will help organisations reach their desired state, faster.
2. Be creative in your approach to talent acquisition
Building teams with a mix of contractors and permanent staff will ensure businesses get the continuity they need long term, with the injection of experience and skills required immediately. The gig economy may be a relatively new term, but in IT contracting it has a long tradition. Organisations would be foolish to miss out on an efficient way of accessing talent quickly, provided they ensure they are IR35 compliant. They can also use contractors to ‘build’ talent, by mentoring, upskilling and cross-skilling permanent members of staff who are less knowledgeable in this area.
3. Offer continuous training and development
Internal development has long been a more cost-effective way of adding new skills than bringing in external candidates. While new staff are important to ensure teams do not become stagnant, the rate of change in cyber security and IoT makes it vital that teams have access to continual training opportunities to keep them up to date.
4. Consider offsite
To expand their pool of candidates, organisations should consider remote workers, both for contract and permanent roles. Our research found that, unsurprisingly, salaries and day rates for both cyber security and IoT professionals were generally lower in the rest of the UK than in London. For IoT, the gap was £9,000, while in cyber security the difference rose to almost £15,000. For organisations needing to upskill, considering offsite and remote workers could represent a more economical way of acquiring talent without sacrificing quality.
5. Trust the professionals
Despite the obvious cyber security threats, it could be argued that some organisations lack an understanding of the severity of what they could potentially face. While it is imperative that budgets are set and processes followed, senior management teams need to delegate the delivery of specialist functions, such as cyber security, to those with the knowledge and expertise to do so. Having invested significantly in these leadership roles, businesses must give CISOs and their peers the support and space to deliver the security strategy.
IoT offers huge opportunities for organisations – but only if they have the right cyber security foundations in place to take advantage of new innovations safely. By hiring the right talent, businesses will be better placed to fully protect their operations from malicious attacks. To do that, they need to have a broad perspective on how, who and which skills to hire, and how they can develop their existing staff to meet their current and future demands. By taking a blended approach to talent acquisition – tapping into the contractor market to build a hybrid team of permanent and temporary workers – organisations will be able to better protect themselves against cyber threats without hampering their IoT potential.
The August issue of Digitalisation World includes a major focus on Software-Defined Networks (SDN). How has the technology developed to date? Are all SDN solutions the same? What kind of uptake is there amongst end users? And how important is SDN as part of an overall digital transformation strategy?
The impact of SD-WAN adoption on digital transformation and corporate strategy
By Todd Kiehn, VP of product management at GTT.
Since the advent of the Internet, IT teams have used many different network management techniques in search of a way to achieve optimal use of their bandwidth and efficient data traffic routing. Truly dynamic approaches have proved elusive. The latest generation wide area networking (WAN) technology, SD-WAN, which is software-based, revolutionises the WAN just as virtualisation has revolutionised the server. The stars seem well aligned to accelerate the enterprise adoption of SD-WAN. Organisations are motivated by the need for digital transformation, the explosion of application traffic to the cloud and the need for homogeneous network performance at all sites.
SD-WAN has already been in use by the early adopters for several years, and it is swiftly moving to become a mainstream solution. Deployment rates are growing and accelerating. In a recent study by Frost & Sullivan,1 61% of the companies surveyed said they plan to replace their routers within one to two years with SD-WAN. The annual growth of the SD-WAN market is estimated at 40%, according to IDC.2 Though it is still in the maturation phase, SD-WAN offerings have already evolved well in adapting to new market expectations. So, is 2019 the year of SD-WAN?
Beyond Simply Reducing Costs
SD-WAN’s explosive growth is in part due to its ability to enable the use of a combination of different access types at a site to deliver greater uptime and more available bandwidth for less money than legacy network services such as MPLS.
SD-WAN can potentially generate triple-digit ROI on accelerated payback intervals, contributing cost savings in the form of reduced network downtime, optimised network access costs and resource savings from a managed service. It’s worth taking a closer look at SD-WAN’s advantages beyond cost reductions and the implications for enterprise IT teams. The deployment of SD-WAN technology should also be considered for the benefits it can bring for applications that are sensitive to speed, latency and responsiveness.
The revolution SD-WAN brings can be summed up in its ability to identify the nature of the data flows, to prioritise them and to deliver them on any site of the company with the same homogeneity of performance — all orchestrated with a unique management interface. The IT department can use SD-WAN to arbitrate much more seamlessly between the decentralisation and/or globalisation of its enterprise footprint and its desire to have traffic managed through a central monitoring setup.
Legacy WANs had the advantage of being simple to administer and orchestrate. The SD-WAN layer, while providing agility, efficiency and better performance, also brings some complexity, requiring new skills for the teams responsible for operating the network.
Take, for instance, the security of data streams and direct access to services in the cloud from remote sites, which require a change in network architecture, mesh and route selection. Security must be thought about from the outset of the decision to adopt SD-WAN, and it must be integrated into deployments, cloud access, virtual instances, container services and beyond.
With network virtualisation, network monitoring has also become increasingly difficult, requiring IT teams to go beyond equipment and connections, and study applications and network topologies in different layers. This is where global network operators have an advantage over their non-operator competitors, as they can easily provide end-to-end monitoring from their supervisory portal.
Multiple Actors From Different Horizons
This change is driving IT teams to abandon a do-it-yourself approach and to rely instead on outsourcing SD-WAN services. According to Frost & Sullivan, 80% of enterprises are choosing managed SD-WAN services, compared to 20% doing it themselves.3
Several types of providers offer SD-WAN, responding to the same problem with different approaches. Four general categories for SD-WAN service provider approaches are:
Each approach corresponds to different needs depending on the level of expertise of the enterprise’s IT teams.
The Gartner Magic Quadrant for WAN Edge Infrastructure of October 2018 lists 20 SD-WAN vendors. While this list is a good start, we can add at least as many other companies that offer SD-WAN integrated with other services — for instance, Unified Communications as a Service. Factoring in the emergence of new kinds of IT outsourcing and security companies, we are currently in a highly atomised SD-WAN market.
To choose the best service, a decision-maker has to pay attention to requirements for cloud connectivity, path performance, geographic reach and the service provider’s experience. Does the SD-WAN solution have direct connections to the cloud applications that the business uses? How exactly does the traffic get from point to point? Does the provider’s reach match the business’s footprint? Do the service providers have experience managing complex hybrid networks?
From a macro perspective, SD-WAN is well positioned to become a strategic enabler for companies.
CIOs are now part of the core executive decision-making hierarchy of a company, driven by digital transformation projects. According to Deloitte, 46% of large companies’ CIOs report directly to their CEOs.4 With the explosion of mobile tools and applications, companies need to reorganise. They must deploy the same application performance to all employees to enable them to work on any site using real-time, seamless collaboration tools. To do this, SD-WAN provides an agile solution.
Indeed, thanks to the centralised orchestration of the SD-WAN performance at all sites, it is possible to adjust the level of performance of the connectivity according to the workload, and to aggregate the different types of access (fiber, copper, Wi-Fi, 4G and beyond) site by site.
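That per-site arbitration can be caricatured in a few lines (a toy sketch, not any vendor’s control plane; the link metrics and policy thresholds are invented): the orchestrator steers each traffic class onto the best access link that currently satisfies its policy, and re-steers when a link degrades.

```python
# Toy SD-WAN path selection across aggregated access types.
links = {
    "fibre":  {"latency_ms": 12, "loss_pct": 0.0},
    "copper": {"latency_ms": 35, "loss_pct": 0.5},
    "4g":     {"latency_ms": 60, "loss_pct": 2.0},
}
policies = {
    "voice": {"max_latency_ms": 40, "max_loss_pct": 1.0},
    "bulk":  {"max_latency_ms": 200, "max_loss_pct": 5.0},
}

def choose_link(app):
    p = policies[app]
    eligible = [n for n, m in links.items()
                if m["latency_ms"] <= p["max_latency_ms"]
                and m["loss_pct"] <= p["max_loss_pct"]]
    # Among policy-compliant links, prefer the lowest latency.
    return min(eligible, key=lambda n: links[n]["latency_ms"])

print(choose_link("voice"))   # fibre, while it is healthy

# The link monitor detects a brown-out on the fibre circuit...
links["fibre"] = {"latency_ms": 80, "loss_pct": 3.0}
print(choose_link("voice"))   # voice traffic shifts to copper
```

A production SD-WAN measures latency, loss and jitter continuously and applies far richer policies, but the essence is this per-application, per-link decision made centrally and enforced at every site.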
Thus, this new technology not only impacts IT, but also impacts corporate strategy at the executive committee level. For example, SD-WAN could enable a new organisational structure and geographical team distribution, resulting in real estate savings. SD-WAN can allow easier M&A,5 faster rollout of new offices and branches, more reliable broadband in shops to facilitate more digitally enabled and immersive shopping experiences, and so on.
While enterprise networks are a complex ecosystem, made up of disparate connections (Internet, Ethernet, etc.), operators have the necessary information to support companies in their digital transformation with SD-WAN solutions. Expertise with edge applications like 5G and IoT can also offer them competitive and technical advantages.
The virtualisation of network technology that SD-WAN represents promises further upheaval to come. It used to be the norm for router equipment manufacturers to propose proprietary solutions. Over time, the telecommunications industry has evolved toward interoperability of equipment, in the hunt for increasingly meshed networks, reduced costs and increased capacity.
SD-WAN has so far enjoyed great success being delivered from proprietary vendor hardware. However, the next delivery model for SD-WAN, known as Universal Customer Premises Equipment (uCPE), breaks the link between hardware and software in an SD-WAN context. As these solutions become more capable and widespread, enterprises will be able to enjoy exceptional value and additional cost savings, thanks to increased flexibility for adding WAN optimisation, firewalls and more routing capabilities on a single hardware device.
We are only on the cusp of the SD-WAN revolution as the driver of network transformation for the cloud IT model underpinning the enterprise of the future.
Every DevOps team wants to do more with less, but sometimes technology, company culture, or both, can get in the way.
By Jason Lenny, Director of Product, CI/CD, GitLab.
With careful planning it is possible to improve the cost efficiency of DevOps, no matter where your organization is on the journey. Here are four key aspects of DevOps that can be done more frugally by thinking outside of the box.
Tackle cloud sprawl
DevOps teams love the cloud and with good reason. It’s simple to spin up instances, tap into limitless storage, and operate efficiently without costly on-premises investments. But in some ways it’s a little too easy to use the cloud, because without a firm grip on cloud usage, sprawl – and cost overruns – can happen in the blink of an eye. Monitoring all cloud activity – including software as a service (SaaS) accounts that can easily be set up and then never used – may help companies rein in costs. Automation can also come in handy: health technology company Substrakt Health set up an autoscaling CI/CD cluster that saved the company up to 90% on costs while running EC2 spot instances on Amazon Web Services.
The fewer tools the better
With so many DevOps tool choices available, it’s no wonder companies find themselves practically drowning in options. This can be a source of serious project cost overruns. Not only is there an upfront investment cost, but the tools have to be upgraded and maintained. Specialized tools can require hiring and retaining equally specialized developers, which also adds to the price tag. If ever there was a case for less is more it’s true for the toolchain.
That was certainly the experience at Goldman Sachs. The investment firm had created its own toolchain but wanted faster concurrent development than was possible with its homegrown solution. Settling on a single toolchain with end-to-end capability allowed the company to improve a key release cycle from once every 1-2 weeks to once every few minutes. The previous solution let the Goldman Sachs DevOps teams manage just two CI feature branch builds a day; now some teams are managing over 1,000.
Modernize legacy systems
While some organizations are fortunate enough to approach DevOps with a clean slate, the majority are forced to bring their legacy systems along for the ride. Not only is the process of converting legacy systems to a more streamlined DevOps approach time consuming, it’s also expensive and, in many cases, a distraction from the bigger goal of releasing quality software more quickly. Legacy systems also thwart modernization efforts simply because they’re often so large. With millions of lines of code it can be difficult if not impossible to break away from costly (and slow) manual testing. So what’s a company to do with its legacy systems?
Ask Media Group tackled this challenge with a thoughtful and deliberate approach that involved not just different technology choices but cultural changes and new ways to approach work. As a result, the company has been able to shift development from monoliths to microservices by moving to a container-based architecture.
Make it push button
Automation is perhaps the one area in DevOps where the costs of not doing it outweigh the (very real) costs of implementing it in the first place. That is particularly true when it comes to test automation.
GitLab’s recent 2019 Global Developer Report: DevSecOps found developers, security professionals, and operations teams agree by a large margin that software testing is the single factor most likely to hold up development.
Automation is the answer, and Free Code Camp, which built a CI testing pipeline to catch more bugs, offered insight on why it works. The company turned to automation to eliminate repetitive manual jobs, speed up the process, and actually control the testing process. The initial investment in time and resources more than paid for itself in results.
So it is possible to make progress with DevOps while keeping the costs down. Creativity and attention to detail will go a long way in any organization.
Many companies are well in the throes of scoping out their robotic process automation (RPA) projects. Some have even designed their software robots and are ready to deploy. So, all the work is done, right?
By Bryant Bell, Director of Product Marketing, Kofax.
The reality is: It’s just begun. Even after your army of super-productive software robots is up and running, it still needs guidance, management and maintenance. And that requires a strong governance and version control programme.
Without governance, robots will break, operate over each other and basically run amok. Why? Change is inevitable. Legacy systems and websites evolve; applications get updated; passwords expire; spreadsheets are modified and security patches are applied. Whenever such changes occur, robots don’t know what to do.
All of this becomes increasingly important at scale. When a robot breaks down, if you don’t have governance, it’s almost impossible to track which one among hundreds or thousands is causing an issue or where one may have crashed into another and stopped a process in its tracks.
As EY points out in its report, “Getting Ready for Robots,” it’s not easy getting RPA right. “We have seen as many as 30 percent to 50 percent of initial RPA projects fail. This isn’t a reflection of the technology; there are many successful deployments,” the consulting company says. “But there are some common mistakes that will often prevent an organisation from delivering on the promise of RPA.”
Lack of governance is one of the most common.
From a technical point of view, governance is easy to accomplish when RPA tools include management and oversight capabilities like version control, or more sophisticated capabilities such as Robot Lifecycle Management.
But getting governance right also requires organisations to first define guidelines and rules. In fact, according to Forrester Research, the top reasons organisations struggle to deploy RPA are performance and scalability, followed by managing rules of robot behaviour and controlling and operating RPA robots in a mature fashion.
To maximise RPA benefits, chief operating officers (COOs) and chief information officers (CIOs) should collaborate to lay the governance groundwork for how RPA robots will interact, access and disseminate data and content.
But how can this be achieved when RPA robots are deployed in a decentralised fashion?
A best practice is to put a guiding framework in place. While operations will naturally take the lead in designing such a model, they must work closely with IT on strategic initiatives to avoid duplicating automation efforts.
An effective framework requires three elements:
1. An enterprise robotics council to define scope, spearhead the program and set targets for tracking execution efficiency and outcomes
2. A business unit governance council that will prioritise RPA projects across departments and business units
3. An RPA technical council – also known as the Centre of Excellence (CoE) – that designs standards, formulates working principles and guidelines and compiles best practices
To accomplish the first and third elements, the governance team needs certain tools embedded in the RPA platform, including robot analytics, performance tools, version controls and security. For example, a platform that integrates Robot Lifecycle Management helps teams manage enterprise RPA robot deployments from hundreds to thousands of RPA robots. With it, teams can more easily keep track of changes, compare files and explore changes made. It also makes it easy to store backup files so that if companies must revert to a previous working version, they can do so easily.
Meanwhile, the CoE plays a critical role: it acts as the “single source of truth” and keeps the organisation focused on its long-term goals. It represents a group of core resources and people that guide all things related to automation, including maintaining and overseeing standards across the business; training; management of vendors; establishing best practices; and much more.
Members of this team come from across the organisation. IT is a core member, but stakeholders from each business unit bring subject matter expertise. With this collection of intelligence, the team is better equipped not only to make decisions on the right RPA tools to deploy but also which processes are prime candidates for automation.
Organisations also have flexibility in terms of how such a programme can be structured. There are three main models, each differing in how responsibilities are shared across the enterprise.
Centralised operating model: A single team is responsible for running and controlling all aspects of the programme.
Decentralised operating model: The responsibilities for running the automation programme are replicated across separate business units within the organisation.
Hybrid operating model: Some aspects of the automation programme are run by a single, centralised team, while others are replicated across business units.
Of course, there’s no ‘right’ or ‘wrong’ answer. Organisations and financial teams should start with the model that makes the most sense given where they are today and adjust down the road as needed.
By leveraging a Centre of Excellence along with sophisticated tools like Robot Lifecycle Management, organisations can take a more business-centric approach to RPA that goes far beyond simple task automation.
A strong governance programme is a key RPA success indicator. When it’s supported by robot lifecycle management and digital workforce analytics, you can better control your robots and reduce the chance they will run amok.
The August issue of Digitalisation World includes a major focus on Software-Defined Networks (SDN). How has the technology developed to date? Are all SDN solutions the same? What kind of uptake is there amongst end users? And how important is SDN as part of an overall digital transformation strategy?
EMEA businesses turn to SD-WAN in spite of lack of education
Businesses in EMEA are significantly increasing their investment in cloud-friendly networking technologies, despite a lack of education and skill, according to the findings of a study of 410 IT and networking professionals released today by Barracuda Networks, the leader in cloud-enabled security and data protection solutions.
Cyber attack defence is EMEA organisations’ biggest priority
Perhaps unsurprisingly for a region where the financial and reputational cost of a data breach has never been higher, reducing the risk of cyber attacks is the main focus for organisations over the next 12 months, with over half (52%) choosing it as their top priority. This surpasses projects leveraging artificial intelligence (AI), machine learning (ML), automation and the Internet of Things (IoT), proving that businesses realise the importance of getting security right before embarking on more innovation-led development.
Security also came out on top as the biggest challenge when using cloud applications, with just under half (47%) choosing it as their number one issue. However, the biggest wide area network (WAN) challenge was performance to the cloud, with four in ten (42%) voting it as their top issue.
EMEA overwhelmingly turns to SD-WAN to reach the cloud securely and quickly
The findings suggest that EMEA organisations are taking advantage of SD-WAN to solve these latency and security issues across their cloud and WAN environments. One in four (26%) have SD-WAN deployed, with a further 64% either deploying or considering deploying it in the future, leaving just 7% with no SD-WAN plans at all.
The biggest motivation for this is improving application performance between locations (chosen by 17%), closely followed by reducing security risks at remote locations (chosen by 16%). The IT C-suite (CIO, CTO, CISO, etc.) is driving more than a quarter (28%) of EMEA deployments. This diverges from the rest of the world, where the IT networking team drives projects in 29% of cases. Decisions seem to be made higher up the chain in EMEA, as the CEO drives 8% of SD-WAN deployments, ahead of the US (5%) and APAC (7%).
An overwhelming 98% have benefited from SD-WAN, with increased network security (chosen by 46%) identified as the number one benefit. Much of the benefit is also financial: respondents estimated they could save an average of $1,312,311 (USD) on MPLS and networking costs in one year using SD-WAN. The vast majority of SD-WAN deployments are successful, with eight in ten (82%) agreeing their SD-WAN solution has lived up to their expectations.
EMEA has the least SD-WAN education
Despite its success, SD-WAN education in the EMEA region leaves a lot to be desired. Less than a third (32.7%) felt that they totally understood SD-WAN, falling far behind the US (57%) and APAC (41%). Whilst this may be more to do with hubris than reality, it’s seemingly leading to a lack of internal skill and understanding to deploy SD-WAN, which is highlighted by more than a third (34%) of EMEA respondents as the main issue following its deployment. For those who haven’t yet deployed the technology, a lack of internal skill is cited by three in ten (30%) as their biggest barrier, with a quarter (25%) not knowing enough about SD-WAN.
The future looks bright but myths persist in EMEA
Despite a perceived lack of education, the EMEA region is optimistic about SD-WAN. Four in ten (43%) think it will replace MPLS as the leading solution, whilst almost two thirds (63%) believe their WAN will become outdated if they do not embrace SD-WAN, meaning they will fall behind competitors.
Perhaps in part due to the lack of SD-WAN education, over half (53%) think SD-WAN is a buzzword and won’t revolutionise networking, higher than the US (42%) and APAC (51%). Even though most SD-WAN solutions out there need additional security solutions to make them secure, half (50%) believe the myth that their SD-WAN solution contains everything they need to keep their network secure.
One in four (25%) of EMEA respondents also think the security of SD-WAN is worse than the combination of a corporate firewall and virtual private network (VPN), much higher than in the US (7%) and APAC (9%). While SD-WAN does have different security needs than traditional WAN, it can be just as secure.
The answer seemingly lies in training and recruitment
While many respondents admitted to a lack of education and skills around SD-WAN, they also suggested that the answer lies in training. Almost two thirds (64%) believe there’s currently not enough SD-WAN training in their organisation. For those who are deploying SD-WAN or considering it, just under half (48%) will train existing staff and 46% will hire new staff with specialist skills. In terms of what they look for in an SD-WAN solution, nine in ten (91%) think technical support and consultancy is important.
What does this all mean?
Well, for starters, this research clearly shows that the new European data regulation has definitely helped organisations in EMEA wake up to the reality of cyber threats, with many taking the plunge into SD-WAN as a result. It’s comforting to see that for many organisations, cyber security has become not only the number one focus for IT teams, but has also risen to a CEO level issue.
However, we've clearly got a long way to go in terms of SD-WAN education, especially in EMEA, where knowledge is lowest. It's not surprising that against this backdrop of a lack of SD-WAN skills and education, some of the myths are prevalent. IT teams across the region seem to be crying out for more SD-WAN training. This is a wake-up call for the industry: we need to work together to better educate organisations around SD-WAN. Until we do, confusion will reign.
Today, demand for video conferencing is booming. According to a recent study from Frost & Sullivan, the global conferencing services market is expected to grow gradually and reach $11 billion by 2023, due to the accelerated usage of conferencing via cloud, web and video, mobility and rich analytics.
By Anne Marie Ginn, Head of Video Collaboration, EMEA at Logitech.
What’s more, the same report predicts that artificial intelligence (AI) will play an important role in driving this growth by creating video conferencing applications that deliver more natural, contextual, and relevant meeting experiences.
This data suggests that while demand for video collaboration is growing, traditional video conferencing is quickly becoming a thing of the past.
Gone are the days when video conferencing was only used in large boardrooms for formal presentations and meetings. Today’s businesses are shifting their focus to create a flexible working environment. With half of the UK workforce expected to work remotely by 2020, video enabled communication will become more important than ever for ensuring an organisation can continue to operate productively. It’s not surprising then that a growing number of businesses are investing in huddle rooms and multi-purpose breakout areas that are equipped with video devices.
However, these steps are just the beginning of enhancing productivity, and as office spaces continue to evolve, so too does collaboration technology. In this article I'll explore the next stage of innovation in video conferencing, driven by the adoption of AI.
How AI is transforming video conferencing
Advancements in AI can dramatically improve the user experience and drive efficiencies by helping automate time-consuming collaboration tasks. In fact, AI can drive innovation in three key areas: meeting room analytics, natural language processing, and computer vision.
Analysing meeting room use to drive productivity
AI-driven devices can make conference rooms smarter and meetings run more efficiently by analysing factors that you may never have even considered. For example, AI can automate tasks such as rescheduling calls and rebooking meeting rooms based on conversations over email or sending important notifications about which rooms are available and which are busy. AI could even suggest which resources or documents you may need to bring to a meeting ahead of time or enable screen sharing within and across rooms.
The data insights generated from AI can also be a valuable tool for making the most of office space and helping facilities management teams to operate efficiently. For example, AI can help businesses to understand how meeting rooms are being used – how many people on average use each of them, for how long, at what times of the day and if meetings typically overrun. As a result, occupancy levels could be better managed, making sure the right room is free for the right type of meeting.
The power of natural language processing
Natural language processing (NLP) is already being used to improve video conferencing applications in multiple ways. For example, it can facilitate the automatic transcription of meetings, sharing actions and notes, and even the translation of the conversations into different languages. When combined with AI capabilities, NLP can also enable chatbots or virtual assistants to start, join and leave meetings by using a voice command feature.
And these aren’t the only uses for natural language processing. It can also support better audio quality, as it can automatically suppress echo and minimise background noise. For example, if someone was rustling paper or receiving noisy notifications on their laptop during the meeting, the technology is intelligent enough to muffle this background noise. This, coupled with evolving techniques such as automatic levelling and beamforming, will make everyone in the meeting easy to hear and understand.
Computer vision: making the most of each video interaction
Ultimately, none of these useful advancements would be possible without the development of computer vision in video conferencing. Computer vision frames meeting participants and then automatically adjusts the zoom to deliver a better video experience for those on the far end. When the number of participants changes or people move to a different part of the room, the camera automatically tilts, pans and zooms to re-centre and frame participants. This could be particularly useful for more interactive meetings, for instance if someone stood up to draw a diagram on a flip chart.
More advanced functions of computer vision include gaze correction and controlling the background environment in the room to improve user experience. For instance, when combined with AI capabilities, computer vision can support colour and light correction by automatically detecting potential issues and amending them by emphasising faces even in dim or backlit rooms.
The bottom line: AI is poised to deliver smarter meetings
AI has the potential to transform video conferencing and enable a much better user experience and more productive video meetings. However, for all this to happen, the industry needs to ensure interoperability between video conferencing devices in a multi-vendor AI environment. All players in the marketplace will need to work together to make sure that new advancements in AI and computing vision are designed with the user’s privacy in mind. Only by improving the reliability of emerging video conferencing technologies and building trust in the ecosystem, will we be able to drive mainstream adoption of AI-based video collaboration. This will be key for ensuring all industry players can reap the productivity benefits of this technology.
Advancements in technologies such as machine learning and Artificial Intelligence (AI) are growing in the workplace – to the benefit of employees who are able to work more efficiently and intelligently than ever before. Dr Davide Zilli, Director of Client Services at Mind Foundry, explains how these tools are helping reduce time-consuming tasks, freeing up human expertise to be used where it is most effective.
Machine learning is designed to take over the manual and repetitive processes, enabling employees to focus on the more rewarding results of model building and analysing data. This is exactly where the right tools can help staff members focus on what they really want to do – getting their teeth into meaningful data.
First, choose the right machine learning model
Finding the right machine learning model can take a significant amount of time. Employees involved in data science currently dedicate the majority of their workload to cleaning and preparing data, potentially taking up 80 per cent of their time. The process typically requires trying out several different options, often depending on the knowledge and preferences of each individual data scientist. But with automated machine learning tools it is possible to leave that selection process to the machine. Machine learning tools not only do the job better, they do it in a fraction of the time and also make it possible to analyse far more dimensions of data than ever before.
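To make "leaving the selection process to the machine" concrete, here is a minimal, self-contained sketch of automated model selection. Two toy candidate models (a mean baseline and a simple linear fit, both hypothetical stand-ins rather than anything from Mind Foundry's platform) are scored by k-fold cross-validation on synthetic data, and the best scorer is chosen automatically:

```python
import random

def fit_mean(xs, ys):
    # baseline model: always predict the training mean
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # closed-form simple linear regression: y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def cv_score(fit, xs, ys, k=5):
    # mean squared error averaged over k cross-validation folds (lower is better)
    n = len(xs)
    fold = n // k
    err = 0.0
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        tr_x, tr_y = xs[:lo] + xs[hi:], ys[:lo] + ys[hi:]
        model = fit(tr_x, tr_y)
        err += sum((model(x) - y) ** 2 for x, y in zip(xs[lo:hi], ys[lo:hi])) / fold
    return err / k

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [3 * x + 1 + random.gauss(0, 0.5) for x in xs]  # noisy linear data

candidates = {"mean baseline": fit_mean, "linear": fit_linear}
scores = {name: cv_score(f, xs, ys) for name, f in candidates.items()}
best = min(scores, key=scores.get)  # the machine picks the winner
```

Real automated ML platforms search far larger model and hyperparameter spaces, but the selection loop (fit each candidate, score it out-of-sample, keep the best) follows the same shape.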
Unlocking patterns from an overwhelming amount of data
Although humans are usually very good at spotting patterns, this ability can only stretch so far when there are multiple variables in play. Machine learning has the capability to work with hundreds of dimensions. This is where it becomes possible to see relationships in data that human intelligence has no chance of understanding, let alone extracting meaningful insights from.
Take looking at a graph to explore your data as an example. In two dimensions everything makes sense. In three it gets complex, but how do we look at data that has 200 dimensions?
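One common answer is dimensionality reduction. The illustrative sketch below (not tied to any product mentioned here) uses principal component analysis to project 200-dimensional data down to the two axes that carry the most variance, so it can be plotted like any ordinary scatter graph:

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 samples in 200 dimensions, but the real signal lives in 2 latent factors
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 200))
data = latent @ mixing + 0.01 * rng.normal(size=(500, 200))

# PCA: eigendecompose the covariance of the centred data
centred = data - data.mean(axis=0)
cov = centred.T @ centred / (len(centred) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                   # the two largest principal components
projected = centred @ top2               # 200 dimensions -> 2, ready to plot

# how much of the total variance the 2-D picture retains
explained = eigvals[-2:].sum() / eigvals.sum()
```

On data like this, where a couple of latent factors drive everything, two components retain almost all the variance; on genuinely 200-dimensional structure the same calculation quantifies exactly how much a 2-D plot is hiding.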
Let’s look at a project where we used machine learning to detect the spread of malaria. In a collaboration known as HumBug between Royal Botanic Gardens Kew and the University of Oxford, we developed a real-time detection system based on a phone app that aims to detect malaria-carrying species of mosquito by identifying the sound they make. This data is combined with information on associated environmental factors drawn from high-resolution remote imaging – such as the composition of local vegetation, the distance to water and beyond.
Getting a global picture fast
This results in a huge volume of data, but by harnessing machine learning, all this data can be analysed and used by researchers to produce detailed, real-time maps of the spread of malaria. Charity workers and pharma companies alike can use this data to start to co-ordinate and target key malaria control programmes in a way they would never have been able to do before.
This is one of the key reasons we find that so many people are intrigued to learn more about how the machine interprets data and draws patterns from it. We make the processes as clear as we can, exposing all the parameters, to the benefit of users who can then see how decisions are made. As a spin-off from the University of Oxford, we’re fortunate to have access to a plethora of cutting-edge research, which we always include in our training programmes.
AI and machine learning: An employee’s best friend
AI and machine learning are quite simply making the working lives of people more satisfying, rather than taking jobs away from people. In a recent survey by the Chartered Institute of Personnel and Development (CIPD) and PA Consulting, 43% of workers at companies using AI felt they were learning new things, while a third said they were undertaking more interesting tasks. Similarly, a study this spring by the Japan Science and Technology Agency (JST) revealed a clear trend: as new information technologies such as AI are increasingly adopted, the greater the rise in employee job satisfaction.
This really should come as no surprise. When AI is acting as a hugely helpful hand that allows you to focus on more insightful work, why wouldn’t you feel satisfied?
Thinking out of the box made easy
These advanced, highly accessible tools are making the role of the data scientist far more sophisticated than before. Once people understand the potential of machine learning, they come up with novel applications – and these don’t have to be work related!
One Mind Foundry client has collected over a decade’s worth of data on the performance of his children’s football teams, but to date has only analysed this manually. Now he is exploring using the Mind Foundry platform to gain predictive insights to help improve their performance. Although analysing children’s football results won’t necessarily boost one’s employment prospects, the skills gained from these new tools certainly can.
Consider key questions
Even the smallest of organisations can benefit from machine learning and new cloud-based applications make it easy for them to take advantage of these new technologies. The key is to consider whether staff are spending valuable time on manual, repetitive tasks, rather than concentrating on what they do best. If so, machine learning can help ease this burden, but it will also do the job faster, with more precision and enable employees to use the data to make more informed decisions and unlock new ideas. Decision-makers must ultimately weigh up how much more employees could achieve if they are able to focus on more strategic thinking that drives their business forward.
SD-WAN and how it is driving digital transformation
By Mr Inchen Lin, AVP of Zyxel’s SD-WAN Business Center.
The industry is currently undergoing a paradigm shift in the WAN landscape, as businesses seek to embrace digital transformation and make their processes more seamless. Cloud and Software as a Service (SaaS) applications are rising with this, which, in turn, is increasing mobility and driving growth in expanding branches. Coupled with this is consumer demand for access to data anytime, anywhere, and as a result networks must evolve with technology to meet end-users' needs in real time.
In light of this, businesses are seeking ways to revolutionize the WAN so that they can begin leveraging the public cloud whilst running their daily routines on traditional premises networks. This is paving the way for new and emerging technologies, such as Software-Defined Wide Area Networking (SD-WAN), which can be complementary to legacy network services such as Multiprotocol Label Switching (MPLS). By combining existing MPLS with other types of connections such as LTE, SD-WAN can optimize overall traffic, meaning customers no longer have to buy more expensive MPLS bandwidth yet still enjoy better performance and extra management benefits.
While MPLS may be renowned for delivering excellent quality of service and avoiding packet loss, it has notable downsides in the form of high bandwidth costs and no built-in data protection, drawbacks that SD-WAN avoids.
SD-WAN is future-proof, a cost-efficient option for small-scale companies, and offers the agility that service providers have come to demand. It enables different forms of adoption depending on individual needs and can use a combination of different access types to deliver greater uptime. Whilst MPLS will always be in demand for organizations with certain connectivity requirements, SD-WAN offers improved visibility, scalability and control.
Small-to-Medium Businesses (SMBs) crave faster connectivity, reliability and WAN performance improvements, as business users have growing demands for cloud-based platforms to run daily business operations. However, how and when an organization adopts new technologies depends on its existing infrastructure, and some businesses can be hesitant to make this change. For a long time, SMBs and small service providers have been held back from undergoing their digital transformation, having previously been priced out of the SD-WAN market. Thankfully, due to new offerings in the market, such as Zyxel's Nebula SD-WAN, viable and cost-effective solutions are now available that enable businesses to reap the benefits of better bandwidth, simpler management and the possibility of expanding their current service offering at an affordable cost.
Compared to a non-WAN-optimized solution, the Nebula SD-WAN delivers a nearly twenty-fold increase in speed, and its Dynamic Path Selection feature dynamically adjusts each packet's path, aggregating all available bandwidth to maximize transmission speeds. This helps to effectively reduce bandwidth costs without performance degradation. By significantly enhancing the speed at which businesses can transfer data, it also reduces IT effort and the need for multiple applications, and business users will see significant cost reductions. According to a Zyxel survey, businesses adopting Nebula SD-WAN to replace existing MPLS networks can see a total cost reduction of 82 percent over two branches and 88 percent for over 20 branches.
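Zyxel doesn't publish the internals of Dynamic Path Selection, but the general idea of per-packet path adjustment can be sketched in a few lines. In this toy model (all names, weights and numbers are hypothetical, and there is no admission control, so overload simply raises a link's score further), each WAN link advertises its measured latency and capacity, and every new flow is steered to the currently best-scoring link, so traffic spreads across all links and their bandwidth aggregates:

```python
# Toy per-packet/per-flow path selector across multiple WAN links.
class Link:
    def __init__(self, name, latency_ms, capacity_mbps):
        self.name = name
        self.latency_ms = latency_ms
        self.capacity_mbps = capacity_mbps
        self.load_mbps = 0.0

    def score(self):
        # lower is better: base latency penalised by current utilisation
        utilisation = self.load_mbps / self.capacity_mbps
        return self.latency_ms * (1 + 4 * utilisation)

def send(links, flow_mbps):
    # steer each new flow onto the currently best-scoring link
    best = min(links, key=Link.score)
    best.load_mbps += flow_mbps
    return best.name

links = [Link("MPLS", 10, 20), Link("Broadband", 25, 100), Link("LTE", 40, 50)]
assignments = [send(links, 5) for _ in range(20)]  # 100 Mbps of flows in total
```

Here the low-latency MPLS link fills up first, after which flows spill over onto broadband and LTE, which is the sense in which such schemes let customers avoid buying more expensive MPLS bandwidth.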
A solution such as this can provide uncompromised data protection against cyber threats with Application Patrol, Content Filtering and Geo Enforcer and helps keep a business’ data safe and secure. Zero Touch Provisioning also improves installation times and enables deployments to be more agile and cost-effective.
Most importantly, SD-WAN architecture works with organizations’ existing infrastructure and enables different forms of adoption based on individual needs, meaning that organizations can reap the benefits of greater digitization, cloud and mobility across their networks – enabling them to expand their services, cost-effectively and efficiently.
It’s one year since the launch of the Local Digital Declaration, a rallying call for local government organisations to build better public services by identifying and pursuing digitisation projects. There are encouraging signs that momentum is building, with 145 local councils signing the declaration and specifying digital projects and the public benefits they will deliver.
By Nick Pike, VP UK&I, OutSystems.
Appetite for delivering digital public services is growing in tandem with increasing interest in using IoT-collected municipal data to create smart communities. This might include traffic, parking and public transport information, pollution levels and waste management data; the applications are endless. By collecting and analysing this data, publishing and integrating it with digital public service applications, local government organisations can improve citizens’ quality of life and make their area a better place to live and work.
This momentum around data-driven digital services and smart city data means it’s an exciting time to be working in the local government space, but that’s not to say there aren’t challenges, too.
The conundrums facing Chief Digital Officers
If the Local Digital Declaration represents the will to push out the local government digital footprint, it’s their Chief Digital Officers (CDOs) that must find the way. A recent roundtable discussion undertaken by Local Government Digital highlighted the issues that CDOs face in driving the digital agenda.
One of the key issues CDOs face is the tension between “keeping the lights on”, by continuing to support legacy technology, and acting as a visionary to design and trial new digital services. Limited budgets have to be shared between maintaining existing systems and developing new approaches. This can block investment in new technology platforms. Linked to this, CDOs also face a cultural hurdle. There’s a prevailing assumption that they head an operational division primarily focused on fixing laptops and network issues, rather than rolling out strategically important, transformative services.
Political tensions are at work, too. CDOs report that the performance of existing organisational technology can act as a brake on discussions around new technology. Until existing technology works smoothly, stakeholders can be cynical - unwilling to explore future developments. This creates a chicken and egg situation where CDOs need to provide robust, working examples of successful solutions quickly and at low cost, before they can unlock the door to further strategic discussion and get a mandate for their digital agenda.
Cost uncertainty and future planning
On top of political and cultural challenges are the issues of cost control and vendor lock-in. This isn’t a new phenomenon, of course. Over decades of IT investment many local authorities have found themselves tied to suppliers and technologies that no longer deliver the right IT environment. Understandably, many are determined to avoid falling into that trap again.
Another problem for Local Authorities is uncertainty over the long-term costs of technology solutions that enable digital transformation. While day one purchase costs may be obvious, there is often little clarity over the costs of a long-term commitment. Pricing may be linked to the number of application objects developed, which can be difficult to accurately define. Similarly, while a Local Authority may have a fixed roadmap of the apps it needs to develop over the next six months, anticipating requirements over the next two to three years is far more complex.
The issue of who owns the intellectual property of the developed solutions is a further concern. If Local Government organisations are going to invest in developing applications to deliver digital public services, they need to know that they own the IP and won’t be held to ransom later down the line. They may wish to share their solutions with other authorities so that a greater social benefit can be achieved, so they need the freedom to do that.
While CDOs try to navigate this complex landscape, citizen expectations are rising all the time. Consumer-facing apps are setting the standard for intelligent, data-driven services and the public sector is under pressure to keep pace with services for both citizens and employees.
Navigating out of this complex maze
Here at OutSystems, we regularly see all these competing factors in our work with Local Government organisations. To have meaningful discussions with stakeholders about moving their digital agenda from ambition to reality, we have to guide CDOs through these factors and show how a low-code approach can help address them.
One organisation that has successfully negotiated these challenges is Worcester County Council. The team took a bold approach when it published its ambition to move towards ‘Digital by Default’, delivering 100% of public services digitally and aiming to become “a world-class digital council”. They knew that, to keep stakeholders on board, quick wins that demonstrated the value and cost-savings digital services could deliver were essential. To achieve results fast, the council used OutSystems low-code development platform to build its first app.
The first service to get the digital treatment was the council’s Copy Certificates service. The previous online system only worked properly half the time. It was a classic example of the “keeping the lights on” issue, needing extra effort from both staff and customers but not reliably delivering the service. The new app took Worcester’s existing team of six developers eight weeks to deploy. Customers apply and pay for certificate copies online, and can track the progress of their requests. Now, 70 percent of applications for certificate copies are made online.
The swift deployment and take-up by users delivered that quick win needed to prove the concept and get that stakeholder buy-in that CDOs often find difficult to achieve. The six-developer team has now delivered 53 apps covering everything from adult learning course registration up to an ambitious multi-agency data sharing project that aims to deliver a “single view of the child” to help provide the right professional support in safeguarding cases. Projected savings from the digitisation project are in the region of £2.8m over three years, providing ROI of 442%.
To push faster adoption of digital public services nationwide, Worcester County Council has published all its apps under an open-source licence, something it can do because it owns the rights to everything it develops using OutSystems. This means other councils can get digital services running very quickly, with minimal investment.
Clearing up cost control challenges
Cost clarity remains a challenge for the sector and I believe the responsibility lies with technology providers to make the platform economy work for local government organisations. We need to offer as much transparency and certainty as possible when we’re helping scope digital roadmaps. That might mean offering tiered pricing and budget ceilings so that local authorities know in advance the maximum financial commitment they are making. Fundamentally, the keynotes of successful digital transformation are flexibility and transparency, so our pricing should reflect that.
Pushing out the digital footprint of local councils has the potential to transform both the citizen experience and public finances if Chief Digital Officers can navigate the cultural, political and budgetary challenges they face. The key is the ability to deliver quick wins that get stakeholder buy-in and prove ROI, giving CDOs a mandate to fulfil their ambition for providing digital public services that deliver.
Virtual Desktop Infrastructure (VDI), a technology that provides fully-personalised, individual desktop virtual machines with user profile control and golden imaging, is seeing new levels of growth, with the global VDI market expected to be worth almost $5 billion by 2020.
By Alan Conboy, Office of the CTO, at Scale Computing.
This technology has peaked and troughed in popularity since it came into existence in 2006. On the face of it, the concept of VDI was simple and excellent: if we virtualise desktops we can reduce hardware spend, cut the three-year refresh cycle, simplify desktop management, and ultimately save businesses time and money. It all seemed straightforward and, at a glance, the technology looked elegant, especially for the desktop user who wasn't exposed to the back-end infrastructure.
On the flip side, for those who were exposed to it, the backend infrastructure was bulky, complicated, and costly. VDI software went hand-in-hand with hefty licensing fees and lock-in to vendor hardware, both of which pushed up adoption prices. Because of this, VDI adoption stayed limited to large enterprises for a long time.
But, in the past few years, edge computing and hyperconvergence have disrupted the VDI market. These technologies have made deployment opportunities available to more businesses. So, it seems that edge computing and VDI have emerged as a perfect pair, but how?
Streamlined IT teams
Rolling out VDI is simple and practical when utilising edge computing hyperconvergence solutions, even for lean IT teams that look after hundreds of users. Specialist skills are no longer a requirement; a couple of hours of training is typically all that’s needed.
Once the virtual desktops have been rolled out, software and anti-virus updates for each user can be remotely managed and maintained. The centralisation and automation of various time-consuming day-to-day tasks help IT teams deal with emergencies better, should they arise.
Some edge computing systems also provide in-built, automated disaster recovery capabilities, including replication, snapshot scheduling, and file-level recovery that assist in the retrieval of lost files from individual virtual desktops. They can also protect the entire network with self-healing machine intelligence.
This centralisation and resilience means that IT teams can create a consistent disaster recovery plan that runs in the background, without individual users having to take any action. Ultimately, this ensures there is no longer a need to rely on employees to update their own anti-virus software or schedule backups of their own data. For even more redundancy, full network backups and snapshots of individual desktop profiles can also be sent over the wider network to either a remote datacentre or a cloud repository.
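The snapshot scheduling and retention described above can be pictured with a short sketch. The function below implements a simple grandfather-father-son style retention policy; this is a generic illustration, not Scale Computing’s actual API, and the function name and policy parameters are hypothetical.

```python
from datetime import datetime, timedelta

def snapshots_to_keep(snapshots, daily=7, weekly=4):
    """Apply a simple retention policy to a list of snapshot datetimes:
    keep one snapshot per day for the most recent `daily` days, plus
    one additional snapshot per ISO week for the most recent `weekly`
    distinct weeks. Everything else is eligible for deletion."""
    keep = set()
    seen_days, seen_weeks = set(), set()
    for snap in sorted(snapshots, reverse=True):  # newest first
        day = snap.date()
        week = snap.isocalendar()[:2]  # (ISO year, ISO week number)
        if len(seen_days) < daily and day not in seen_days:
            seen_days.add(day)
            keep.add(snap)
        elif len(seen_weeks) < weekly and week not in seen_weeks:
            seen_weeks.add(week)
            keep.add(snap)
    return keep

# A month of 6-hourly snapshots collapses to 7 dailies + 4 weeklies
month = [datetime(2024, 1, 1) + timedelta(hours=h) for h in range(0, 720, 6)]
print(len(snapshots_to_keep(month)))  # -> 11
```

A policy like this is what lets a consistent recovery plan run in the background: the schedule decides what to retain, and no individual user has to remember to do anything.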
Working in a VDI environment also allows a user to simply move to a different machine and log back in if a terminal or other network access point fails. In most cases their profile and data remain undamaged, as both reside on the edge computing unit, and a replacement machine can be quickly configured without any need for time-consuming data recovery. The result is an IT infrastructure with much higher availability and minimised risk of downtime, while making the IT team look focused, responsive, and streamlined.
Another advantage of running VDI in an edge environment is that data can be stored close to the point of creation and access. This reduces dependence on remote centralised servers or distributed local servers and solves the problem of slow connectivity, latency, and bottlenecks that have arisen on legacy deployments running over a WAN or VPN.
Safer and more agile
Running on a hyperconverged edge computing solution, a VDI deployment can provide improved workforce agility at an affordable cost. Employees can log on securely to any machine on the network and gain access to their files, emails and applications. They’re not limited to PC terminals, but can load their personal desktop or applications on a mobile device or tablet.
Running a VDI deployment in this way also offers a cost-effective and secure method to extend network access beyond the office walls and provide remote access to employees wherever they are. Plus, the IT team can receive automated alerts that flag potentially suspicious activity or log users out of an account that has been inactive for a pre-agreed time.
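The idle-timeout policy mentioned above amounts to tracking each user’s most recent activity and flagging sessions that exceed a pre-agreed limit. A minimal sketch follows; the class and method names are hypothetical, not a particular product’s interface.

```python
class SessionMonitor:
    """Flag sessions with no activity for `idle_limit` seconds so
    they can be logged out automatically and an alert raised."""

    def __init__(self, idle_limit: float):
        self.idle_limit = idle_limit
        self.last_seen = {}  # user -> timestamp of most recent activity

    def touch(self, user: str, now: float) -> None:
        """Record activity for a user (keyboard input, file access, etc.)."""
        self.last_seen[user] = now

    def expired(self, now: float) -> list:
        """Users whose sessions have been idle longer than the limit."""
        return [u for u, t in self.last_seen.items()
                if now - t > self.idle_limit]

monitor = SessionMonitor(idle_limit=900)  # 15-minute policy
monitor.touch("alice", 0)
monitor.touch("bob", 600)
print(monitor.expired(now=1000))  # -> ['alice']
```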
Generic sign-ins are often used across organisations as a speedy way for employees to access multiple machines, but they do pose potentially huge security problems. Whether it is doctors accessing patient records or retailers handling sensitive financial data, in both cases, companies are legally responsible for protecting end-user data. Failure to do so risks reputational damage and even prosecution.
VDI, in an edge environment, is able to deal with these potential issues by making it quick and easy for employees to log in across multiple machines with their own Active Directory credentials. This technology also offers multi-factor authentication to guard against unauthorised access, helping businesses stay compliant with data protection regulations and standards such as HIPAA, GDPR, and PCI DSS.
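Multi-factor authentication of this kind is commonly built on time-based one-time passwords (TOTP), standardised in RFC 6238. The sketch below implements the HMAC-SHA1 variant using only the Python standard library; it illustrates the mechanism, not any specific vendor’s implementation.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).
    A shared secret plus the current 30-second time window yields a
    short code, so a stolen password alone is not enough to log in."""
    counter = struct.pack(">Q", unix_time // step)  # time window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference secret; at t=59s the 6-digit code is 287082
print(totp(b"12345678901234567890", 59))
```

Because the server and the user’s authenticator app compute the same code independently, no secret ever crosses the network at login time.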
BYOD becomes cost effective to manage too, and no longer an issue that raises operational and security concerns. An employee’s personal mobile device, laptop or tablet can be integrated onto an officially-sanctioned VDI environment. This takes it from a potential security risk, to a secure, authorised, and monitored network device where information is protected from accidental disclosure and loss.
The future of VDI
While VDI always had the potential to deliver improved workforce agility and centralised network management, IT analysts and industry experts have long observed that adoption was relatively slow, hindered by high costs, complicated software licensing and weak network connections. Now, with the advent of hyperconverged edge computing, and the accompanying reductions in cost and complexity, businesses of all sizes are able to benefit from this technology.
It is now easy for IT teams and end-users to adopt VDI, thanks to additional functionality, improvements to network response times, and massive simplification of system management. The ability to create offsite backups at remote locations or in the cloud has delivered an extra layer of IT resilience and protection, while automation and centralisation capabilities now take care of the most important, but time-consuming, heavy-lifting. Because of edge computing and HCI, it is no surprise that VDI has spiked in popularity, and this growth is predicted to continue in the future.
The August issue of Digitalisation World includes a major focus on Software-Defined Networks (SDN). How has the technology developed to date? Are all SDN solutions the same? What kind of uptake is there amongst end users? And how important is SDN as part of an overall digital transformation strategy? We asked a range of industry experts for their views on the technology.
Shamik Mishra, the AVP of Technology and Innovation, Altran, offers the following answers:
1. How has the technology developed to date?
SDN technology evolved out of the need to move from specialised hardware for networking gear to a more decentralised management model in which the control plane and the forwarding plane can be decoupled. It introduced programmability, policy-based management and automated provisioning. More recently, SDN has evolved further, from a layered-cake model, in which each networking domain (optical, label-switched routing (MPLS) or L3 routing) had its own control plane, towards a common “network operating system” (NOS) across different domains.
This is significant because modern SDN networks can form part of a larger pool of networking resources, with applications developed to utilize a number of different SDN domains. This has given birth to an ecosystem of enterprise network applications, more flexibility in automation through intent-based management and the ability to provide visibility based on the application’s needs. It can also be linked to more evolved cloud orchestration platforms. The future of SDN therefore lies in this model, where users have a choice, like a marketplace, to select applications and run them over a common platform.
2. Are all SDN solutions the same? What kind of uptake is there amongst end users?
SDN has not experienced hockey-stick growth figures. Adoption has been steady, and it took time for the technology to stabilise. It also struggled due to a lack of applications, heterogeneity in equipment and the slow pace of virtualization.
However, some SDN use cases like SD-WAN have matured and this has led to more proliferation of the technology. SD-WAN with general purpose universal CPEs and virtualized network functions have enabled key use cases for SDN where enterprises can provision a variety of network connections (over MPLS, 4G, etc.) by using self-service portals. This also has opened up new revenue streams for service providers and network equipment providers. On the other end of the spectrum, data center SDN has been steady with some major adoption seen with edge computing catering to residents, individual subscribers, SMBs etc. particularly for FTTx use cases.
3. Are there any SDN problems/challenges readers need to be aware of? How important is SDN as part of an overall digital transformation strategy?
SDN is critical for enterprises and their CIOs as they plan their digital transformation strategies for scale and bandwidth. Simplifying provisioning, automation and scalability are still challenges. Building new use cases that leverage SDN’s ability to adapt and optimise networks still requires momentum. Most app developers are not network experts, and it will be interesting to observe how the SDN community addresses these needs in the short and long term.
Boris Rogier, Director Enterprise Business Development, Accedian, comments:
Software-defined networks are designed to provide cloud infrastructures with greater flexibility, linear scalability, security and automation, and can be applied to many different types of networks, such as Software-Defined Mobile Networks, Software-Defined Wide Area Networks (SD-WAN) and Software-Defined Local Area Networks (SD-LAN). SDNs can come in different forms: for example, switch-based SDNs, whereby a software policy is applied on top of network switches, or software-based SDNs, whereby a virtual network layer runs on top of physical layers.
SDNs are a central piece of infrastructure that allows the move towards cloudification and digital transformation. They are a critical element of any cloud infrastructure for cloud providers, and are also an important part of the infrastructure for any modern application relying on containers.
While SDNs will play a huge role in the move to the cloud, they will also bring huge challenges to IT and cloud providers. For example, understanding SDNs will require teams to learn new skills, particularly programming skills which networking teams had never otherwise needed. There are also some key technical differences to running SDNs. For example, there can be loss in network visibility as SDNs encapsulate traffic from each overlay network to keep it separate from the rest on the underlay infrastructure. There can also be a heightened risk of congestion due to the multi-layer nature of SDNs, meaning that congestion can occur at any network level. As such, traditional monitoring and visibility tools are not fit for SDNs, they instead require another approach of instrumentation and visibility which is software-defined, lightweight, distributed and can be automatically deployed.
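The visibility cost of encapsulation is easy to quantify. In a typical VXLAN overlay, each inner frame is wrapped in outer IPv4, UDP and VXLAN headers before the outer Ethernet header is added, which shrinks the largest inner packet that fits in the underlay MTU. A small sketch, assuming an IPv4 underlay and untagged inner Ethernet frames:

```python
# Header sizes in bytes for a typical VXLAN overlay (RFC 7348)
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8
INNER_ETHERNET = 14  # the encapsulated frame carries its own Ethernet header

def max_inner_ip_mtu(underlay_mtu: int) -> int:
    """Largest inner IP packet that fits in one underlay IP packet
    without fragmentation (underlay_mtu excludes the underlay's own
    Ethernet header, as MTU conventionally does)."""
    return underlay_mtu - OUTER_IPV4 - OUTER_UDP - VXLAN_HEADER - INNER_ETHERNET

print(max_inner_ip_mtu(1500))  # -> 1450
print(max_inner_ip_mtu(9000))  # -> 8950 with a jumbo-frame underlay
```

This per-packet tax, 50 bytes on the wire once the outer Ethernet header is counted, is one reason overlay networks are often run on jumbo-frame underlays, and why monitoring tools must understand the encapsulation to see the traffic inside it.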
"SDN can be an architectural shift. If new parts of the network are going to an SDN approach, or non-critical parts, then the disruption can be minimized. Often enterprises assume significant risk by doing everything internally, without the right expertise or resources. This may work out well when SDN is being deployed centrally as in a data center environment, where there is better concentration of trained resources, but will be a challenge in the case of wide-area networks, where expertise and resources may not be available in a distributed manner. This often leads to risks, unpredictable deployment times and cost escalations. A better approach in this case would be to consider a managed service offering, which is especially helpful in distributed wide area networks."
Software Defined Networking (SDN), SD-WAN, and Network Function Virtualization (NFV) are no longer a promise for the future; they are available today in solutions that form the foundation of network modernization efforts.
The software-centric approach behind SDN, NFV, and SD-WAN is a critical enabler of digital transformation initiatives. These solutions can dynamically adjust networks and infrastructure, with different locations utilizing varying bandwidth and VNFs. Reducing reliance on proprietary appliances across the enterprise sharply reduces the need for infrastructure management and makes it easier to streamline operations.
SDN-based security solutions have an important role to play in organizations that have or are planning to adopt an SDN approach to their system deployments within the enterprise and in the cloud. The new generation of applications will take advantage of better-informed SDN agents to improve policy enforcement and traffic anomaly detection and mitigation. These applications may be able to block malicious intruders before they enter the critical regions of the network. The biggest benefit of SDN-enabled security is that it presents an opportunity for intelligent response on a granular basis by selectively blocking malicious traffic while still allowing normal traffic flows.
Additionally, SDN security applications are capable of acting on anomalies by diverting specific network flows to special enforcement points or security services, such as firewalls and intrusion detection/prevention systems. Once implemented, SDN has the potential for achieving greater network security visibility and accelerating the pace of implementing new security services.
As operators grapple with growing bandwidth demands and changing broadband service requirements, SDN is increasingly being used in networks to disaggregate various hardware. The industry has become well-versed in the benefits of this, with the technology boosting network agility, decreasing operating expenses and speeding up time-to-market of new revenue-generating services. These advantages have led to significant take-up of SDN within the broadband sphere and numerous deployments announced.
Despite this early uptake, there are still factors to address to ensure SDN realizes its full potential as a critical part of telco digital transformation. Many of today’s SDN solutions use proprietary technologies that are not compatible with wide swaths of an operator’s network. This poses an economic challenge for operators faced with the prospect of ripping out existing infrastructure. For SDN to be truly successful in the telecoms world, and to ensure a seamless migration path for operators to realize its full potential, interoperability and implementation standards must be developed.
As an example of how important these standards will be, consider the Broadband Network Gateway (BNG). This portion of the access network has seen accelerating increases in required bandwidth due to the exponential growth in broadband demand and the acceleration in video consumption across devices, as well as the incorporation of additional bandwidth-hungry functionalities. BNGs have also had to evolve to support new functionalities, such as management of multiple types of access, transport encapsulations and customers.
To address this, operators are being forced to deploy and manage multiple BNGs across numerous locations closer to the network edge to address load spreading. These factors have created challenges in control plane and user plane scaling, as well as geographical-related issues such as fragmented IP pool management, under-utilized control plane and complex operation and management for software upgrades and service provisioning.
By utilizing SDN, the BNG control plane can be separated from the data plane to centralize management and control of the network. This will bring benefits such as centralized locations for configuration and IP address management, leading to faster delivery of new services. The work will also ensure the control plane and user plane can be easily scaled according to customer demand.
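The centralized IP address management benefit can be pictured with a toy allocator: one control-plane pool hands out addresses to subscribers regardless of which user-plane instance terminates them, avoiding the fragmented per-BNG pools described above. A minimal sketch, with illustrative class and method names:

```python
import ipaddress

class CentralIPPool:
    """One control-plane address pool serving many user-plane
    instances, instead of a fragmented pool per physical BNG."""

    def __init__(self, cidr: str):
        self.free = list(ipaddress.ip_network(cidr).hosts())
        self.assigned = {}  # subscriber id -> address

    def allocate(self, subscriber: str):
        """Hand out an address; re-issuing to a known subscriber is a no-op."""
        if subscriber not in self.assigned:
            self.assigned[subscriber] = self.free.pop(0)
        return self.assigned[subscriber]

    def release(self, subscriber: str):
        """Return a subscriber's address to the shared pool."""
        self.free.append(self.assigned.pop(subscriber))

pool = CentralIPPool("10.0.0.0/29")
print(pool.allocate("subscriber-1"))  # -> 10.0.0.1
print(pool.allocate("subscriber-2"))  # -> 10.0.0.2
```

Because the pool lives in one place, addresses freed by subscribers on one user-plane instance are immediately reusable by subscribers on another.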
Unfortunately, because operators cannot afford to rip out entire architectures or deploy multiple SDN silos, the integration and co-existence of physical and virtualized elements will become the norm for many, with networks mixing physical elements and SDN. Broadband Forum’s BNG Disaggregation project seeks to provide another path, offering an interoperable and standardised route to BNG disaggregation that is compatible with the rest of an operator’s network.
Alongside open standards, rigorous testing of SDN will also be key to making it a tangible technology for operators to deploy on a mass scale. The sort of testing campaigns that will be needed can be seen at Broadband Forum’s Open Broadband Laboratories which validate multi-vendor interoperability of vendors’ and Communications Services Providers’ virtualized solutions and broadband access infrastructure aligning with Broadband Forum standards.
By definition, the food and beverage industry is fast moving and always changing; milk goes off, bread gets eaten, and we want to eat more salads in the summer.
By Filip Schiettecat, Senior Director Industry Management, Siemens Industry Software.
But the combination of changing consumer demand, increasing regulation, complex supply chains for global sourcing and distribution, and competition from agile new competitors means food and beverage companies have to move faster than ever. And they have to do all that while still keeping quality high and appealing to customers who want products that suit them perfectly – whether that means gluten free, low sugar or even custom flavours. Market predictions say there will be more changes in consumer packaged goods in the next five years than there have been in the last fifty. The only way to keep up is digitalisation, which is your best opportunity to reduce time to market, improve efficiency and get ready for the future of manufacturing.
It’s very common for food and beverage companies to be fragmented; that’s not just different departments like product development and marketing, but different product lines will come from different teams who just aren’t connected. If you make both pasta and cheese, they might end up in the same shopping basket and even on the same plate, but if there’s poor communications between the teams behind them you miss out on opportunities to save time and money by sharing information about suppliers and ingredients, process technologies and the shopper who’s buying both of them.
Even within the same product line, entering the same information about ingredients over and over again at different stages of the process means the chance to make a mistake every time that information gets recorded. Using a digital thread means information only needs to be entered once and can then be re-used at every stage of the process: from creating the first model of the product that allows you to calculate its nutritional profile, through finalising the product formula and recipe in the lab, to monitoring the whole manufacturing chain and tracking those ingredients from multiple suppliers all the way to your final customer. This kind of traceability is key for keeping products consistent and delivering high quality, as well as being able to comply with regulations and keep recalls as limited and transparent as possible. Managing all of this digitally also saves you money and time.
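The “enter once, re-use everywhere” idea is concrete even at the first step: calculating a nutritional profile from a recipe. The sketch below rolls per-ingredient nutrition up into a per-100g product profile; the function name is illustrative and any example figures are invented.

```python
def nutrition_per_100g(recipe, ingredients):
    """Roll a recipe's per-ingredient nutrition up into a per-100g
    profile for the finished product. `recipe` maps ingredient name
    to grams used; `ingredients` maps name to nutrients per 100 g."""
    total_mass = sum(recipe.values())
    profile = {}
    for name, grams in recipe.items():
        for nutrient, per100 in ingredients[name].items():
            # scale each ingredient's contribution by the amount used
            profile[nutrient] = profile.get(nutrient, 0) + per100 * grams / 100
    # normalise the totals back to a per-100g basis for the label
    return {n: round(v * 100 / total_mass, 2) for n, v in profile.items()}

recipe = {"flour": 500, "sugar": 300, "butter": 200}  # invented example
nutrients = {"flour": {"kcal": 364}, "sugar": {"kcal": 387}, "butter": {"kcal": 717}}
print(nutrition_per_100g(recipe, nutrients))  # -> {'kcal': 441.5}
```

Keyed to a single ingredient master record, the same data then feeds the lab formula, the label and the traceability chain without being retyped.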
Driven by fashion, social media, broadening interest in different cuisines and increasing concerns about health and allergies, consumer habits and tastes are changing faster than ever. Shoppers want new and interesting products; they want high quality and low prices on everyday items. They want detailed information about what they’re buying, from where and how it was grown to nutritional and allergen information. They want to know if a fruit snack is made with whole apples, apple puree or freeze-dried apple powder. One day a consumer might be looking for products with alternatives to sugar; next week they might be concerned about artificial sweeteners.
They want more choice of products, in exactly the right pack size for them. They even want custom products, whether that’s a new flavour of an old favourite, a low-calorie or allergen free option or personalised packaging. Mixing up different soft drink flavours in a phone app, 3D printed chocolate souvenirs from your holiday or a pack of cookies with your name and design printed on them might be novelties today, but they’re a sign of the trend to more and more differentiation.
To meet those market demands, you need to develop multiple product variations, which may require different production lines – often more products and processes than you can create and test with traditional methods. Again, digitalisation is the way to cope. Creating a digital twin for new products and the processes you’ll use to manufacture them lets you design and simulate product performance, so every new variant meets compliance regulations and market needs before you even prototype it at lab scale. You can then scale up quickly, manage testing in production to make sure what you make matches what was designed, and track which ingredients from which source make it into which product shipped to which customers. A digital twin can reduce how long it takes to get a new product onto the market by up to half.
At the same time, the price of raw materials is rising, as are the numbers of regional and global regulations. The EU, for example, is currently proposing rules to increase consumer transparency for ingredients like enzymes and flavourings, GMOs, food and feed additives, smoke flavourings and even food contact materials, as well as cracking down on companies who use the same advertising for products made with different ingredients in different countries.
Food and beverage companies need to track and label ingredients for an ever-increasing variety of products and recipes, coming from different suppliers and shipping into different markets around the world through multiple supply and distribution chains. Paper-based processes simply can’t handle the scale and complexity. The bigger your product portfolio, and the more variance, volume, complexity or different country regulations you deal with, the more digitalisation can help. It’s also a key first step to prepare for the future.
That future is digital food: not the food pills of science fiction but wine, beer, cheese, meat and vegetables that have been tracked from the field and farm into the factory for consistent quality.
Using sensors and automation to handle the milk arriving in a dairy and the fermentation of the cheese it’s used to make can cut production time in half and reduce energy costs without compromising quality. Digitising cellars and fermentation tanks gives brewers better control over temperatures, as well as telling them when they can decant a batch of beer and clean the tanks, giving them more time to experiment with beer recipes and letting them brew more varieties of beer without needing to buy more equipment. Recording the pesticide levels in the vineyard and sending alerts if the temperature in a fermenting barrel of wine is too high or too low keeps costs down and the quality of the final bottle up.
This kind of sophisticated monitoring and automation may be the exception rather than the norm today, but can you count on your competition not making those investments? To benefit from this kind of flexibility, consistent quality and reduced costs for your own processes, and to get your share of market opportunities that 3 billion new customers will generate by 2020, it’s time to start implementing your digitalisation strategy now.
Every summer, sportsmen and women compete across a range of disciplines to determine the victors for another year. Sport has come a long way in recent history, reaching a larger audience than ever before, and so have the systems and processes that provide insights to coaches and athletes.
By Neerav Shah, General Manager EMEA, SnapLogic.
Video Assistant Referee (VAR) and goal-line technology now marshal the football field, and advanced facilities with revolutionary functionalities now help to shape and train elite sportspeople. At the forefront of these new technologies is data analytics, providing unprecedented insights into athlete performance, transforming the fan experience, and improving stadium operations. The traditionally analogue sports industry now leverages data generated from measuring heart rate, lung capacity, weather conditions, courtside footage, social media commentary, competitor statistics as well as many other data points. It is important that we fully understand the capabilities and potential challenges of this technology if we want to truly experience its full potential.
Data-driven player performance
Data in sport is not a new phenomenon -- sports that have their roots in technological innovations, such as Formula 1, have long embraced the benefits of data analytics. Today, the modern F1 car is effectively an advanced computer system able to travel at speeds above 200mph. These cars represent the best in automotive innovation with hundreds of sensors constantly monitoring every facet of the vehicle, from tyre wear to engine performance, and then relaying information back to the driver and pit crew.
Tennis, a game steeped in history and tradition, has also seen the growing influence of data analytics on the court. IBM has employed this technology to analyse data points including player position, net clearance, and ball placement from every ATP tour-level singles match since 2005. When analytics is applied to this vast sum of data, IBM is able to deliver its well-known ‘Keys of the Match’, which provide insights into the outcome of singles matchups.
Football clubs are able to make better signings from the recommendations that data analytics provides; Opta’s events coders started logging every shot and tackle in 2006 and today an Opta-coded match contains upwards of 2,000 data points. Other companies trying to leverage data analytics in football include 21st Club who rate a player’s time at a club compared to the overall performance of the team, from which they then assign the player a rating. Clubs use this model to help select signings that will provide the highest value to their club at the lowest cost. The value of data in the sports industry is undeniable and still growing. When the difference between success and failure is decided by a myriad of different factors, data analytics can be the difference that helps players and teams secure a win.
Engaging fan experience
As we develop better ways of monitoring our sportspeople and their performance, we also develop better methods for assessing and improving the fan experience. Crowd volume, commentator tone of voice, fan purchase history, and social media feeds provide perfect data points for analytics companies to examine and deliver insights. This information can be used to better edit playback reels and potentially enable their automatic generation. On the periphery of the game, analytics can be used to personalise and streamline the delivery of food and drinks before, during, and after sports events, using historical and real-time data to know where the pressure points are or will be during busy sales periods.
The challenges of utilising data analytics
Although the benefits of utilising data are clear, it can be a challenging task. Firstly, ensuring that the right data is being fed into analytics engines is essential to generate useful insights. The more factors that can be taken into consideration, the more holistic and complete the picture of player performance. However this also presents the challenge of handling even larger amounts of data, in its many forms, and in its many speeds. These large, complex sets of data need to be integrated, often in real time, to ensure that insights are delivered in a timely manner so the player or coach can take action before it’s too late.
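Real-time integration often comes down to streaming aggregation: acting on a window of recent readings as they arrive, rather than on a post-match batch job. A minimal sketch with a rolling average over sensor readings (the class name and the heart-rate scenario are illustrative):

```python
from collections import deque

class RollingAverage:
    """Keep only the most recent `window` sensor readings so an
    insight is available the moment each new reading arrives."""

    def __init__(self, window: int):
        self.readings = deque(maxlen=window)  # old readings drop off

    def add(self, value: float) -> float:
        """Ingest one reading and return the current windowed average."""
        self.readings.append(value)
        return sum(self.readings) / len(self.readings)

# e.g. heart-rate samples streaming in during a match
monitor = RollingAverage(window=5)
for bpm in [120, 125, 160, 185, 190]:
    avg = monitor.add(bpm)
print(round(avg, 1))  # -> 156.0
```

A coach-facing system would compare that windowed figure against a per-athlete threshold and raise an alert while there is still time to act.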
Like any data-driven organisation, sports organisations need the necessary talent to handle and interpret these new data sets. Data scientists, who also have experience in sports, are in limited supply, mirroring a growing skills gap seen in other industries. As a result, many sports teams are investing in self-service, low-code technology platforms that can help make sense of the data, allowing a larger set of workers to meaningfully contribute to analytics projects without the need for advanced knowledge of data handling.
Organisations across all industries are turning to data analytics to give them an edge against the competition, and the world of sport is no different. Although tech-centred sports such as Formula 1 have long embraced data, it’s a race to the finish for those who’ve historically been less digitally focussed. The benefits of data analytics extend beyond the playing field and into spectator experience, delivering a more engaging fan experience. Data is also transforming stadium operations, helping to cut costs and generate new sources of revenue. To harness data’s true potential, organisations will need to equip themselves with the technology and expertise to be able to deliver game-changing insights that ensure their spot atop the leaderboard.
The August issue of Digitalisation World includes a major focus on Software-Defined Networks (SDN). How has the technology developed to date? Are all SDN solutions the same? What kind of uptake is there amongst end users? And how important is SDN as part of an overall digital transformation strategy? We asked a range of industry experts for their views on the technology.
Josh Flinn, Director of Product Strategy & Innovation at Cybera, offers the following answers :
How has the technology developed?
The Open Networking Foundation and OpenFlow were created in 2011 and that’s when SDN left the lab and entered the enterprise networking world. The basic concept is to separate software from hardware and the control plane from the data plane by taking a programmatic approach to network configuration. This allows for more dynamic data center configuration that can adapt to today’s shifting compute environments that are filled with virtualization and containerization.
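The control-plane/data-plane split can be pictured with a toy flow table: a “controller” compiles match/action rules once, and the “switch” then forwards each packet by table lookup, first match wins. This is a simplification of how OpenFlow-style tables behave, not an OpenFlow implementation; the rule set and field names below are invented for illustration.

```python
def build_flow_table(rules):
    """Control plane: compile an ordered list of (match, action)
    rules. Order matters because the first matching rule wins."""
    return list(rules)

def forward(flow_table, packet):
    """Data plane: per-packet forwarding is a pure table lookup,
    with no control logic executed on the forwarding path."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"  # default action when no rule matches

table = build_flow_table([
    ({"dst_port": 22}, "send_to_controller"),  # punt SSH for inspection
    ({"vlan": 10}, "out_port_1"),
])
print(forward(table, {"vlan": 10, "dst_port": 80}))  # -> out_port_1
```

Reprogramming the network then means rebuilding the table centrally and pushing it out, which is exactly the dynamic reconfiguration the paragraph above describes.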
While SDN offers many advantages over traditional network design and implementation, it hasn’t gained widespread support except from the largest technology companies such as Google and Facebook. This is mostly due to the skills required to implement the solution. SDN requires a deeper understanding of programming, scripting, and coding than most network engineers possess. Software engineers who do have the right skills lack the networking knowledge and experience to know what to program the network to do.
However, many great things have come from SDN. The adoption of the same concepts led to SD-WAN, which has become a transformative technology in distributed networks. SDN has also pushed traditional network vendors to innovate to stay relevant or risk becoming obsolete. Finally, the shortfalls of SDN and the lessons learned have led to a better understanding of what the market needs, and form the basis for Intent-Based Networking, the next major network technology that aims to transform the way we think about designing and building networks.
Digital transformation is a tricky thing to discuss in the context of a particular technology. Digital transformation has been used so much that the spirit of it has become lost. It’s not about just adopting new technology, but it’s about using technology to focus on your customers. Using digital signage to reduce the cost of printing menu boards and increase operational efficiency isn’t digital transformation. Using digital signage, wireless beaconing, and machine learning to perform targeted advertising to loyal customers is transformative. With all that being said, SDN (and by extension, SD-WAN) makes combining and adopting those technologies possible in a faster and more agile way, if you can find the right solutions and the expertise to implement them.
How has SDN technology developed?
Networks today are infinitely more complex than just a few short years ago. They used to consist of a few items of connected computer equipment, but today’s networks are a tangled web of wired and wireless connections, with data moving in every possible direction and changing constantly.
Industry 4.0 means that digital transformation is pervasive among both the private and public sectors -- but those who overlook the underlying network transformation as part of this process will regret it. Ideally, digital transformation should be enabled top-down through the application layer, as this serves as the interface between the applications and network, and bottom-up (platform and connectivity) in parallel.
It is this abundance of data and the need for that data to be tailored and transferred easily that has given rise to SDN. The technology has developed in response to a need for digital flexibility.
Are all SDN solutions the same?
Not at all. One model is network virtualisation, which eliminates the restrictions on local area network (LAN) partitioning imposed by the Ethernet virtual LAN (VLAN) standards. Another is the evolutionary model, which aims to enhance software control of the network and its operations while staying within the boundaries of current networking technology. A third model is OpenFlow, which gives the central control point of a network complete authority over how that network is segmented -- enabling traffic management, for example.
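The OpenFlow model is easiest to see in miniature. The sketch below is not a real OpenFlow controller -- it only illustrates the principle that a central control point holding an ordered rule table can segment traffic as pure policy data. The segment names, prefixes and default-deny behaviour are all invented for the example.

```python
# Illustrative sketch (not real OpenFlow): a central control point holds all
# forwarding rules, so segmentation becomes policy data rather than box-by-box
# configuration. Names and prefixes below are invented for the example.

class Controller:
    def __init__(self):
        self.rules = []  # ordered match -> action rules, first match wins

    def segment(self, name, src_prefix, dst_prefix, action):
        self.rules.append({"name": name, "src": src_prefix,
                           "dst": dst_prefix, "action": action})

    def decide(self, src_ip, dst_ip):
        for rule in self.rules:
            if src_ip.startswith(rule["src"]) and dst_ip.startswith(rule["dst"]):
                return rule["action"]
        return "drop"  # default-deny between segments

ctl = Controller()
ctl.segment("finance-internal", "10.1.", "10.1.", "forward")
ctl.segment("guest-to-internet", "10.9.", "203.0.113.", "forward")

print(ctl.decide("10.1.4.2", "10.1.7.9"))   # forward: finance stays internal
print(ctl.decide("10.9.1.5", "10.1.7.9"))   # drop: guest cannot reach finance
```

Changing segmentation here means editing one rule list in one place, which is exactly the operational shift the model describes.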
SDN is also often conflated with SD-WAN. There are key differences between the two, though: SD-WAN is an extension of SDN. Both share a common heritage; both can be virtualised and both support security integration alongside WAN acceleration, among other capabilities.
In short, while the same underlying principles power SDN and SD-WAN, they are not the same. SDN focuses on the internal network, such as the LAN or the core service provider network. By contrast, SD-WAN focuses on enabling connections between networks and users over the WAN.
The technology scope has now moved beyond SD-WAN to a toolbox that links all services seamlessly from the data centre to the cloud. It is now possible to provide a scalable, high-performance network with embedded security, centralised policies and agile deployment. This underpins the new landscape of multi-cloud environments, including SaaS applications, private data centres, public cloud and remote user connectivity (including branch sites).
How important is SDN as part of an overall digital transformation strategy?
Although SDN is clearly core to digital transformation, people need to look at transformation as a broader issue that requires a comprehensive toolbox of components -- SDN is definitely part of that, as it enables flexibility for data to move seamlessly, but it’s not a silver bullet.
No organisation is an island; each one will be working with many SMEs and other large enterprises in the complex supply chains that fuel our economy. As such, the importance of a well-connected supply chain cannot be overstated -- which is why businesses should look beyond SDN.
The benefits of cloud technology are increasingly understood and embraced across countless industries. However, the cloud's full potential can only be accessed with a robust, cost-effective, and cloud-friendly network. Following on from this, as more users and applications arise, security becomes increasingly problematic and needs to be built-in.
SD-DP (Software-Defined Digital Platform) addresses these needs beyond SDN by integrating SD-WAN and SDDC with platforms, clouds, and systems to form an enhanced underlying network that enables data to flow freely yet securely between multiple clouds. In-built security can then monitor for risks and compliance with multiple standards. SD-DP underpins your digital transformation strategy from the edge to the core to the cloud.
How has the technology developed?
While SD-WAN has typically been delivered from proprietary vendor hardware, Universal Customer Premises Equipment (uCPE) is in line to be the new delivery model. It is an extension of the virtualization paradigm in server environments and data centers to network equipment at client sites. The increased flexibility uCPE offers for adding WAN optimisation, firewalls and more routing capabilities onto a single hardware device, means organisations can adapt to change much more quickly.
What kind of uptake is there amongst end users?
IDC predicts the SD-WAN market will experience a 40% compound annual growth rate through 2022. In fact, SD-WAN orders constitute more than 20% of GTT’s current order backlog, and this figure is only growing as companies seek providers that offer a managed service to support SD-WAN and give them the flexibility to work with different access and connectivity providers.
How important is SDN as part of an overall digital transformation strategy?
Today, many digital transformation initiatives struggle to get up and running because the network platform simply isn’t good enough to support a shift to the cloud. Adopting SD-WAN makes the network simpler to configure and control centrally and ensures that traffic takes the most direct, lowest latency path available to mission critical applications. Optimising the network to support cloud-based applications is fundamental to digital strategy, so this new networking approach has to be prioritised.
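The per-application path selection described above can be sketched in a few lines. This is an illustrative model only, not any vendor's SD-WAN logic; the link names, latency figures and application labels are all invented.

```python
# Toy model of SD-WAN path selection: steer mission-critical traffic onto the
# lowest-latency healthy link, and re-steer automatically on link failure.
# Link names and latency figures are invented for illustration.

links = {
    "mpls":      {"latency_ms": 30, "healthy": True},
    "broadband": {"latency_ms": 18, "healthy": True},
    "lte":       {"latency_ms": 55, "healthy": True},
}

def pick_path(app, links):
    candidates = {n: l for n, l in links.items() if l["healthy"]}
    if app.get("mission_critical"):
        # lowest-latency healthy link for critical applications
        return min(candidates, key=lambda n: candidates[n]["latency_ms"])
    # bulk traffic can take any healthy link; prefer the cheap one if present
    return "broadband" if "broadband" in candidates else next(iter(candidates))

voip = {"name": "voip", "mission_critical": True}
print(pick_path(voip, links))           # broadband (18 ms)
links["broadband"]["healthy"] = False   # link failure: controller re-steers
print(pick_path(voip, links))           # mpls (30 ms)
```

The point of the sketch is that the decision lives in central software, so the policy can be changed everywhere at once instead of per device.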
“SDN has become key to the realization of advanced network slicing capabilities and network forwarding programmability. SDN provides superior network flexibility, allowing network architectures to be sliced into virtual elements. This enables network capacity to be allocated on a per-slice basis, according to service need. It has paved the way for new PaaS (platform as a service) offerings within a single network slice, allowing a greater variety of services and verticals to take advantage of the network.
We’ve seen this manifest itself in a recent European-funded H2020 initiative, FLAME, which leverages SDN capabilities to enable faster access to media and new services for consumers. FLAME does this by optimising media content delivery over an SDN-based programmable network, allowing for reduction in latency, increase in bandwidth and lower cost of deployment.”
Software-defined ECDNs (enterprise content delivery networks) are able to achieve this because, in addition to needing no extra hardware of any kind, they can use existing network infrastructure to meet organizational needs. Using this type of solution, companies can livestream video or send large video files to every employee, no matter where they’re based or what device they’re using, and still maintain network integrity. Being able to send visual content, like key messages from the CEO, to a large and scattered workforce is essential, but executing that process has always presented certain issues, including buffering. Fortunately, that is no longer the case.
While the term “software-defined” has been used in varying contexts, in essence it refers to taking control away from hardware and moving it into the software domain. Typically, when network managers wanted to amend network infrastructure, or mitigate problems, they had to make changes to actual hardware or equipment. With an SDN, network managers can circumvent all the troubleshooting and tinkering they found cumbersome because the solution is controlled through applications. And because applications can be accessed from virtually any location, they can align with modern flexible working practices, integrate with the cloud and allow admins to perform necessary tasks in much less time.
Software-Defined Networks for Video
Across numerous organizations, access to messages from the leadership team is essential to helping workers understand the vision and strategy of the organization, to help them make smart decisions and to motivate and get the best out of team members.
At the same time, workers want to feel connected to management and business leaders want to build relationships with their globally dispersed workforce. The transformational change brought on by younger generations entering the workforce and their preferences for interacting with and consuming information only adds to this.
The demand for video communications in business is real and immediate, pushing enterprises to establish stable and scalable video ecosystems. With an SDN, this stream team becomes a dream team, with live and on-demand video content being delivered seamlessly to all employees, regardless of their location or device, and without impacting the network.
Software-Defined Networks for Scalable Software Delivery
Enterprise IT teams need to understand their network, deliver content in a timely fashion and protect enterprise data and systems from constant security threats.
With the demise of Windows 7, enterprises must push to migrate legacy systems to Windows 10. Migration, along with the new Windows as a Service (WaaS) model, is bandwidth intensive. WaaS means IT will have to deal with bigger, more frequent updates. Businesses will no longer be able to decline updates; rather, they will have to prepare for and install each update within 180 days.
This new update lifecycle establishes a baseline configuration for all Windows PCs on the network with the hope that this new policy will lead to fewer security breaches via vulnerable remote devices.
While the new update policy may seem daunting, an ECDN turns every networked device into a distribution point, moving bandwidth-intensive content from the WAN to the LAN. An ECDN helps IT teams manage the accelerated updates, distribute them to all endpoints, reduce the need for hardware and free up FTEs to work on other organisational initiatives.
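The bandwidth saving is easy to quantify with a toy model. Assuming (purely for illustration) that one device per site fetches the update over the WAN and every other device on that site pulls it from a local peer:

```python
# Toy model of ECDN peer distribution: one WAN fetch per site, everything
# else served over the LAN. Site names, device counts and the update size
# are invented for illustration.

def distribute(update_mb, devices_by_site):
    wan_mb = 0
    lan_mb = 0
    for site, devices in devices_by_site.items():
        wan_mb += update_mb                    # one WAN fetch per site
        lan_mb += update_mb * (devices - 1)    # everyone else fetches locally
    return wan_mb, lan_mb

sites = {"london": 400, "new_york": 250, "singapore": 150}
wan, lan = distribute(5000, sites)  # e.g. a 5 GB Windows feature update

# Without an ECDN, all 800 devices would pull 5 GB each across the WAN:
print(wan)  # 15000 MB over the WAN instead of 4,000,000 MB
print(lan)  # 3985000 MB stays on the local networks
```

Even in this crude model, WAN traffic drops by a factor of the average site size, which is why the approach matters for large, frequent updates.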
Today’s consumers expect connected services everywhere they go – from shopping centres to festivals to train stations – and this is having a significant impact on how we work. A new tech-savvy workforce has become the key driver for technological advancements in the workplace, demanding the same connectivity and access they use in their daily lives.
By Matthew Beale, Modern Datacentre Architect, Ultima Business Solutions.
Many businesses are embracing the mobility and flexibility that today’s technology offers, enabling employees to work when and where they want, promoting satisfaction and increasing motivation. The office cubicles of old have given way to collaboration spaces, improving productivity and promoting knowledge sharing. Staff can work wherever they want and outside the confines of standard office hours.
While this shift has been welcomed by many, the proliferation of consumer devices in the workplace can prove a real challenge for IT departments, compounding the need for better control and increased security.
What’s more, companies are having to evolve quickly to meet the demands of a new generation of employees, while continuing to support those who prefer to work in a more traditional office environment.
With devices being used outside the four walls of the office, security is naturally a big concern for IT departments and is often a major barrier to embracing new ways of working.
Consumer devices, collaborative working practices and publicly available, non-corporate applications are not only becoming commonplace but also essential for productivity. This has led to ‘Shadow IT’ – an unmanaged set of applications, access methods, devices and working practices that have evolved through end users discovering ‘workarounds’ for overly restrictive corporate IT policies.
For companies keen to allow employees to work flexibly and collaboratively – with all the benefits this brings – a fear of how they can do so while protecting both equipment and data can, understandably, keep bosses awake at night. After all, humans make mistakes. None of us are perfect, and a lot of us are guilty of leaving something on a train or bus.
An increased reliance on laptops, mobiles and tablets has driven three distinct changes in how IT departments manage users and their access to data:
1. Two-factor authentication
While security is essential, it’s imperative this doesn’t hinder users from accessing data. Draconian security systems that lock users out end up being circumvented, so exposing businesses to more risk.
Securing user sessions is the first step, and you can do that by utilising two-factor authentication. One of the more commonplace examples is Windows 10, which comes inbuilt with Microsoft Passport – also known as Windows Hello.
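Windows Hello covers the device sign-in factor; for application sign-ins, the most common second factor in practice is a time-based one-time password (TOTP), as generated by authenticator apps. Below is a minimal RFC 6238 sketch using only the Python standard library; the secret shown is the RFC's published test key, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because the code is derived from a shared secret plus the current time window, a stolen password alone is no longer enough to open a session.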
2. Cloud-based file sharing and collaboration
It’s no use having a secure device that an outsider can’t get into if the data is lost when that device is misplaced. While file servers are great for in-office scenarios, they aren’t an elegant solution for the on-the-move, multi-device workforce of today. This is where we start looking beyond traditional solutions into cloud-centric technologies. Depending on the access and control requirements of the data, this might be OneDrive, ShareFile or a number of other technologies.
3. Machine learning security policies
The final piece of the puzzle overarches all of this: data security. Traditional security has been highly structured and log-file based, whereas modern security designs look at trends, baselines and correlations to provide AI machine learning-based insights into the security of the estate. It’s impossible for organisations to protect against every eventuality without destroying the productivity of its users and, instead, businesses are looking at how they can limit the number of threats they face while being able to efficiently discover and remediate problems as they come up.
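The baseline idea can be shown with a deliberately simple statistical sketch. Real products use far richer models, but the principle -- flagging deviation from a learned baseline rather than matching individual log lines -- looks like this (sample data and threshold are invented):

```python
# Minimal baseline anomaly detection: flag a daily activity count that
# deviates sharply from its own recent history (z-score test).
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

failed_logins_per_day = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]  # normal baseline
print(is_anomalous(failed_logins_per_day, 6))    # False: within baseline
print(is_anomalous(failed_logins_per_day, 90))   # True: investigate
```

The same shape of test generalises to any metric with a stable baseline -- data transfer volumes, off-hours access, API error rates -- which is why trend-based detection scales where per-event rules do not.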
A generational shift
As well as security concerns, another barrier to embracing the potential of intelligent workspaces is the challenge of meeting the needs of both a new generation of workers and those used to operating in more traditional ways.
For most organisations, technology is a necessity. However, the way it’s utilised varies massively across users. Generally, there’s a distinction across generations, but there are always those who break the mould.
From my experience, there are three main categories of users: ‘desk centric’, ’first generation mobile’ and ‘mobile first’. There’s also a new category of ‘social first’ users starting to appear in the workplace.
‘Desk-centric’ employees are generally the more experienced among the workforce and tend to hold the belief that work should be done at your desk, in your office, with a desk phone and landline number. They’re generally reaching retirement stage, but still account for a significant proportion of the workforce. Therefore, to enable these IT users to work effectively, they need to use devices that are familiar in feel. For example, a change from a desktop to a laptop with a docking station is a change this group would generally accept, but if their primary device became a mobile, they’d most likely be quite uncomfortable with such a shift.
Those within the ‘first generation mobile’ bracket grew up in the beginnings of the mobile era. They will use mobile devices mostly when on the move, but still like having a ‘base’ to come back to. They can be a challenge to provide for; being able to offer the same user experience across desk-based and mobile devices is the critical factor in satisfying them.
The ‘mobile first’ generation have now firmly made their impression on the workforce. They don’t expect – or even want – a desk. They want to be on the move, they expect continuous access to the data they need and they want it now! To keep them happy, mobile-enabled applications are a must. If an application only runs on a Windows desktop, they won’t accept it.
The newest subset of employees, this group represents a very small proportion of the workforce today but is growing rapidly. ‘Social firsts’ have grown up with social media – it’s how they’ve always communicated and organised activities, and it’s what they expect in a workplace. Therefore, enabling social style business applications and collaboration tools is key to engaging them.
A further impact of the rise of intelligent workspaces is the need to adapt IT strategies to suit. Rather than creating detailed technical strategies spanning three to five years, a more flexible approach is required for this disruptive landscape, allowing companies to constantly evolve and retain the agility needed to take advantage of innovative technologies and trends.
The best IT strategy is one with key long-term strategic goals against which smaller projects are aligned and adapted. Following the implementation of each phased project, an informed review ensures the business benefits were realised and provides a platform to look forward to the next phase, considering industry trends and technological advances to provide the most contemporary and cost-effective solution.
Today’s workspaces have the potential to be more collaborative, flexible and innovative than ever before and companies that fail to embrace this are likely to face struggles with employee retention as well as the ability to attract new talent. In turn, the organisation’s ability to compete with new and innovative digital disruptors entering the market is likely to be hampered.
Embracing new ways of working can prove daunting for many companies, but with the right strategy in place and the best tools to hand, there’s an incredible amount to gain. Welcome to the workspaces of the future.
Fighting cybercrime is a never-ending arms race. If businesses want to get ahead of the bad guys, joining forces will help them fight to win. Having access to the latest information from peers and subject matter experts will typically help all organisations, large or small, deal with any threat more effectively. Ultimately, the most effective deep defence against attack is collaboration, and it's at the heart of today's overall digital transformation.
By Victor Acin, Threat Intelligence Analyst, Blueliv.
It might mean doing something a bit counter-intuitive to reduce asymmetrical information on threats and vulnerabilities: becoming a lot more sociable, in a sense, and actually talking to each other. By sharing our knowledge and intelligence more often and more effectively, we can help organisations stay safe.
This is critical when few organisations today have access to sufficient resources to pay enough attention to their increased digital risk, especially in the cloud. They are tasked, rather, with becoming more strategic and efficient with whatever resources and technologies they have.
Digging in deeper
Meanwhile, cyber threats lurk within an ever-expanding array of business processes that are often not just taken online but handed over to third parties, sometimes in other countries, to manage, monitor and control.
This can include anything from document storage systems to remote working infrastructures and technologically-enhanced customer services and experiences. Cloud and mobile app-based services, IoT systems, AI, DevOps, big data and more have come together.
The answer to all this complexity has to be about going in deep, to proactively construct a responsive defence across multiple layers. It means paying attention to all the human factors too, from staff engagement to office productivity; almost two-thirds of breaches are said to be the result of human error, and we all know how complex those issues can be. Individual IT professionals need to talk to each other, often, and internal departments must communicate and work together too to win these battles.
Vendors must also collaborate much more often (as industry commentators have argued now for years), and the whole supply chain must work more closely with customers.
There's more still that can be done to achieve maximum collaboration: feed the growing need for technical alliances, and sensible collaboration between police forces or other government departments, such as the UK's Home Office, and other industries.
It's a truism that technology has given us an ability to collect vast quantities of different types of data, at high speed. Similarly, we need to make better use of it all. One obvious way is through collaborating and communicating what we know, working together to come up with the very best solutions.
Team up -- and win the war
Threat intelligence gathering and sharing can only be optimised by developing true operational coordination among a much broader range of companies and organisations, including security teams of all sizes using tools such as Blueliv's own Threat Exchange Network.
Targeting threats through a collaborative ecosystem surely cannot fail to reduce time to detection. That's the best way to close the gap in the threat life cycle. Long-term results will include better products and services to counter evolving threats, such as hyper-converged, integrated solutions that work easily out of the box. Even if a zero-day response seems impossible, things can, and must, move a lot faster -- and smarter.
Collaboration within the IT space in general has long needed improvement, and the arena of cybersecurity is no different. Basic cyber-hygiene and best practices will still be needed to combat the growing risks. However, the overarching, most important key to effective deep defence against cyber threats is collaboration. Let's go all the way and "socialise" cybersecurity -- it can be good to share.
The August issue of Digitalisation World includes a major focus on Software-Defined Networks (SDN). How has the technology developed to date? Are all SDN solutions the same? What kind of uptake is there amongst end users? And how important is SDN as part of an overall digital transformation strategy? We asked a range of industry experts for their views on the technology.
Open Ethernet and SmartNIC acceleration drive SDN adoption, according to John Kim, Director of Storage Marketing, Mellanox Technologies:
Software Defined Networking (SDN) in the data center has been a hot idea since 2010. As networks grow, customers want to increase scale and efficiency, with the flexibility to deploy networking hardware from different vendors.
In traditional networks, every switch runs its own control plane in some combination of hardware and software. The packet forwarding rules are not directly accessible and must be managed through the switch vendor’s proprietary software. Networking policy changes must be programmed individually into every switch, and the applications and IT operations software team must build separate interfaces for each switch vendor they use.
Separating the Control Plane from the Data Plane
The increased use of DevOps, microservices, and container management solutions such as Kubernetes also means network settings and rules need to be updated more quickly and more often. The key is to separate the network control plane—with its management and routing rules—from the data plane that performs the packet forwarding. The OpenFlow standard, taken up by the Open Networking Foundation in 2011, allowed network controller software to talk directly to the forwarding tables of each switch, and to manage switches from different vendors using the same interface. But three more developments were needed to make SDN a success: 1) Adding SDN to the hypervisor; 2) Growth of Open Ethernet switches; and 3) Hardware acceleration of virtual switches.
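The separation is easiest to see in a toy model. The sketch below is not real OpenFlow; it only shows one controller holding the policy and programming plain match-to-port entries into several switches' forwarding tables through a single interface. Switch names, prefixes and port numbers are invented.

```python
# Illustrative control/data plane split: the controller owns the policy and
# pushes match -> port entries; the switches only forward. Not real OpenFlow;
# all names and numbers are invented for the example.

class Switch:
    def __init__(self, name):
        self.name = name
        self.table = {}          # data plane: dst prefix -> output port

    def forward(self, dst_ip):
        for prefix, port in self.table.items():
            if dst_ip.startswith(prefix):
                return port
        return None              # no rule: a real switch would punt to the controller

class SDNController:
    def __init__(self):
        self.switches = {}

    def attach(self, switch):
        self.switches[switch.name] = switch

    def install(self, switch_name, prefix, port):
        # one programmatic interface, regardless of switch vendor
        self.switches[switch_name].table[prefix] = port

ctl = SDNController()
for name in ("leaf1", "leaf2"):
    ctl.attach(Switch(name))
ctl.install("leaf1", "10.0.1.", 1)
ctl.install("leaf2", "10.0.1.", 7)

print(ctl.switches["leaf1"].forward("10.0.1.25"))  # 1
print(ctl.switches["leaf2"].forward("10.0.1.25"))  # 7
```

The data plane stays dumb and fast while all routing intent lives in one place, which is precisely what made multi-vendor management through one interface possible.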
VMware Acquisition of Nicira Pushes SDN into the Enterprise
In 2012 VMware made a surprise $1.2 billion acquisition of SDN pioneer Nicira. This gave VMware a centralized network controller to manage hundreds of virtual switches running in its hypervisors. Besides bringing SDN capabilities to the most popular hypervisor at the time, it also gave SDN instant credibility with enterprise customers, along with the ability to use SDN to network their VMware virtual machines.
Open Ethernet Helps SDN Flourish
Open Ethernet popularized switches that could run multiple network operating system (NOS) choices, supporting them through standardized APIs like the Switch Abstraction Interface (SAI) and SwitchDev. For example, Mellanox Spectrum switches can run Onyx (the Mellanox NOS), Cumulus Linux, and Microsoft’s SONiC OS, in addition to other Linux options. Customers can buy the best available switch hardware and manage it with one SDN controller and choose the NOS that best supports their SDN controller and strategy. Extending the above example, one Mellanox switch customer might feel Cumulus offers the best support for their SDN controller, while another chooses SONiC to integrate with a different SDN controller.
The Best Network Adapters Accelerate SDN Virtual Switches
In addition to support in the physical switches, SDN supports virtual switches that run in the hypervisor or on commodity x86 servers, such as Open vSwitch (OVS), an open-source, multi-layer switch. These can run almost anywhere and take direction from different SDN controllers. However, software switches lag in performance and drain CPU cycles from the server, reducing the number of applications that server can run. The most elegant solution to this turns out to be SmartNICs that offload OVS with hardware acceleration on the NIC. For example, the Mellanox ConnectX and BlueField SmartNICs can increase packet forwarding rates up to 5X compared with a standard software OVS, while also lowering networking latency and consuming less server CPU. This means better application performance and more applications running on each server, since the SmartNIC has offloaded most of the networking work. As a bonus, SmartNICs like the Mellanox ConnectX talk directly to SDN controllers such as those from VMware or Nokia Nuage, and the BlueField SmartNIC can even run SDN controller software on its built-in Arm cores.
Best of both Worlds for SDN
SDN is an important part of any Digital Transformation strategy because it allows for more flexible networking, leading to faster and more efficient application deployment. It turns out that the best strategy is to have your SDN controller talk to SmartNICs and Open Ethernet switches through open interfaces, such as Open vSwitch and OpenFlow. That gives you a network that is both software-defined and hardware-accelerated, resulting in the highest levels of flexibility, performance, and efficiency, without risk of being locked in to a proprietary architecture.
Philip Griffiths, Head of EMEA partnerships, NetFoundry, comments:
SDN principles can be traced back to the separation of the control and data planes in the public switched telephone network in the early 2000s, but the term is more commonly associated with later work on the OpenFlow protocol and Nicira in the 2010s. Since then, SDN has moved at different paces depending on which area of the network we are looking at.
The initial focus was on virtualising networks for East-West traffic in the data centre, and later, with the advent of SD-WAN, North-South traffic across internet/private networks (or a mixture of both). While SDN has been used extensively by the hyperscale providers (e.g. GCP, AWS, Microsoft), its uptake in the enterprise has been slower, but is gathering pace.
While we have recently seen a lot of development with SD-WAN and how it can reduce costs and improve application performance for branch connectivity, we have also seen the complexity and challenges which come with it. SD-WAN was not designed for internet-first WAN architectures, and it therefore struggles to support the sheer complexity of cloud-driven traffic. The network node has to support vast numbers of new applications and massively growing business units as organisations expand, diversify and respond to new problems and workloads such as the Internet of Things (IoT).
Rather than replacing legacy networks with SD-WAN in a big-bang approach, which can cause application access issues, organisations want to be able to move progressively to massively distributed, portable and dynamic application architectures using internet-first or internet-only connectivity, as a progressive journey to being cloud- and devops-native themselves. Traditional SD-WAN does not solve top networking issues such as deploying applications to the cloud, hybrid cloud set-ups and Industrial IoT devices in a way that is on demand and gremlin-free.
While the IaaS market has been growing thanks to its ability to provision resources quickly, the network as we know it is losing out. NetFoundry addresses this problem by helping enterprises deploy zero-trust, application-specific networks across multi-cloud and multi-edge environments.
NetFoundry has been able to move beyond the traditional SD-WAN formula by creating a cloud-native platform which changes how the world connects applications. Its zero-trust, cloud-native connectivity solution turns the ordinary public internet into a secure and performant enterprise-class network by enhancing it with next-generation zero-trust cybersecurity, while at the same time boosting "best effort" internet resilience and performance. This extends permissionless innovation to networking (similar to how public cloud extends permissionless innovation to creating apps), putting programmable app connections in the hands of practitioners and delivering private network benefits without private network hardware and wires. A core component of this is the virtual SDN infrastructure NetFoundry has deployed over the public internet, which is abstracted from the end user to increase agility and simplicity while enabling users to benefit from increased performance and security. This allows businesses not only to instantly connect fully software-only, application-based networks in cloud and IoT ecosystems, but also to achieve three to five times higher performance than the underlay network alone. For example, NetFoundry provisions internet overlays with five layers of security built in to keep enterprises safe from DDoS attacks while connecting them to any provider, such as AWS or Azure, from anywhere, at any time, over any underlay internet provider.
Many organisations still have a naïve belief in their traditional technology, and either the fear of failure or a lack of imagination drives them away from adopting software-defined network technology. Early evangelists are few and far between, but they are reaping the benefits of the new networking paradigm and will shape the future of networking in the same way that software-defined thinking transformed the cloud landscape over the last decade.
A comment from Andrew Halliwell, Product Director, Virgin Media Business, on SD-WAN :
“SD-WAN is the “Netflix of networks”. It’s disruptive because it’s flexible, adaptable and empowers network managers like never before. It enables enterprises and public sector organisations to rapidly scale and alter their networks in line with their needs, so when they experience rapid growth or suddenly find themselves inundated with customer demand they can quickly adapt their network traffic and keep vital systems running. It gives network managers the insights, analytics and visibility of traffic they need to prioritise key applications and it gives them easy-to-use tools that empower them to do their jobs as effectively as possible”.
“SD-WAN is also inherently secure as it uses encryption to protect network data, a vital characteristic in an age of evolving cyber-threats. It’s a transformative technology that gives large enterprises the insights to evolve and grow their networks, gaining an edge over their competition”.
“While SD-WAN is an exciting, transformative technology, there are some challenges to consider. First, many CEOs and public sector leaders sometimes overlook the importance of having an agile network, which means there can sometimes be a general apathy about investing in new IT solutions. There are also several myths that continue to persist; for example, the idea that users can get the same benefits by designing and managing their own SD-WAN, and that SD-WAN can be easily configured; both of which are false. A strategic partner will help deliver the solution, carefully assessing an organisation’s needs and incorporating SD-WAN seamlessly to deliver enterprise benefits.”
In this article, Steve Smith, VP, Strategic Industries at ClickSoftware explains how IoT and predictive maintenance can help you increase field service efficiency and productivity, decrease tech travel time and reduce charges incurred from missed SLAs, while giving your customers the service they expect.
Every day, your field service technicians are in the trenches resolving customer issues. What if you could help them resolve more issues on the first visit, reduce idle time, improve customer satisfaction and gain a competitive advantage? The Internet of Things (IoT) and predictive maintenance are reshaping field service as we know it today. Field service organisations that have implemented IoT and predictive maintenance are resolving problems faster, sometimes before they even occur, and gaining operational efficiencies that positively affect the bottom line.
Limit the unexpected with IoT and predictive maintenance
The old adage ‘expect the unexpected’ still holds: even with all the technology available today, unexpected events can never be completely avoided. Whether the result of a major weather catastrophe, equipment failure, or security hack that disrupts business continuity, unexpected events will still happen. Although we can’t predict every issue before it arises, we can have mechanisms in place to be as proactive as possible in sustaining equipment health and minimising downtime, which is where the Internet of Things (IoT) and predictive maintenance come into effect.
The Internet of Things opens up hundreds of cross-device possibilities and efficiencies in service. By bringing machines, devices, vehicles, and equipment online, service providers can effectively close huge communication gaps that currently exist and resolve customer issues faster. When combined with predictive analytics, the IoT empowers organisations to get ahead of problems and the impact felt by customers should a disaster strike.
Shifting IoT from novel to practical
Although IoT in field service is still relatively new, early adopters like manufacturers of capital equipment are approaching greater maturity. They’re leapfrogging other industries in terms of first-time fix rates and overall operational efficiency. Other industries are beginning to recognise the potential benefits, and conversations around IoT are shifting from wide-eyed wonder to practical next steps.
Utility and telecommunications providers are well positioned to benefit by making the infrastructure they maintain smarter and more connected. With the IoT, these industries can better empower their customers to participate in diagnosing and repairing problems. For example, the Smart Meter initiative enables consumers to have greater visibility into their energy consumption, giving them the option to alter habits to reduce their carbon footprint and energy costs.
While it may sound simple, monitoring thousands or even millions of pieces of equipment and identifying errors before they occur is a complex science and can actually cause more confusion than help for an organisation. That is why it is imperative that field service organisations identify their own key maintenance indicators and set up alerts based on their particular industry best practices and business priorities.
Determine Your Data Strategy
IoT sensors allow for large volumes of data to be collected and stored, providing companies with insights into equipment status, creating potential business opportunities, and enabling financing decisions. Every unit can generate hundreds of thousands of data points per minute. In fact, Cisco states that it would take a lifetime to manually analyse the data produced by a single sensor on a manufacturing assembly line. Therefore, the imperative for your whole organisation, not just the field service group, is to identify the key performance indicators that impact your specific group so you can make the most of all that data.
For example, a telecommunications provider monitoring a mobile phone tower may be tracking multiple forms of data such as the volume of calls being picked up through that specific tower and the number of dropped calls. Those two metrics have a different meaning depending upon the hat you wear within your organisation. For instance, an increase in call volume might be important to your strategic planning group as it may indicate the need for a new tower which requires a business case including financing options. However, monitoring the number of dropped calls could be a priority for the field service organisation as it is an indicator of a problem.
From this example, it is clear that a KPI that matters to one group may not have an impact on another group’s priorities. Field service organisations need to develop a set of key metrics that specifically impact their results and consistently monitor for when something is askew.
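The dropped-call example above can be sketched in a few lines. This is an illustrative sketch only, assuming a hypothetical threshold and invented tower names and readings, not a real monitoring product:

```python
# Hypothetical sketch: monitoring one KPI (dropped-call rate) per tower and
# raising an alert only when it crosses a business-defined threshold.
# The 5% threshold and all readings below are invented for illustration.

DROPPED_CALL_THRESHOLD = 0.05  # alert if more than 5% of calls drop

def check_towers(readings, threshold=DROPPED_CALL_THRESHOLD):
    """Return the IDs of towers whose dropped-call rate exceeds the threshold.

    readings: dict mapping tower ID -> (total_calls, dropped_calls)
    """
    alerts = []
    for tower_id, (total, dropped) in readings.items():
        if total and dropped / total > threshold:
            alerts.append(tower_id)
    return sorted(alerts)

readings = {
    "tower-a": (10_000, 120),  # 1.2% dropped - healthy
    "tower-b": (8_000, 560),   # 7.0% dropped - warrants a field visit
    "tower-c": (0, 0),         # no traffic yet - nothing to report
}
print(check_towers(readings))  # ['tower-b']
```

The same total call volume could feed the strategic planning group's capacity model, while only the threshold breach reaches the field service queue.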
Execute on Your Data Strategy
Once data is collected and key monitoring metrics are defined, predictive analytics are applied to convert the data into actionable, useful information. Inventory, service scheduling, and even customer satisfaction can all be predicted. That is, of course, if you have access to the right data.
Predictive analytics requires tracking historical data and leverages statistical algorithms or machine learning techniques to identify the likelihood of future outcomes. Analysing historical data, particularly around equipment failures and past service activities, allows service companies to identify patterns that might indicate a future error and proactively address issues before they become bigger problems.
For instance, in a utility power plant, temperature is one of the most widely measured parameters, as overheating can cause serious damage to equipment, and can be a threat to the safety of the service professionals performing repairs to the equipment. By looking at past maintenance activities and patterns in temperature changes, utilities have the necessary insight to schedule preemptive maintenance when temperatures rise to levels that have led to failures in the past.
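The power-plant example reduces to a very simple pattern: derive a trigger temperature from readings observed before past failures, and schedule maintenance before the equipment gets there. The temperatures and the five-degree safety margin below are assumptions for illustration; a real deployment would use a proper statistical model:

```python
# Minimal sketch of the idea above: learn a maintenance trigger from
# temperatures recorded shortly before historical failures.
# All numbers are invented for the example.

def failure_threshold(failure_temps, margin=5.0):
    """Lowest temperature that preceded a past failure, minus a safety margin."""
    return min(failure_temps) - margin

def needs_preemptive_maintenance(current_temp, failure_temps, margin=5.0):
    """True when the current reading enters the historical danger zone."""
    return current_temp >= failure_threshold(failure_temps, margin)

past_failure_temps = [92.0, 97.5, 95.1]  # deg C readings before past failures

print(needs_preemptive_maintenance(88.0, past_failure_temps))  # True: 88 >= 87
print(needs_preemptive_maintenance(80.0, past_failure_temps))  # False
```

Even this naive rule captures the shift the article describes: the service call is scheduled by the data, not by the failure.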
Increasingly, we're seeing more applications of IoT and predictive analytics in the service industry with sensors embedded in devices and equipment. But with greater connectivity comes greater responsibility; organisations must leverage these enhanced capabilities to transition from reactive to predictive service programmes to address breakdowns before they occur — ultimately maintaining customer satisfaction, today's number-one business priority.
Predictive maintenance: The field service opportunity
Diagnosing and addressing issues before they happen is key to saving time and money on service calls. And as more customer equipment is embedded with sensors, the opportunities for predictive maintenance will likewise increase.
But there’s an important distinction to be made between preventative and predictive maintenance. Preventative maintenance means performing service tasks at regular intervals to ensure no major breakdowns occur and to comply with regulations.
Predictive maintenance, by contrast, means using data-driven insights to better understand equipment and predict when specific parts might fail or the equipment should be replaced. When using IoT sensors and data-driven insights, predictive maintenance can completely revolutionise a field force by delivering more accurate parts performance reports, equipment lifecycles, and more.
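The distinction between the two policies can be made concrete. This sketch assumes a fixed 90-day interval for the preventative case and a deliberately simple linear wear model for the predictive case; both the dates and the wear numbers are invented:

```python
# Contrasting the two maintenance policies described above.
# The interval, wear model and dates are assumptions for illustration.
from datetime import date, timedelta

def preventative_next_service(last_service: date, interval_days: int = 90) -> date:
    """Fixed-interval policy: service every N days regardless of condition."""
    return last_service + timedelta(days=interval_days)

def predictive_next_service(today: date, wear_pct: float,
                            wear_pct_per_day: float,
                            fail_at_pct: float = 100.0) -> date:
    """Data-driven policy: schedule just before the predicted failure date."""
    days_left = max(0, int((fail_at_pct - wear_pct) / wear_pct_per_day) - 1)
    return today + timedelta(days=days_left)

print(preventative_next_service(date(2019, 5, 1)))  # 2019-07-30, condition ignored
# Wear at 85% and rising 0.5%/day reaches 100% in 30 days; go a day early:
print(predictive_next_service(date(2019, 6, 1), wear_pct=85.0,
                              wear_pct_per_day=0.5))  # 2019-06-30
```

The preventative date is fixed in advance; the predictive date moves as sensor data updates the wear estimate, which is where the efficiency gain comes from.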
The era of smart field service is here
Resolving jobs efficiently and quickly is key to achieving field service profitability. But often field technicians are unable to obtain necessary details for achieving swift resolution prior to arrival. The result? They return to job sites more than once.
The Internet of Things is helping to eliminate the need for return visits: when technicians arrive with all the information they need about the problem, first-time fix rates improve drastically.
By embedding smart devices in the field and identifying key performance indicators across a number of scenarios, field service management organisations are embarking on a new era of smarter field service. Field-based equipment can now send service signals immediately, and log performance data in real time. This means the need for customer-initiated service calls will soon disappear. In their place, machines will call for service before the customer is even aware there is a problem.
Where to implement IoT first
1. Embed sensors on equipment: The first and most obvious application is to focus on equipment that needs regular maintenance. Consider embedding temperature, pressure, or other sensors on key pieces of equipment, enabling them to communicate for help when thresholds have been reached, instead of customers discovering and reporting equipment issues to service providers.
2. Bring vehicles online: Effective field service requires technicians to remain efficient both on the road, and at the job site. By equipping your service vehicles with sensors, you can monitor vehicle wear and tear, and set alerts for regular required maintenance. This ensures you optimise when you take these vehicles off the road, rather than at times that will take away from the daily productivity of the field technicians.
3. Enable wearables: Empower your technicians with wearables such as smartwatches that monitor biometrics. This improves employee health and safety and alerts the back office if a technician is in a potentially dangerous situation.
The advantages of IoT adoption
Many technologies hold transformative potential for field service providers, but it’s short-sighted to either hail any emerging technology as a panacea or wholesale dismiss it as a bad fit. With IoT, as with any technology, the key to getting real value is understanding which problems need solving, then finding the solution to match.
With IoT your service organisation can:
The urgency around IoT adoption varies from one vertical industry to the next, but the cost of service delivery and the need for greater visibility are universal concerns. The old business adage “you can’t manage what you can’t measure” certainly applies. The business benefits of predictive maintenance are powerful and can include an increase in the number of jobs per technician per day or a significant reduction in critical failures.
Investing in the appropriate data framework, infrastructure, and processes is essential to fully leverage the potential of IoT. Although it will require rethinking your operations and the role service technicians play, if executed properly, predictive maintenance will deliver dividends in the form of measurable ROI.
With IoT, field service organisations can not only increase efficiency and productivity, but slash the cost of missed SLA penalties, directly impacting customer satisfaction and profitability. IoT can deliver the operational gains needed that positively affect your bottom line and provide the kind of customer experience that fosters retention.
There are an estimated 1,600 AI startups just in Europe, not factoring in the AI initiatives in large tech companies, so the wait for new AI graduates remains long. Microsoft has recently announced the goal of training 15,000 new AI professionals by 2022, which is a good start but not enough to fill the estimated millions of roles that are currently vacant.
In a recent study, Microsoft and IDC found that the shortage of workers with AI skills has stopped companies that want to adopt AI from being able to do so. Until more highly skilled AI developers enter the workforce, organisations must find creative ways to supplement the talent they need to initiate their AI projects across industries—whether those projects involve voice, image, or pattern recognition, enabling autonomous movement or simulating realistic conversations. These innovations can underpin a new generation of healthcare tools, smart home devices or digital personal assistants. Instead of solely searching for new graduates, more companies need to start actively investing in training current employees and in hiring people with AI-adjacent skills.
A misguided focus on graduates
Many organisations assume they should be finding students or recent graduates to meet their AI needs. SnapLogic conducted a study that found that 49% of IT decision makers believe recruiting from universities is important to getting an effective AI team in place.
However, a study on global AI talent by Tencent suggests education is actually the bottleneck. Most universities do not yet offer AI-specific training courses. The University of Oxford has announced plans for a new college that will recruit hundreds of post-doctoral researchers and graduate students each year in several areas, among them AI and machine learning—but courses will not start until at least 2020, and PhDs take several years to earn. In universities that do have AI research and teaching programs, top professors are being recruited away from academia, making it harder to find staff to teach new students.
To build an AI team, companies will inevitably have to set up training for existing employees. SnapLogic found that this is happening to some extent at 68% of surveyed organisations, but considering that 93% of organisations said they are fully invested in moving forward with AI, this number should be higher.
The importance of developing skills in-house
Companies in niche areas and smaller organisations should not be too rigid in their hiring preferences. Part of this means not looking only for people who already have AI experience, and instead hiring employees who show the hard and soft skills that could eventually make them valuable members of an AI team. These include flexibility and enthusiasm for learning, complex problem-solving skills, and a solid grasp of mathematics, data visualisation and analytics, and/or security.
At Diffblue, we are lucky to be based in Oxford and in commuting distance from London; we have a pool of talent from varied backgrounds. But even here, there isn’t an abundance of readily available AI experts, which is why we started investing early in hiring talented computer scientists and developers who may not have ever worked in AI before, and training them to work with our AI engine. We expect that new team members will spend three to four months learning the inner workings of our technology before they can actively contribute to new development, but the investment has been worthwhile.
Putting effective training in place
The current barrier to training might be that the majority of companies that are willing to invest and retrain workers lack an understanding of where to start. Like the shortage of AI degree programs, the shortage of AI training programs for businesses stems from the lack of teachers. However, scalable learning solutions exist.
If you don’t have any in-house AI experts already available to help train new hires, consider offering your staff the opportunity to take online courses in AI or machine learning. Encourage collaboration and co-learning. A team of developers who know how to use these techniques will be more likely to come up with ways to apply them to innovative new features for your business.
Education in AI is critical, and it doesn’t have to be done solely in universities. Organisations can and should be investing in internal training and skills development programmes to meet the growing demand for AI talent and to ensure their workforce is evolving with their business needs.
By Filip De Greve, Product Marketing Manager at Nokia.
The bread and butter for the network operator is broadband access, but this daily bread is no manna from heaven. Operators face an overwhelming number of day-to-day tasks to deliver the best possible broadband experience. The human factor – how people perform management tasks and work together to keep the network running smoothly – is what gets the job done. However, with increasing service level expectations and network complexity, the operational undertaking will soon become unsustainable without the right set of tools, such as automation and artificial intelligence.
The software-defined access network (SDAN) gets us closer to this goal of applying more automated and intelligent control over the network. I see the broadband industry as being halfway in a four-phase journey with SDAN, where openness is indispensable for a successful digital transformation. We started in a world of scripted, proprietary, network-centric systems, that came with isolated point-and-click management systems, which are extremely inflexible and where every FCAPS action needs endless human intervention. We have progressed into the second phase where we have coalesced around the industry-wide adoption and standardization of open APIs for network management, such as NETCONF/YANG. We’ve put a lot of effort into open initiatives like OB-BAA and standardization bodies like the Broadband Forum to ensure that we have a common set of data models to work with. Standardization and interoperability are fundamental to realizing the benefits of truly open and programmable networks. They save resources for both operators and vendors, as they are freed from endless integration and testing cycles and can focus instead on innovation and customer value.
At this stage, the programming is still very imperative (if x, do y). However, we’re now heading into the third phase, where we can operate networks in a more intuitive way. This will see us applying powerful intent-based automation where the operator defines policies and the network will be able to self-adjust, automatically finding misconfigurations and fixing them via audit facilities, synchronization processes and closed-loop automation. This is where the rubber hits the road. Intent-based networking is ideal for automating repetitive and dynamic processes that are a drain on a service provider’s resources.
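The jump from imperative "if x, do y" rules to intent-based operation is essentially the move to a reconciliation loop: the operator declares desired state, and an audit process finds and corrects drift. The sketch below is a toy illustration of that loop only; the port names and flat key/value config model are hypothetical, whereas real SDAN systems express state in NETCONF/YANG models as described above:

```python
# Toy sketch of closed-loop, intent-based reconciliation: compare declared
# intent against actual device state and emit the corrective actions.
# Keys and values are hypothetical stand-ins for YANG-modelled config.

def reconcile(intent: dict, actual: dict) -> dict:
    """Return the settings that must be pushed to bring actual state to intent."""
    actions = {}
    for key, desired in intent.items():
        if actual.get(key) != desired:
            actions[key] = desired  # misconfiguration found: schedule the fix
    return actions

intent = {"port-1/vlan": 100, "port-2/vlan": 200, "port-2/admin": "up"}
actual = {"port-1/vlan": 100, "port-2/vlan": 999, "port-2/admin": "down"}

print(reconcile(intent, actual))
# {'port-2/vlan': 200, 'port-2/admin': 'up'}
```

Run continuously, this audit-and-fix cycle is what lets the network "self-adjust" without a human chasing each misconfiguration.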
The final phase is a self-aware, self-governing, data-driven network that applies machine learning and artificial intelligence to automate operations, predict and prevent issues, and provide detailed analysis for anomaly detection, action recommendation and network capacity planning. The machines have finally taken over – at least the mundane, day-to-day activities. It allows operators to manage service availability from a business perspective and deliver better insight into the customer experience, while the need for detailed inspection by human experts is reduced.
Whether service providers’ interest in SDAN is strategic, perhaps for fixed-mobile convergence or cloud agility, tactical to mitigate risks, or commercial to enable new business models, it is always pragmatic. There is no intention of risking current services and every desire to protect the investment they’ve made in their networks. Their approach to SDAN, therefore, is based on clearly defined and measurable use cases that can be integrated into their current environments, for example always-on networking, zero-touch provisioning, E2E automation, multivendor interoperability and network slicing.
We’re working closely with operators and especially their OSS/BSS teams as even the first steps in adopting SDAN require a major transformation in the way operations are run. It will be the commitment to openness and the application of SDAN to solving real-world use cases that will set solutions apart and take SDAN from virtually promising to pragmatically valuable.
Sascha Giese, Head Geek at SolarWinds, offers the following observations:
1. What kind of uptake is there among end users, and how has the technology developed?
Increasingly today, enterprises of all kinds are turning to software-defined networking (SDN). The versatility of multicloud, paired with the freedoms granted by unshackling from a single infrastructure-as-a-service (IaaS) offering, has acted as enough of an incentive for enterprises to make the switch to centralised and more dynamic SDN environments.
Adoption shows no sign of slowing—according to IDC, the SD-WAN Infrastructure Market is poised to reach $4.5 billion in 2022—that’s a 40.4% compound annual growth rate within five years.
Despite its vast potential for the industry, implementation poses new problems. SDN environments are specialist and require specific approaches and tools when it comes to monitoring. It’s in the best interest of all enterprises to establish these systems and processes from the outset. Doing so will help massively down the road, when SDN stops becoming the next big thing—and becomes the accepted status quo.
It’s easy for IT pros to become complacent in the face of the potential benefits of SDN. All pros should be conscious, however, that these networks are incredibly complex and present new and unique challenges that will need to be managed. Understanding how to deal with these challenges will be vital for IT pros looking to sufficiently monitor the networks of tomorrow.
2. What are the challenges that need to be considered and overcome when adopting SDN?
With adoption of SDN so high, it’s vital for enterprises to start weighing up how such big infrastructure changes are going to affect the day-to-day monitoring of network performance and security. These enterprises would be wrong to think that SDN can simply be bolted on without any change to existing practice. Maintaining existing monitoring systems and practices without audit will leave IT pros blind to the fatal holes their new SDN environments may have opened up in the network.
SDN is more than a simple rejig of the traditional IT infrastructure model—it changes the entire paradigm. SDN environments are intelligent and flexible, meaning services and devices can enter networks almost instantly, and, if the right tools and processes aren’t in place, often without IT knowing. This renders traditional daily security checks obsolete. SDN environments are constantly changing, so effective visibility will only be achieved through dynamic, real-time monitoring.
Multiple tunnels and layers also make the performance of SDN environments difficult to monitor. Basic bandwidth reporting is less effective than in legacy networks. In fact, purely measuring bandwidth can hide instances where an individual vendor’s cloud service is failing and causing performance issues in the entire network. A strong bandwidth reading on the control panel could still equate to connection issues for a frustrated end user.
This is why it’s so important that, from the outset, IT pros are looking to monitor SDN environments with vendor-neutral performance management and development platforms. A key part of this is ensuring performance monitoring contextualises the entire network, and the functionality of the vendors and cloud providers within it. Metrics such as log and flow analysis and deep packet inspection are much better suited to measure the performance of multi-vendor networks than basic WAN indicators.
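The point about bandwidth hiding per-vendor failures can be made concrete with flow records. This is an illustrative sketch, assuming invented provider names and latency figures, of why aggregating flow-level metrics per provider surfaces what a single bandwidth gauge cannot:

```python
# Sketch: aggregate latency per cloud provider from flow records.
# A single healthy bandwidth figure would hide cloud-b's degradation below.
from collections import defaultdict

def mean_latency_by_provider(flows):
    """flows: iterable of (provider, latency_ms) records from flow analysis."""
    totals = defaultdict(lambda: [0.0, 0])
    for provider, latency_ms in flows:
        totals[provider][0] += latency_ms
        totals[provider][1] += 1
    return {p: round(s / n, 1) for p, (s, n) in totals.items()}

flows = [("cloud-a", 12.0), ("cloud-a", 14.0),
         ("cloud-b", 180.0), ("cloud-b", 220.0)]

print(mean_latency_by_provider(flows))
# {'cloud-a': 13.0, 'cloud-b': 200.0} - cloud-b's problem is now visible
```

The same per-provider breakdown applies to packet loss, retransmits or any other flow-derived metric; the design point is attribution, not the specific statistic.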
The industry itself is also working to solve the challenges posed by multifaceted environments. Just this month, Oracle and Microsoft announced a new partnership that will see Oracle Cloud and Azure directly connect—enabling users to move workloads and data seamlessly between the two. Close collaboration between cloud service providers is becoming increasingly common—this can only improve the situation for IT pros, tasked with monitoring an increasingly complex web of hybrid and multi-cloud environments.
Steve Lawrence, Enterprise Director, SSE Enterprise Telecoms, says:
Much like we saw with the slow early adoption of the cloud, Software Defined Network (SDN) adoption rates are expected to accelerate rapidly as businesses continue to develop their understanding of this next-generation technology and start seeing their competitors make movements in the market.
SDN is an approach to networking that allows network administrators to manage services from a single, centralised control system. Through this, admins have greater flexibility and increased management of the business’ network, which ultimately leads to a more agile and cost-effective way of adapting to changing business needs.
Transport Software Defined Networking (TSDN) is a particularly exciting SDN solution because it means businesses can respond instantly to market challenges. An optical TSDN product enables the seamless management of a business’ core network and provides a response to capacity demands in real time. Look at Black Friday, for example. If a retailer can predict that their network is going to need significantly greater capacity over that timeframe, this could be monitored, managed and optimised in real time to make the most of a time-critical opportunity.
A more tangible current focus for business is Software-Defined Wide Area Networks (SD-WAN). For global companies, or those with ambitions for growth across various geographies, flexible IT platforms are key. SD-WAN can meet this requirement, whether it’s for expansion or for merging existing IT systems and networks as part of an acquisition. If deployed correctly, SD-WAN can significantly enhance network capability, in addition to providing a network with a higher capacity for self-optimisation, due to its ability to re-route traffic over multiple network resources.
For multi-site businesses, driving digital transformation requires them to embrace new technologies and they need to be underpinned by secure, agile and efficient networks. When installed at a branch site, SDN enables network managers to have complete visibility over their network estate, enabling increased control, flexibility and agility which in turn drives efficiency and higher productivity. In modern workplaces, employees depend on reliable, continuous access to digital applications. SDN enables networks to be deployed, managed and optimised centrally, meaning quicker time to market and, ultimately, quicker time to revenue.
SDN is still very much an area of growth, and it will significantly shape the future of network architectures over the next few years. What is certain is that the documentation and templating of new network structures has been addressed, and once ownership of the physical layer has been clearly established, SDN can then become the reality that’s been predicted. This will have a significant positive impact on enterprises across the globe - providing, of course, that all the relevant industry players can work together to develop this.
To thrive in the age of continual digital change, businesses require networks that can easily adapt while enhancing productivity – exactly what SDN offers. With the improved visibility, agility and control offered by this solution, enterprises can move forward with the business of transformation, whether that’s introducing new connectivity solutions, onboarding new applications, or integrating with public and private clouds.
It’s no secret that companies in all industries - from healthcare to financial services, to retail and everything in between - are amassing large quantities of data and evolving capabilities that need serious computing power. To cope, more and more businesses are deciding to shift servers to data centres outside of their organisations, unlocking benefits such as infrastructure flexibility, better recovery options, improved collaborative systems, workforce mobility, ease of access to public cloud operators and an overall lower total cost.
By Darren Watkins, MD for VIRTUS Data Centres.
If the data centre challenge is no longer “build vs. buy”, then the question for many has become “which partner do I trust?”, and “where are their facilities located?”. Indeed, arguably, “location” is the key consideration when choosing a data centre partner.
There are many important factors to consider when choosing a partner - physical security, disaster recovery, data centre uptime guarantees, service levels, ongoing support and maintenance. Companies need to strike a balance between location being an important component of choice, and making it the only concern when looking for a data centre provider.
So, how do they get it right?
Understanding the basics
For UK companies storing data on servers located in the US, the data has to travel across the Atlantic whenever it's accessed by customers. That data won’t reach a customer's device as quickly as it would if the data centre were located just outside of London. This difference in speed of delivery has a huge effect on customer satisfaction and loyalty; surveys consistently show that internet users are quick to drop sites with slow page load times as people want access to data instantaneously.
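The speed-of-light argument above is easy to quantify. As a back-of-the-envelope sketch, assuming approximate distances and the rule of thumb that light in fibre covers roughly 200 km per millisecond (about two-thirds of c), the minimum round-trip time imposed by distance alone looks like this; real paths add routing and queuing overhead on top:

```python
# Back-of-the-envelope: minimum round-trip time imposed purely by distance.
# Distances are approximate and the fibre speed is a standard rule of thumb.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~2/3 of the speed of light in vacuum

def min_rtt_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time over fibre, in milliseconds."""
    return round(2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS, 1)

print(min_rtt_ms(5_600))  # London <-> US East Coast: ~56.0 ms before routing
print(min_rtt_ms(30))     # a data centre just outside London: ~0.3 ms
```

Tens of milliseconds per round trip, multiplied across the dozens of requests behind a typical page load, is the gap customers actually feel.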
When it comes to Search Engine Optimisation (SEO), location is just as important. Though the geolocation of a company's server is not the primary SEO factor to consider, it’s an important part of the equation. If a business mainly operates in the UK, it will benefit from better SEO if the data is also stored here.
Many organisations are rightly concerned with meeting the data protection laws that are beginning to crop up all over Europe; GDPR means that UK companies are required to provide adequate protection of all customer data they collect and store. This includes not transferring data outside of the European Economic Area without adequate protection.
Brexit confuses things further as no-one knows what exiting the European Union will do to the data transit rules. There may be some cases in which a country's laws require that certain types of data must be hosted domestically. There may be other occasions in which hosting data in one country is not appropriate if that data is accessed by users in another country. So, when selecting a data centre, companies always need to take data protection laws into account.
There are a number of additional factors to consider. These include local tax structures, access to utilities, availability of suitable networking solutions, local infrastructure, the accessibility of a skilled labour pool, track record and existing customers or reference clients.
All these things combined make it very clear that the physical location of the data centre is important. However, it’s widely discussed that, in cities at least, space is running out. There has been a boom in “mega data centres” to cope with increasing demand, and worldwide data centre space now stands at a whopping 1.94 billion square feet. This lack of land availability and increasing costs have seen data centres moving into the suburbs.
But, it isn’t simply a matter of building on the edge of large cities to get best performance at the lowest cost. A knowledge of the national power infrastructure is needed to future proof any investment; an in-depth understanding of where the fibre operators’ networks exist and being able to provide an “on-ramp” access solution to public cloud platforms is critical to any enterprise deployment today.
At VIRTUS, we took the decision to build data centres in an area around London, which we termed the “Goldilocks Zone”. This enables us to combine lower-cost availability of ample space and power for hyper efficient data centres with the availability of broad and rich connectivity - fibre that digital businesses need. These facilities are far enough from city centres for disaster recovery purposes and avoiding expensive city centre premiums, but close enough to be easily accessible by local and international businesses.
However, location must not be the only deciding factor when it comes to choosing a provider. We’ve touched on physical security, disaster recovery and data centre uptime guarantees - flexibility is equally key. The provider must be able to meet your needs not just now, but for at least several years following selection. So, the savviest firms prioritise the availability of additional space, power and connectivity, and are careful not to hinder their business by choosing a provider that can't scale over time.
Deployment efficiency is important. How fast must the infrastructure be up and running? How quickly can a new cross connect or additional rack space be delivered in the future? Most businesses want their new space set up as quickly and efficiently as possible, and whilst deployment efficiency can be difficult to quantify into a specific statistic or number, companies must make sure potential vendors clearly communicate timelines.
Selecting a data centre or colocation provider is a big decision for any business. After all, mission-critical infrastructure will be housed in these facilities. With this in mind, before making a selection there are a number of factors to consider - many of which are equally as important as physical location.
By Steve Hone CEO and Cofounder, The DCA
Making a business decision on the most appropriate level of physical and operational security is not as straightforward as one might think. On the face of it, the question appears easily answered: if you or your supplier has ISO27001, then it’s job done and a good night’s sleep is assured!
If only it were that simple.
The existence of a valid ISO27001 certificate for the data centre in question is something I would strongly advocate; however, this only represents one piece of a complex puzzle which needs to be solved if you are to adequately protect your company’s assets.
These days business assets are not the buildings you work in but the treasure that lies within them. Data and information are the new currency of the 21st Century and keeping these assets safely locked away and only accessible by those pre-authorised to polish the silver should be a top priority for all business owners.
Simply ticking an ISO27001 box is not enough. Whether you own and operate your own DC or outsource to a third party or cloud provider, it is strongly recommended that you dig a little deeper to ensure you (and your provider) have the most appropriate security strategy in place to meet your business needs. As my father used to say, “read the small print, son, as the devil is in the detail, or lack of it”. This is still wise counsel: time and time again I see reputations that have taken years to build unnecessarily torn down, not in days but in seconds, especially if the security breach is proven to be the result of either complacency or cost savings.
The good news is there are several specialist security consultancy practices you can call upon for help and some of the best are DCA Trade Association members. However, if you wish to do your own homework, finding the information to de-risk may prove easier said than done.
I’m not saying it’s not out there… initial investigations by the DCA have found a massive amount of information on the subject. Therein lies the problem: with so much information available it can quickly become overwhelming, and without a map to guide you through the maze it’s difficult to know where to start, or to decide which is the most appropriate route to take.
This very issue has been recognised by the Physical Access and Cyber Control Security Special Interest Group (SIG) at the DCA. The group has tasked itself with researching and cataloguing as much of the information in circulation as possible with a view to creating a user/reference guide. Additional work is being carried out in tandem on existing threat identification/mitigation recommendations.
Richard Pearman, General Manager at Southco and Chair of the DCA Physical Access and Cyber Control Security SIG, confirmed: “The objective is not to reinvent the wheel, but to review what’s already out there, to identify and then plug gaps, and to make it easier for stakeholders to find the information they are seeking.”
The Chairs of all the Special Interest Groups will meet again the day before the DCA Data Centre Re-Transformation Conference in Manchester, on the 11th September at the Lowry in Media City. This event provides an extremely valuable day hosted by the Trade Association, at which DCA members, academics and peers come together to discuss the key issues and challenges that are having an impact on service delivery moving forward.
If you want to stay one step ahead of the competition and gain strategic insight into the data centre sector, then register to attend this free one-day event HERE.
To find out if becoming a supporting member of the DCA Trade Association could benefit your own business, please visit the following link: https://dca-global.org/membership-levels.
In next month’s journal we take a deep dive into the work of our strategic partners and the collaboration taking place in support of the data centre sector and the consumers who depend on us.
The copy deadline is the 28th August. Please contact Amanda McFarlane (Amandam@dca-global.org) if you would like to submit an article.
By Job Barker, Technical Manager Europe, Chatsworth Products International (CPI)
In a reality where data has become the world’s most valued asset, businesses must ensure their data will be kept safe and uncompromised. As an added challenge, increased migration toward edge compute sites and multitenant data centres (MTDC) has made remote management and physical security a complex task. Businesses selecting a colocation provider want to have the peace of mind that their assets will be strictly secure, while MTDCs must demonstrate their infrastructure is robust and secure from the facility all the way down to the cabinet level, as well as adhere to data privacy regulations.
Recent developments such as the Internet of Things (IoT) and artificial intelligence (AI) have influenced our society to depend greatly on the use of mobile devices to access data instantly. To minimise latency issues, businesses have responded by extending their networks away from the traditional, centrally located data centres out to the edge.
This hybrid architecture – data centre, MTDC, cloud, edge sites – has put an additional strain on security, which now must be applied across multiple sites.
As well as extending their data to the edge, the urgency for data centre owners to protect data became even more apparent in 2018 with the introduction of the General Data Protection Regulation (GDPR), applicable to businesses operating within the European Union (EU). According to GDPR, any business collecting or processing data for individuals within the EU must demonstrate compliance procedures, having taken appropriate technical and organisational measures to keep their data safe.
While cabinet access control may seem like an obvious part of any security policy, data centre owners need to regularly review and, if necessary, update their security processes to protect their customers’ data assets, in order to demonstrate they are compliant with access policies. Per GDPR, data centres in particular will need to be able to demonstrate measures for preventing unauthorised access to electronic communications networks, stopping malicious code distribution and ‘denial of service’ attacks, and preventing damage to computer and electronic communication systems.
First line of defense: physical security
Because most privacy breaches happen on the network, little attention is paid to physical security. For an enterprise-owned, single-tenant site, for example, room-level security could be perceived as sufficient. But particularly in MTDCs and remote sites, physical access control at the cabinet level simplifies management and prevents unauthorised users from accessing the servers and switches on which data is stored.
Most data centre cabinets have keyed locks. But how can organisations ensure the cabinet doors are secure? How do they document access to cabinets? How do they recover keys from users? What is the official response when a key is lost or stolen?
Electronic lock and access control systems automate monitoring, documenting and control of access and allow fast reprogramming if access rights change or if a credential is lost or stolen. These types of control systems support three types of keys:
Something you have – Access Card
Something you know – Keypad Password
Something you are – Biometrics
A comprehensive electronic access control solution can play a vital role in a data centre’s user access management plan.
Simplifying management with an integrated ecosystem
As the last leg of the power chain, rack power distribution units (PDUs) with intelligent capabilities allow for remote monitoring and switching, asset provisioning and capacity planning, to name a few benefits. Recent PDU advancements feature integration with environmental monitoring sensors and physical access control hardware. This means power management, environmental monitoring and access control can be handled at once, via a straightforward, easy-to-use web interface.
Intelligent PDUs are now highly effective at monitoring and managing temperature and humidity through the same web interface, sending notifications to warn data centre managers when they need to take action to ensure service reliability and avoid downtime. Monitoring and detecting smoke, water and motion is also possible.
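The threshold-based alerting described above can be sketched in a few lines. The ranges and sensor readings here are invented for illustration (the temperature band loosely echoes common recommended inlet ranges), not taken from any particular PDU product:

```python
# Allowed ranges per sensor: (low, high). Values are illustrative.
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),
    "humidity_pct": (20.0, 80.0),
}

def check_readings(readings: dict) -> list[str]:
    """Return a notification for every reading outside its allowed range."""
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {name}={value} outside {low}-{high}")
    return alerts

# A hot cabinet triggers a temperature alert; humidity is within range.
print(check_readings({"temperature_c": 31.5, "humidity_pct": 45.0}))
```

In practice the PDU firmware runs a loop like this against live sensor data and pushes the alert strings out as email or SNMP notifications.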
Regarding security, data centre managers are able to monitor, manage and authorise each cabinet access attempt, wherever the cabinet is situated, through remote management of the networked EAC locks. This intuitive type of interface provides log reports for the critical audit trails needed for regulatory compliance. It also reduces the need for wiring the electronic access systems to security panels, eliminating another unnecessary expense.
On the operating expense front, this integrated system allows for a notable reduction in networking costs and deployment times through the ability to network several locks through Secure Array IP consolidation. Secure Array allows up to 32 cabinets or EAC controllers to be networked under only one IP address. This means MTDCs don’t have to pass on unnecessary networking costs to their tenants.
With the continued proliferation of edge computing, it’s now even more essential to ensure intelligent security technologies are rolled out across all locations, not just the central data centre. This will not only ensure that today’s more stringent compliance policies and procedures are adhered to by all employees, it will also help organisations stay one step ahead of emerging threats in the future.
Chatsworth Products Whitepaper: Importance of Cabinet-Level Electronic Access Control for Data Security and Regulatory Compliance
By Professor Ian Bitterlin, Consulting Engineer & Visiting Professor, Leeds University
The need for data centre physical security springs from many places. If you are an enterprise business, then you will know immediately how much security you need, as it will be aligned to the intrinsic value of the data or the cost of failing to provide a service; or it could be forced upon you by regulations applied to your industry, e.g. banking or social services. If you are a colocation provider, then you will want to provide a level of physical security that you think will match or exceed the expectations of potential clients. On the other hand, a large global cloud provider may think that, as no client will try to find out ‘where’ the cloud is, other than perhaps in which national boundary, the need for overt levels of security is reduced – the same applies to the resilience of the IT service provided, as ‘cloud’ is often sold on price and it is difficult to suggest that something that is cheap is superior in quality. The extreme example of security, both physical and cyber, is, of course, data centres for military or secret services, and these will usually be based on paranoia and reflect spending other people’s money without having to justify it, other than it all being ‘in the National Interest’.
Then we must consider what we are protecting the data centre against. Is it unlawful entry (breaking and entering), where the intent is to steal the data or the hardware it resides upon? An example of that was the London data centre where the perpetrators broke in through a roof and succeeded in stealing several servers. The suggestion was that the servers contained tens of thousands of sets of personal financial data, but the police later reported that the servers (their high-end, latest-generation microprocessors, to be precise) were found being sold on eBay to exotic PC gaming machine self-builders. Or is the intent to damage or disrupt the ICT service? This seems a less likely driver, as there is another, more effective, way of achieving that without entering the premises, which we will look at last of all.
Whatever the need, following a Standard is not a bad idea, especially if some form of compliance or certification is required, and we do have the security section of EN50600. It is quite simple although, somewhat disappointingly, it tells you the principles to meet but does not give practical examples of how to meet them. However, the principle is clear: layers of zones, like the rings of an onion, with the crown jewels in the middle and the exterior property boundary on the outside. There are four layers (separating security zones) and the relationship between them is explored. A potential weakness (which you can simply ignore as being a layer) is that the final barrier can be the lock on the ICT cabinet – but most of us know just how flimsy the typical cabinet doors are and how easily and quickly entry could be gained. The penetration of each layer is then judged in terms of hold-up time against a man, or a man with an aggressive machine, etc. In other words, how long can the layer resist compared to the time in which the police (or other security prepared to physically engage) can respond in force on site? This implies that the boundary between the layers, and the zones between them, are closely monitored, alarmed and recorded.
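The hold-up-time reasoning described above reduces to a simple comparison: the cumulative time the layers can resist should exceed the time needed for security to respond in force. All the minute figures below are invented for illustration, not drawn from EN50600:

```python
# Hypothetical hold-up times per layer, in minutes.
LAYERS_MINUTES = {
    "perimeter fence": 4,
    "building shell": 10,
    "data room door": 8,
    "ICT cabinet lock": 1,   # often the flimsiest layer, as noted above
}

RESPONSE_TIME_MINUTES = 20   # assumed on-site response time in force

total_holdup = sum(LAYERS_MINUTES.values())
print(f"Total hold-up: {total_holdup} min vs response: {RESPONSE_TIME_MINUTES} min")
print("Adequate" if total_holdup >= RESPONSE_TIME_MINUTES else "Inadequate")
```

The same arithmetic also shows why monitoring matters: the hold-up clock only starts when the breach of the first layer is detected, so an unmonitored zone effectively contributes zero minutes.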
As we build each layer, some of the paranoia is displayed – but why do I use the term paranoia? There are numerous possible threats that can be mitigated against but have almost never been realised: for example, vehicle traps (bollards) and ram-raid precautions, although no data centre has ever been assaulted in that way. The following principles serve as a shopping list for a physical security plan:
A boundary without adjacent roads (no car bomb or mobile EMP risk) or neighbours presenting physical threats (fire, smoke, dust, chemicals etc).
A 4m metal fence (3m above and 1m below ground level) topped with a coil of razor-wire; where access by machine is easy, heavy steel upright columns set in concrete. The fence can have a ditch to act as an additional vehicle trap and can be fitted with vibration monitoring both above and below ground.
An access road with a 90-degree bend at the end so that a high-speed run-up isn’t possible. Where possible, CCTV monitoring on all local approach roads, and security rounds extended to parked cars.
All staff and visitor cars parked outside the perimeter fence, and accepted only with a checked 24-hour prior appointment. A separate entrance for pre-booked personnel, with an internal fence to the facility entrance.
Deliveries, trucks etc only by prior appointment as before, but through a barriered and trapped holding section before entry is allowed, and only accessible to the loading bay.
The entire exterior fence and the space between the external boundary and the building monitored by CCTV, infra-red and motion-activated sensors, with a dummy system disguising the active camera and sensor system. The space between can have everything from motion, heat, vibration and tunnelling sensors to dense African thorn hedge planting to slow down any unwanted visitor.
Exterior lighting is essential; infra-red is also recommended.
The external wall should have no fenestration and should incorporate a metal Faraday cage. Ideally of heavy solid construction, or incorporating heavy metal mesh, it should be able to resist a machine. No data room should have the external wall as one of its interior walls, so that entry through the wall only leads to the next layer/zone.
This process continues into and around the building, with security access controls (PIN, biometric etc) zoned to each person, CCTV, mantraps with weight checks etc. In a perfect world the security entrance would be unmanned (no bullet-proof glass needed, nor risk of physical threats), with all interaction done via intercom, CCTV and a security tray.
Lastly, for physical threats to the facility, we should these days consider the threat from drones. Those that carry a 1kg payload and have a range of at least 1-2km are cheap and pose a considerable threat. The disruption and anonymity (without apprehension) were clearly demonstrated at Gatwick. The threat to a data centre is simple: multiple drones, each carrying 1 litre of acid, flown into the heat rejection coils (aluminium and copper) of the cooling system plant could disable the facility for weeks. Protection, via physical nets or frequency jamming, needs to be planned if that level of security (or paranoia!) is deemed necessary.
And finally, to disable the facility from performing its task without gaining access, or even approaching too close, there is the threat of cutting the fibre links between the facility and the outside world. Maps of the routes are available, and the fibre-pits are both clearly marked and almost never ‘locked’. Cutting isn’t the fastest, nor the most disruptive, method of disablement – that is probably reserved for fire. A well-timed simultaneous attack, using a jerry-can of finest unleaded in each fibre-pit (and there are often diverse routes to be attacked), and the facility is off-line for a considerable time, or for the duration of an event targeted at that facility. All the security efforts (and paranoia) come down to an issue of remote connectivity that does not involve intrusion or high risk for the ‘perps’.
By Lexie Gower, Datum
The global data centre market is growing strongly and is expected to show a compound annual growth rate (CAGR) of about four per cent between 2018 and 2023 (Arizton, 2018). This growth is fuelled by the increasing digitisation of all aspects of our lives and the rise of technologies such as cloud computing, AI and IoT, which generate ever-increasing volumes of data; almost every action we take generates data of some kind. Consequently, server workloads are growing, which increases the burden on in-house IT facilities.
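A CAGR compounds year on year rather than adding up linearly, so a ~4% rate over the five years 2018-2023 grows the market by roughly 22%, not 20%. A quick check, using an indexed base of 100 rather than any figure from the Arizton report:

```python
def compound_growth(base: float, rate: float, years: int) -> float:
    """Value after compounding `rate` annually for `years` years."""
    return base * (1 + rate) ** years

# A market indexed at 100 in 2018, growing at 4% CAGR to 2023:
print(round(compound_growth(100, 0.04, 5), 1))  # 121.7
```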
Yet more pressure is heaped on IT teams because clients, employees and stakeholders expect 24x7x365 service - downtime is not an option. In light of this need to prevent service interruptions whilst keeping pace with an ever-evolving IT landscape (an evolution that is gaining momentum), it is not surprising that many companies are looking for alternatives to in-house IT facilities for equipment and workloads, and for ways to reduce their CapEx investment costs whilst still ensuring excellent performance, low latency and flexibility.
Data centres to protect business-critical data
Snowballing data volumes mean that the security of critical data and systems is more important and complex than ever. Security risks come in many forms and businesses find themselves fighting on many fronts. Some IT teams might be focused on implementing disaster recovery strategies, others on protecting data and systems against increasingly sophisticated cyber-attacks. Whatever the focus, lost or stolen data and business downtime damage reputations, alienate customers and incur high recovery costs, so must be prevented wherever possible. Only companies with the deepest pockets can develop and sustain in-house IT facilities that match the security offered by well-equipped and highly secure data facilities, so many are turning to data centres to offer them the security provision they need.
The multi-faceted nature of security threats
Broadly, threats can come in two forms – physical and virtual. On a fundamental level, clients who put their trust in data centres are capitalising on a secure facility in which to house their IT estate and workloads. With the best will in the world, most companies realise that their in-house IT facilities couldn’t possibly match the security offered by a good data centre. Mitigating physical security risks includes implementing stringent security processes and protocols to protect the facility from intruders: 24/7 surveillance and permanent security personnel, restricted access through guarded entrances using multi-level access controls, interior and exterior security cameras, perimeter fences and intruder detection alarms – all of this is way beyond the capabilities of most in-house IT facilities.
Physical security goes beyond perimeter fences and gates, however. Those companies looking to data centres to implement their disaster recovery strategies need to be convinced that their IT estate and workloads will be protected from a broad range of disruption scenarios, for example temperature and humidity monitoring, built-in redundant power and cooling, protection against fire, flooding and severe weather. Again, a big ask for anything but the most sophisticated in-house facilities.
Unfortunately, the physical security of IT estate and workloads is only half the battle. We are becoming ever more connected, so attackers are constantly identifying new entry points into our systems and data. That’s why virtual security is becoming more important than ever to ensure that systems and networks are safe from attack - corporate security policies and practices, IP network information security, firewalls and anti-virus to prevent data breaches, continuous monitoring of incidents – the list goes on. Good data centres offer this virtual security as standard and many offer access to a network of specialists for specific requirements.
Understanding your security needs and selecting the right data centre partner
Companies have different security requirements based on their business activities and the sensitivity of the data they handle - a large multinational defence organisation is going to require more sophisticated security provision than a small internet company with no e-commerce facilities on site. So, how should companies choose the right data centre?
Data centre security and resilience can vary widely, and for some businesses, tolerance of unexpected outages can be much higher than for others. Such key selection criteria cannot be left to chance and therefore organisations need to assess the specific tolerance levels for their own particular business. Where this tolerance is extremely low, it is vital to confirm that the data centre offers the highest level of security, highest expected uptime and fully fault-tolerant components and redundancy.
Any data centre worth its salt will have accreditations to demonstrate its capabilities. There are many security-related accreditations, so data centres will select those that they believe are most pertinent and relevant to their business. Examples include ISO 9001 (an international standard that specifies requirements for a quality management system showing the ability to provide products and services that meet customer and regulatory requirements) and ISO 27001 (which demonstrates that companies have an information security management system framework in place).
Choosing the right data centre to house your business-critical IT equipment and workloads can be a daunting task so it’s important to do the necessary research. The DCA Global Data Alliance is a good starting point – it brings together leaders and experts from across the data centre sector and facilitates collaboration and knowledge-sharing in order to enhance best practice. It supports conferences, seminars, webinars and workshops to bring people together and share their experiences.
From data centres housing information for a single organization to co-location data centres where multiple companies are hosting their data in one location, managing physical access at the rack level is becoming a significant challenge for facility managers.
By Mike Fahy, Commercial Product Manager, Electronic Access Solutions, Southco, Inc.
The endlessly growing mountains of personal, private data collected as part of routine transactions in our digital world continue to be a target for cyber criminals, who are moving beyond digital theft to the real world by targeting the servers that contain this data.
In 2017, the global average total cost of a data breach was $3.86 million, up 6.4% from the previous year. As the total cost of data breaches rises, the probability of an organization undergoing a data breach has climbed to a staggering 27.9%, with cybercrime ranked among the top three risks in the world by the World Economic Forum. These breaches grow costlier every day, impacting governments, financial corporations, credit card companies, telecoms and healthcare organizations.
While firewalls, data encryption and antivirus/anti-malware tools handle the logical side of data protection and security, the physical heart of our digital world—also known as the data centre—demands an exceptional level of protection, which can be achieved through a multi-layered approach to access control.
The Risks Keep Growing
As more personal information is pushed into the digital world, the risks and costs of data breaches continue to climb. According to the Breach Level Index, there were 1,765 publicly disclosed data breaches in 2017, leading to the successful theft or loss of 2.6 billion data records. To net it out, that equals approximately 4,949 records stolen every minute, or 82 records every second.
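The per-minute and per-second figures follow directly from dividing the annual total by the minutes in a year; the arithmetic checks out to within rounding of the quoted numbers:

```python
records = 2_600_000_000            # records lost or stolen in 2017
minutes_per_year = 365 * 24 * 60   # 525,600 minutes

per_minute = records / minutes_per_year
per_second = per_minute / 60

print(round(per_minute))  # ~4,947, close to the quoted 4,949
print(round(per_second))  # ~82
```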
Organizations found in violation of data regulations face costly consequences. This situation dramatically elevates the importance of physical protection and security for data centre managers. As more businesses, governments and organizations move toward cloud-based data storage, regulatory bodies are placing a stronger emphasis on data protection, making it more important than ever for data centre managers to ensure that their security administration meets industry standards.
The Payment Card Industry Data Security Standard (PCI DSS), for instance, is regarded as one of the most significant data protection standards in the IT industry today. PCI DSS is designed to protect the personal data of consumers and sets access control requirements for the entities that secure their information.
The regulation calls for monitoring and tracking personnel who might have physical access to data or systems that house cardholder data. This access should be appropriately controlled and restricted. Personnel covered under PCI DSS include full- and part-time employees, temporary employees, contractors and consultants who are physically present on the entity’s premises. The regulation also covers visitors, such as vendors and guests, who enter the facility for a short duration—usually up to one day.
But aren’t most data breaches completed by outside hackers breaking in through firewalls and not by people within an organization? The data says otherwise. In many cases, according to research conducted by IBM, the next attack could be from within an organization.
· In 2015, 60 percent of all attacks were carried out by insiders, either those with malicious intent or those who served as inadvertent actors, by configuring a server incorrectly or leaving a port open by accident.
For the data centre manager, the benefits of compliance are two-fold. Compliance not only protects the confidential nature of the data stored within the data centre, it also protects the data centre from regulatory penalties and the added cost of lost productivity that may occur as a result of a data breach.
Securing Assets with EAS
Managing access to the data centre is becoming more complicated as data housing facilities continue to expand their hosting capabilities. From data centres housing information for a single organization to co-location data centres where multiple companies are hosting their data in one location, traditional key management is becoming a significant challenge for facility managers. Personnel from one or several organizations may access the data centre at any given time, making key management increasingly difficult to track.
Data centres typically have multiple layers of security and access control: at the front door of the building, then a mantrap to get past the lobby, then access control to get into each data centre room, then possibly a cage, depending on the data centre’s structure.
However, it is at the rack level where data security and access control have the potential to fall short. If the servers are behind doors, there may not be physical locks securing those doors. And in older server farms, the server racks are wide open to all who have gained access to the cage that surrounds them. Thus, all of the physical layers of security can’t prevent unauthorized or malicious attempts to access unsecured servers. And if there is an attack or data breach, it becomes more difficult to track down the “who, what, when and where” of the breach if there is no rack-level security and audit trail in place.
Southco’s Modular H3-EM Electronic Locking Swinghandle series provides the flexibility to accommodate any reader technology as an integral component of the electronic lock.
In response, data centre managers are focusing on extending physical security down to the rack level. Cabinet manufacturers are transitioning from traditional lock-and-key mechanisms to integrated solutions that combine electronic locking and monitoring capabilities for optimum security. These electronic access solutions (EAS) allow data centre managers to easily incorporate intelligent locking throughout the facility - from its perimeter down to its servers - using the data centre’s existing security system, integrating with newer DCIM systems, or through a separate, fully networked system.
The remote monitoring capabilities offered by electronic access solutions help data centre managers quickly identify a violation, enabling them to receive updates on their computer or via text or email on their personal devices. An electronic access solution is composed of three primary components: an access control reader or input device, an electromechanical lock and a controller system for restricting, monitoring and recording access. When designing an electronic access solution, it is important that the appropriate electronic lock is chosen for the specific enclosure and provides the intelligence, flexibility and security needed at the rack level.
Electronic locks are actuated by external access control devices, which validate user credentials and produce a signal that initiates the unlocking cycle. Electronic locks can be combined with any access control device from keypads to radio frequency identification (RFID) card systems, biometrics or wireless systems. The access control device can also be integrated into the electronic lock for a streamlined, integrated solution that requires minimal installation preparations.
Each time an electronic lock is actuated, an electronic “signature” is created and captured to monitor access, either locally with visual indicators or audible alarms, or remotely over a computer network. The electronic signatures can be stored to create audit trails that can be viewed at any time, whether on- or off-site, to forensically reconstruct a series of access events. This electronic audit trail keeps track of cabinet access activity, including location, date, time, duration of access and specific user credentials.
These audit trails provide data centre managers with an additional resource: They can track the amount of time a server rack door is opened in order to monitor maintenance and service activity. If a server rack is scheduled for activity that should take 30 minutes, but the audit trail shows the door was open for several hours, management can find out why the delay occurred and exercise better management of service personnel and costs for service.
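Flagging over-long door openings is a straightforward pass over the audit trail. The events, cabinet names and 30-minute window below are a hypothetical sample, not real log data:

```python
from datetime import datetime

# Sample audit-trail entries: (cabinet, opened_at, closed_at, user).
events = [
    ("RACK-07", "2019-06-01 09:00", "2019-06-01 09:25", "tech-A"),
    ("RACK-12", "2019-06-01 10:00", "2019-06-01 13:10", "tech-B"),
]

SCHEDULED_MINUTES = 30  # assumed length of the maintenance window

def flag_overruns(events, limit=SCHEDULED_MINUTES):
    """Return (cabinet, user, minutes_open) for doors open longer than `limit`."""
    fmt = "%Y-%m-%d %H:%M"
    overruns = []
    for cabinet, opened, closed, user in events:
        minutes = (datetime.strptime(closed, fmt)
                   - datetime.strptime(opened, fmt)).total_seconds() / 60
        if minutes > limit:
            overruns.append((cabinet, user, int(minutes)))
    return overruns

print(flag_overruns(events))  # [('RACK-12', 'tech-B', 190)]
```

A report like this turns the raw open/close timestamps into exactly the management question posed above: why was a 30-minute job open for three hours?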
This audit trail can be used to demonstrate compliance with data protection regulations and allows data centre managers to immediately identify and respond to security breaches or forensically reconstruct events leading to a violation. Remote management and real-time monitoring eliminates the need for on-site staffing and reduces costs associated with managing data centre security.
Support for Multifactor Authentication
When designing a new installation or retrofit, it is important to select an electronic lock based on the depth of intelligence and level of protection required.
Many electronic access solution (EAS) suppliers offer a range of electronic locking solutions designed to make implementing rack-level security relatively simple and cost-effective. These include robust cabinet locks integrated into locking door handles – self-contained, modular devices designed to provide multifactor authentication before granting access to a server cabinet.
Multifactor authentication is a growing requirement for many access control scenarios and more data centre managers are implementing it, particularly for server racks containing highly sensitive data. Common multifactor systems typically require two or more of the following factors:
• Something the user knows, such as a PIN code
• Something the user has, such as an RFID card
• Something the user is, verified by a biometric such as a fingerprint
With multifactor authentication, one piece of information alone does not grant access. An electronic lock can be designed to require the user to present an RFID card, and then enter a PIN code on a keypad. There are electronic locking systems that are designed to be modular, allowing different types of access controllers to be easily added to the lock and satisfying the specific level of security for a given server rack.
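The card-then-PIN sequence described above can be sketched as a chain of factor checks, where access is granted only if every configured factor validates. The credential stores and function names here are illustrative assumptions, not any real controller's API.

```python
# Sketch of a two-factor lock check: one piece of information alone
# does not grant access. All credential data is hypothetical.
AUTHORISED_CARDS = {"CARD-7F3A"}          # illustrative RFID card IDs
PIN_BY_CARD = {"CARD-7F3A": "4921"}       # illustrative PIN store

def check_card(card_id: str) -> bool:
    # Factor 1: something the user has
    return card_id in AUTHORISED_CARDS

def check_pin(card_id: str, pin: str) -> bool:
    # Factor 2: something the user knows
    return PIN_BY_CARD.get(card_id) == pin

def unlock(card_id: str, pin: str) -> bool:
    # Every factor must pass before the lock actuates
    factors = [check_card(card_id), check_pin(card_id, pin)]
    return all(factors)

print(unlock("CARD-7F3A", "4921"))  # True: card and PIN both valid
print(unlock("CARD-7F3A", "0000"))  # False: correct card, wrong PIN
```

Because the factors are a simple list, a modular lock could append further checks – a fingerprint verifier, for instance – without changing the `unlock` logic, which mirrors the modular controller design described above.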
The level of security can be further enhanced in a relatively simple manner. For example, there are electronic locking systems that combine RFID cards and fingerprint readers. Technicians assigned to access a server rack using this type of system have their fingerprint data loaded onto the card. To access the server, they present their card, which transmits their fingerprint data to the reader; they then provide their fingerprint to complete access.
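The "template on card" scheme can be sketched as follows. Real systems use proprietary minutiae templates and vendor-specific matching scores; as a stated simplification, this sketch models a template as a plain feature vector and applies a similarity threshold.

```python
# Toy model of template-on-card fingerprint verification: the enrolled
# template travels on the card, and the reader compares it with a live scan.

def similarity(a, b):
    # Fraction of matching features (toy metric, not a real biometric score)
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def verify(card_template, live_scan, threshold=0.9):
    # Grant access only if the live scan is close enough to the card's template
    return similarity(card_template, live_scan) >= threshold

enrolled = [3, 7, 1, 9, 4, 2, 8, 5, 6, 0]   # template read from the card
live_ok  = [3, 7, 1, 9, 4, 2, 8, 5, 6, 0]   # matching live scan
live_bad = [3, 7, 1, 9, 4, 2, 8, 5, 1, 1]   # only 8 of 10 features match

print(verify(enrolled, live_ok))    # True
print(verify(enrolled, live_bad))   # False: 0.8 is below the 0.9 threshold
```

A design point worth noting: because the reference template is stored on the card rather than in a central database, the system still delivers dual verification (have the card, be the person) without the operator holding a repository of biometric data.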
Designing for Compliance
Electronic access solutions provide a strong level of physical access control for a variety of data centre security applications, whether providing storage for one organization or several housed in a colocation environment. Managers of colocation environments have started to adopt intelligent locking systems due to the challenges of protecting access to individual cabinets, rather than “caging” a cabinet or group of cabinets into separate areas of the data centre.
Electronic access solutions are adaptable to both structural designs and control mechanisms that are already in place. Often, building access cards or ID badges are already part of an organization’s access control system; using them for rack-level access eliminates the need to create new or separate credentials.
Expectations for data security and management have changed significantly. Regulations are driving facility managers to consider comprehensive security solutions with monitoring capabilities and digital audit trails to protect sensitive information from the threat of unauthorized access and theft. Regulatory requirements related to data security will continue to increase in response to the constantly changing tactics of data thieves.
Data centre managers can prevent these situations from occurring by optimizing security down to the rack level with electronic access solutions. Electronic locks extend intelligent security from existing building security networks to data centre cabinets. As a result, data centre managers can ensure their facilities and equipment are protected against the risk of data breaches and any penalties associated with non-compliance.
By Andy Billingham, Managing Director – EMKA (UK) Ltd
In these days when we recognise that our personal information has real value, we are all concerned about the security of our data. Just as we would not leave material valuables unguarded, so we are learning the painful lesson that we cannot leave valuable data unguarded. This is about people and customers being confident that their data is safe – and fines for data breaches are going up.
Should we not be looking at the broader picture here? New-build Data Centres, along with enhancements to existing ones, are bringing in state-of-the-art technology for perimeter and internal building security. But in the case of managed hosting and colocation establishments, are they being let down by their clients, when the racks themselves also call for multifactor biometric systems at the cabinet door?
This brings us to one word which defines our thinking on this, and which is the basis upon which we all place our trust: security.
1. What is it?
2. What is the cost?
3. How seriously is it being taken?
These are just three questions out of a myriad that should be asked on both sides of the hosting/customer relationship.
In today’s society the everyday person entrusts the data from their mobile devices of all types to service providers for storage on a global scale, without knowing where this information is held or how secure it is – something we are increasingly aware of as global net and web services come under closer investigation.
It is clear that there are a number of ways to illicitly access this information, either from the cyber perspective via the networks or the physical direct route.
We will deal with this from the perspective of physical theft, where the simple physical removal of data memory devices is often extremely costly – as demonstrated by Health Net’s security breach in 2011, which was devastating in scale, with 1.9 million records affected and Health Net still facing the consequences of this physical security breach.
So, let us start with the first question: security – what is it?
For our purposes we can define it as ensuring the safety of an organisation, its information, or the information of its users against criminal activity such as theft or terrorism – in this instance, within a defined Data Centre.
Purpose-built Data Centres started with fences around the perimeter and external security, employing card access to the Data Centre building itself, then mantraps of varying types along with PIN and card verification, through to biometric systems of numerous kinds. The same approach has been applied internally, within the corridors of the Data Centre and at the entrances to the halls themselves.
This could justly be said to be good enough: the outer perimeter is protected, and only certain people have been given permission to enter the halls.
Nonetheless, according to recent studies, only about 20% of data centres are secure, leaving an overwhelming 80% at risk – despite almost every one of them using the perimeter protective measures outlined above.
What happens once these people are inside the halls?
At this point we can be confident that the Data Centre has done everything reasonably possible to prevent the wrong type of person gaining access to the halls by application of a number of stringent checks, but now we must factor in human nature.
A large proportion of Data Centres nowadays operate on a colocation basis, which means that many different companies use cabinets within the same halls – sometimes even within the same cabinets – which in turn means many different engineers seeking access to neighbouring space.
The cabinets inside these halls have handles with locks on them, but these are generally common, often fragile, industrial locks with only a limited number of key combinations, which makes them vulnerable to all the practical limitations of mechanical key systems.
In this case the hosting company has delivered a physically secure building, but the residents of the halls have left their data – data which in some cases belongs to them as companies and in others to the general public – within these cabinets, potentially open to a breach. This is where human nature comes into it: the people involved are the weakest link in the chain, because this is where the breach can occur.
In order to alleviate the risk of an unwarranted breach due to human interference, there are many ways to protect a cabinet beyond the standard key lock:
• Card (low or high frequency) – however, cards can in some instances be cloned, stolen or loaned.
• PIN code – but someone can see it over your shoulder, copy it if written down, or guess it if set to a common simple series.
• Card and PIN code – the card can in some instances be cloned, stolen or loaned, and the PIN divulged or observed.
• Fingerprint – a pattern algorithm which, depending on the vendor, cannot be replicated and is indisputable.
• Card and fingerprint – a fingerprint template on the card with dual verification which, depending on the vendor, cannot be replicated and is indisputable. Indeed, credit cards are soon expected to include fingerprint analysis, giving three-factor authentication on every transaction with chip, PIN and fingerprint.
For our purposes, in a secure Data Centre we can employ PIN, card and fingerprint at the cabinet door and then multiply this using the “4 eyes” principle, which adds one person to verify another. With two people verified in this way, the system delivers 7 factors of verification.
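The 4-eyes arrangement can be sketched as two independent multifactor checks that must both succeed, with a rule that the verifier is a different person. All names and credential values below are illustrative assumptions.

```python
# Toy sketch of the "4 eyes" principle: the lock opens only when two
# distinct authorised users each pass their own PIN/card/fingerprint check.
CREDENTIALS = {
    "alice": {"card": "CARD-01", "pin": "1111", "finger": "FP-A"},
    "bob":   {"card": "CARD-02", "pin": "2222", "finger": "FP-B"},
}

def verify_user(name, card, pin, finger):
    # Each user must pass all three of their own factors
    c = CREDENTIALS.get(name)
    return bool(c) and (c["card"], c["pin"], c["finger"]) == (card, pin, finger)

def four_eyes_unlock(first, second):
    name1, *creds1 = first
    name2, *creds2 = second
    if name1 == name2:
        return False  # the verifier must be a different person
    return verify_user(name1, *creds1) and verify_user(name2, *creds2)

print(four_eyes_unlock(("alice", "CARD-01", "1111", "FP-A"),
                       ("bob",   "CARD-02", "2222", "FP-B")))   # True
print(four_eyes_unlock(("alice", "CARD-01", "1111", "FP-A"),
                       ("alice", "CARD-01", "1111", "FP-A")))   # False
```

The value of the pattern is organisational as much as technical: no single insider, however well credentialed, can open the cabinet alone.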
All Data Centre installations are required to operate under the umbrella of a range of primary regulations, which include strict coverage of physical theft, as in:
• PCI DSS Requirements 9 and 9.1: “Any physical access to data or systems that house cardholder data provides the opportunity for individuals to access devices or data and to remove systems or hardcopies, and should be appropriately restricted. Use appropriate facility entry controls to limit and monitor physical access to systems that store, process, or transmit cardholder data.”
• HIPAA Title II, Physical Safeguards: “Access to equipment containing health information should be carefully controlled and monitored. Access to hardware and software must be limited to properly authorised individuals.”
• FISMA (FIPS 200 Section 3): “Organizations must limit physical access to information systems, equipment, and the respective operating environments to authorised individuals.”
This level of security for cabinets has been available within the Data Centre market from a small number of vendors for at least the past three years.
Perhaps belatedly, the whole data security industry is now putting physical threats front and centre in its strategy. Cabinet security from EMKA at the cabinet door would seem to be leading the way at the moment within the Data Centre arena and across multiple business sectors, with numerous trials under way by major institutions into varying uses for biometric access solutions. For example, the financial sector is currently trialling PIN and fingerprint for debit and credit cards, while airports and border control agencies are running similar trials encompassing the full range of sensitive personal information on people at all levels of corporate and public life.
To answer the second question: security costs money. The last line of defence in this instance is the lock on the cabinet, and the information stored on the servers inside is invaluable. What locks do you use to secure your house? Would you say they are as good as you can afford and that an insurance company approves of? If you “get what you pay for”, did you pay a few pounds, which might slow a thief down a little, or are you looking to be confidently safe – which will likely cost significantly more?
This brings us back to the third question from the beginning of this article. How seriously is it being taken – or, more to the point, how seriously do you take it?
I would suggest you ask yourself and make up your own mind – whether as an operator of a Data Centre, as a cohabiter storing the information entrusted to you, or as one of the people trusting in the providers. Does your physical security extend from the facility gate right to the cabinet door handle? Have you closed off all foreseeable opportunities? Have you protected the last line of defence – the last point of entry?