‘CIOs and other IT leaders need to be prepared to embrace hybrid IT as the best platform for modern IT departments, but on a level that suits their own business, while finding the best approach to monitoring and managing its usage.’
Edge computing is the critical enabler of cloud-based applications and is the key to providing the fast processing speeds necessary to take advantage of internet of things (IoT) applications. There’s no denying the need for the cloud, but those businesses that restrict themselves to its capabilities could be wasting their investment, not to mention enduring detrimental inefficiencies.
Hybrid ultimately introduces complexity, and complexity impacts people, process and technology in different but equally detrimental ways.
The ability to collect and analyse data at the edge fundamentally changes the way IoT can be leveraged – and provides the opportunity for IoT deployments to, finally, realise their business goals.
Edge computing design requires several new considerations. These include: low computing footprint, low power footprint, AI workloads, reliable and predictable networking, distributed compute and storage.
Moving data and applications to the edges of a network narrows the distance between users and data, resulting in improved speed, reliability and efficiency. This allows data centres to generate, process and analyse large volumes of data in a fast, stable and consistent manner, while businesses are able to maintain operational run-time effectively.
The quotes above, taken from some of the contributions to our major edge/Hybrid IT focus in this issue of Digitalisation World, provide a neat cross-section of reasons why and how traditional data centre and IT infrastructure is having to adapt and co-exist with newer technologies and topologies. As I’ve written before, ‘hybrid’ is the single most important adjective in the technology world right now – whether it’s hybrid data centre infrastructure (in-house, colo, hosting/cloud; centralised, regional and local/edge facilities), hybrid IT infrastructure (in-house, colo, cloud and managed services) or hybrid clouds (private and public, with multi-cloud the latest trend).
The driving force behind the hybrid world, and the emergence of the edge and edge computing, is the ongoing quest to optimise provider/customer interaction by providing the best possible service, wherever, whenever and however the customer demands. And here we have another hybrid world, where there’s a gradual blurring of the lines between work and leisure (I would say downtime, but that might be confusing!). Increasingly, the user experiences and services delivered into our homes are being sought after in the workplace. If we can download and start using an application at home in a matter of seconds or minutes, why should we have to wait days, weeks or even months to access a new application at work?
Dynamic, scalable, flexible, always available and real time are the catchwords that are becoming the everyday currency of the digital world. And the edge, much talked about and now being implemented, is an important part of the infrastructure change required to deliver optimised performance levels to the customer. It’s not the be-all and end-all, but just one of the newest tools in the Hybrid IT box.
Hopefully, the content of this issue of Digitalisation World will give readers more understanding of how the IT world is evolving and, crucially, how it can make a difference to any business.
Happy reading!
Successful businesses will take advantage of a new set of technologies while prioritizing trust, responsibility, privacy and security.
The enterprise is entering a new “post-digital” era, where success will be based on an organization’s ability to master a set of new technologies that can deliver personalized realities and experiences for customers, employees and business partners, according to Accenture Technology Vision 2019, the annual report from Accenture that predicts key technology trends that will redefine businesses over the next three years.
According to this year’s report, “The Post-Digital Era is Upon Us – Are You Ready for What’s Next?,” the enterprise is at a turning point. Digital technologies enable companies to understand their customers with a new depth of granularity; give them more channels with which to reach those consumers; and enable them to expand ecosystems with new potential partners. But digital is no longer a differentiating advantage ― it’s now the price of admission.
In fact, nearly four in five (79 percent) of the more than 6,600 business and IT executives worldwide that Accenture surveyed for the report believe that digital technologies ― specifically social, mobile, analytics and cloud ― have moved beyond adoption silos to become part of the core technology foundation for their organization.
“A post-digital world doesn’t mean that digital is over,” said Paul Daugherty, Accenture’s chief technology & innovation officer. “On the contrary ― we’re posing a new question: As all organizations develop their digital competency, what will set YOU apart? In this era, simply doing digital isn’t enough. Our Technology Vision highlights the ways in which organizations must use powerful new technologies to innovate in their business models and personalize experiences for their customers. At the same time, leaders must recognize that human values, such as trust and responsibility, are not just buzzwords but critical enablers of their success.”
The Technology Vision identifies five emerging technology trends that companies must address if they are to succeed in today’s rapidly evolving landscape:
·DARQ Power: Understanding the DNA of DARQ. The technologies of distributed ledgers, artificial intelligence, extended reality and quantum computing (DARQ) are catalysts for change, offering extraordinary new capabilities and enabling businesses to reimagine entire industries. When asked to rank which of these will have the greatest impact on their organization over the next three years, 41 percent of executives ranked AI number one — more than twice the number of any other DARQ technology.
·Get to Know Me: Unlock unique consumers and unique opportunities. Technology-driven interactions are creating an expanding technology identity for every consumer. This living foundation of knowledge will be key to understanding the next generation of consumers and for delivering rich, individualized, experience-based relationships. More than four in five executives (83 percent) said that digital demographics give their organizations a new way to identify market opportunities for unmet customer needs.
·Human+ Worker: Change your workplace or hinder your workforce. As workforces become “human+” — with each individual worker empowered by their skillsets and knowledge plus a new, growing set of capabilities made possible through technology — companies must support a new way of working in the post-digital age. More than two-thirds (71 percent) of executives believe that their employees are more digitally mature than their organization, resulting in a workforce “waiting” for the organization to catch up.
·Secure Us to Secure Me: Enterprises are not victims, they’re vectors. While ecosystem-driven business depends on interconnectedness, those connections increase companies’ exposures to risks. Leading businesses recognize that security must play a key role in their efforts as they collaborate with entire ecosystems to deliver best-in-class products, services and experiences. Only 29 percent of executives said they know their ecosystem partners are working diligently to be compliant and resilient with regard to security.
·MyMarkets: Meet consumers at the speed of now. Technology is creating a world of intensely customized and on-demand experiences, and companies must reinvent their organizations to find and capture those opportunities. That means viewing each opportunity as if it’s an individual market—a momentary market. Six in seven executives (85 percent) said that the integration of customization and real-time delivery is the next big wave of competitive advantage.
According to the report, innovation for organizations in the post-digital era involves figuring out how to shape the world around people and pick the right time to offer their products and services. They’re taking their first steps in a world that tailors itself to fit every moment — where products, services and even people’s surroundings are customized and where businesses cater to the individual in every aspect of their lives and jobs, shaping their realities.
One company taking individualization and customization to a new level is Zozotown, Japan’s biggest e-commerce company. Its skintight spandex Zozosuits pair with the Zozotown app to take customers’ exact measurements; custom-tailored pieces from the company’s in-house clothing line arrive in as few as 10 days. And it’s not just in the fashion industry where technology is enabling customization previously not possible. U.S. retailer Sam’s Club developed an app that uses machine learning and data about customers’ past purchases to auto-fill their shopping lists; the company plans to add a navigation feature to show optimized routes through the store to each item on that list.
The report notes that companies still completing their digital transformations are looking for a specific edge, whether it’s innovative service, higher efficiency or more personalization. But post-digital companies are out to surpass the competition by combining these forces to change the way the market itself works — from one market to many custom markets — on-demand and in the moment, just as Chinese e-retail platform JD.com is doing with its “Toplife” platform. The service helps third parties sell through JD by setting up customized stores, providing access to its supply chain with cutting-edge robotics and drone delivery. In partnership with Walmart, a physical store in Shenzhen will offer more than 8,000 products available in person or delivered from the store in under 30 minutes. By offering unprecedented customization and speed, JD is empowering other companies while creating a new market for itself.
MuleSoft has published the findings of the 2019 Connectivity Benchmark Report on the state of IT. Based on a global survey of IT leaders, the report reveals that while 97 percent of organizations are currently undertaking or planning to undertake digital transformation initiatives, integration challenges are hindering efforts for 84 percent of organizations. Close to half of all respondents (43 percent) reported more than 1,000 applications are being used across their business, but only 29 percent are currently integrated together, trapping valuable data in silos.
The survey of 650 respondents also reveals that IT is struggling to keep up with business demands, as 64 percent of respondents indicated they were unable to deliver all projects last year. In addition, project volumes are only expected to grow, with respondents predicting on average a 32 percent increase this year. If digital transformation initiatives aren’t successfully completed, nine out of ten organizations believe business revenue will be negatively impacted.
Among the key results of the survey:
The IT delivery gap is widening as new technologies enter the scene
The role of IT is changing from a tactical function to a business catalyst. However, the business’ growing need for IT support is reflected in the increasing number of projects IT is expected to deliver. In addition, with a growing investment in new technologies, organizations are seeing integration challenges hinder digital transformation initiatives.
IT’s new role as a business catalyst
IT’s expanding role is driven by a greater need for support across lines of business. As companies race to digitally transform, what was once an IT-specific need for integration has now expanded to business units across the organization.
Preparing for the future one API at a time
For IT to become a business enabler, organizations are increasingly turning to API strategies to support reuse and self-service. By creating reusable assets, IT enables the business to increase overall delivery speed and capacity.
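The reuse-and-self-service idea behind an API strategy can be illustrated with a minimal sketch in plain Python. Everything here is a hypothetical illustration, not MuleSoft's actual platform: one "customer lookup" asset is built once by IT, then reused by two different business consumers.

```python
# Hypothetical illustration of a reusable API asset.
# IT builds the lookup once; marketing and billing both reuse it.

CUSTOMERS = {
    "c-001": {"name": "Acme Ltd", "tier": "gold"},
    "c-002": {"name": "Globex", "tier": "silver"},
}

def get_customer(customer_id):
    """The reusable asset: a single, shared lookup every team calls."""
    record = CUSTOMERS.get(customer_id)
    if record is None:
        raise KeyError(f"unknown customer {customer_id}")
    return record

# Consumer 1: a marketing app only needs the display name.
def greeting(customer_id):
    return f"Hello, {get_customer(customer_id)['name']}!"

# Consumer 2: a billing app only needs the pricing tier.
def discount(customer_id):
    return 0.10 if get_customer(customer_id)["tier"] == "gold" else 0.0
```

The point of the pattern is that the lookup logic lives in one place: when the underlying data source changes, only the reusable asset is updated, and every consuming team benefits without new integration work.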
“Today, businesses are competing on speed and agility as they race to meet customer expectations. As a result, every organization is undergoing digital transformation in order to offer a completely connected customer experience. It has put integration in the spotlight as a top-level business priority,” said Greg Schott, CEO, MuleSoft. "With the IT landscape only growing more complex, organizations can build their applications networks one API at a time, providing businesses with an agile foundation for success in the digital era.”
"As organizations across all industries digitize their business models, the ability to connect and reuse technology assets becomes a critical capability,” said Steve Stone, technology advisor and former CIO of several Fortune 500 brands including Lowe’s and L Brands. “Reusable APIs serve as building blocks in the application network, enabling new business models and simplifying the expansion of a connected partner ecosystem. MuleSoft’s Connectivity Benchmark Report demonstrates the importance of adopting a comprehensive API strategy in driving desired business outcomes."
Two-thirds of organisations bypass IT when buying new technologies for digital transformation, according to an EIU report sponsored by BMC Software.
According to a new report released by The Economist Intelligence Unit (EIU), organisations and their IT teams are not in sync when pursuing their digital transformation strategies. The report, From gatekeeper to enabler: the role of IT when digital transformation is the norm, sponsored by BMC Software, shows a prime example of this disconnect. Two-thirds of private and public-sector organisations in a survey (66%) say they buy new systems and solutions without involving IT teams—a situation that flies in the face of IT’s traditional role as a gatekeeper of new technologies.
The findings are based on a survey of senior executives and administrators in Asia-Pacific, Europe, Latin America and North America. Reasons for the shortfall in collaboration with IT departments on digital transformation initiatives include:
·misalignment in objectives, with non-IT teams prioritising revenue growth and reducing costs, in contrast to IT teams that typically prioritise integration within existing systems and overall security; and
·time pressures, as demonstrated by the finding that 37% of respondents cite the excessive length of the procurement process as a reason for failing to consult IT teams on the purchase of new technologies.
Nonetheless, despite many companies saying they bypass IT when purchasing new technology, 43% of respondents say their IT teams are still accountable if something goes wrong with a digital transformation initiative. This can be risky if IT teams have not evaluated the technologies in the first place.
This apparent lack of collaboration appears counterintuitive, given the generally positive view of respondents towards the benefits of co-ordination between IT and non-IT teams. Notably, organisations in which IT and non-IT teams collaborate regularly are significantly more confident about overcoming digital transformation challenges. Eighty-nine percent of collaborators say they are confident about overcoming obstacles compared with 55% of non-collaborators.
Another hindrance to seeing the results of digital transformation can be time itself. Among organisations that have had their initiatives in place for only one or two years, just 42% strongly agree their organisation is realising the benefits of digital transformation. This is much lower than the 63% of respondents whose initiatives have been in place for three or more years.
Kevin Plumberg, editor of the report, says: “Digital transformation is not a one-off, unique journey that some organisations are experimenting with. It has become the norm, and companies where IT teams are working closely with the business rather than in silos are better positioned to manage the challenges that inevitably arise.”
Chief Information Officers (CIOs) increasingly see IT moving from a ‘cost center’ to a ‘trust center’ – even as the challenges they face abound, according to the 2019 CIO Survey from Grant Thornton LLP in partnership with the Technology Business Management (TBM) Council.
CIOs who participated in the survey reported that “creating and driving an IT strategy that aligns with overall business/agency objectives” is one of their top priorities – second only to “ensuring that IT systems comply with security and regulatory requirements.”
This focus on strategy is also evident in how CIOs see the institutional role of their IT teams. Of those surveyed:
·Eighty-one percent believe “IT drives innovation or modernization programs.”
·Seventy-five percent think “IT has a voice in business/agency strategy and strategic initiatives.”
·Sixty-six percent of CIOs say their performance should be measured based on “successful execution against strategy and plans.”
“The days of the CIO serving strictly as an IT operator are over,” said LaVerne H. Council, national managing principal for Enterprise Technology Strategy and Innovation at Grant Thornton. “CIOs see themselves as trusted business partners, but the road ahead is not an easy one. CIOs should articulate the value of IT spend in the same terms measured by their business partners.”
Demonstrating value will only grow harder in the face of the challenges that CIOs identified in the survey. Chief among these is “conflicting priorities among stakeholders” – followed by “stakeholders’ resistance to change;” “recruiting and retaining talent;” “aligning IT with business goals;” and “articulating the value of IT spend.”
The road to becoming a trusted business partner
The clearest path forward for CIOs to become trusted business partners is to demonstrate that they can control costs and communicate IT value in a way that resonates with the business. Through technology business management (TBM), for example, leaders can help their C-suite peers understand how IT brings value to their organization.
“With TBM, CIOs and their teams use a data-driven financial framework to evaluate investment decisions using a common language that aligns IT spend to business value,” said Todd Tucker, vice president and general manager of the TBM Council. “With this information, organizations can enable prioritization, optimize business costs and accelerate decision-making. In fact, 74 percent of survey respondents identified ‘the ability to shift spending to innovation or growth’ as the most important benefit of TBM.”
Shifting priorities
Finally, CIOs are shifting their priorities to meet emerging needs and address critical gaps, most notably:
·Eighty-five percent are investing in automation software deployments over the next two years.
·Eighty-three percent have increased spending on cybersecurity.
·Only 30 percent are currently using data to “move from information to insight.”
·They believe the top two barriers to addressing cybersecurity threats are “retaining top-tier talent” and “the increasing sophistication of threats.”
·They think artificial intelligence will be the most impactful area of IT over the next three to five years.
Grant Thornton and the TBM Council conducted the survey in fall 2018, based on responses from IT leaders in both commercial businesses and the public sector.
74% of businesses that use IoT say that non-adopters will have fallen behind rivals within five years.
Vodafone has published the findings of its latest IoT Barometer. Surveying 1,758 businesses worldwide, the Barometer finds that more than a third (34%) of businesses now use IoT, that 70% of these adopters have moved beyond pilot stage and 95% of adopters are seeing the benefits of investment in this technology as it moves into the mainstream.
While use cases for IoT are varied, ranging from medical exoskeletons to connected tyres, the research has found that IoT impacts businesses regardless of size and sector. Sixty per cent of businesses that use IoT agree that it has either completely disrupted their industry or will do so in the next five years. Eighty-four per cent of adopters report growing confidence in IoT, with 83% enlarging the scale of deployments to take advantage of full benefits.
The report also grades businesses in IoT usage by assessing strategy, integration and implementation of IoT deployments. Globally the report found that 53% of adopters fall into the top two levels out of five. Regionally, the Americas is the most advanced, with 67% of adopters falling into the top two levels, compared to 51% in APAC and 46% in Europe. This suggests that businesses in the Americas are progressing faster than those in other markets, moving from individual projects to coordinated, strategic programmes.
The most advanced companies also saw the greatest return on investment in IoT. Eighty-seven per cent of those in the top level reported significant returns or benefits from IoT, compared to just 17% in the “beginner’s” level. These benefits breed increasing reliance on IoT. Seventy-six per cent of adopters say IoT is mission-critical. Some are even finding it hard to imagine business without it — 8% of adopters say their “entire business depends on IoT”.
Stefano Gastaut, CEO IoT, Vodafone Business, commented: “IoT is central to business success in an increasingly digitised world, with 72% of adopters saying digital transformation is impossible without it. The good news is that IoT platforms make the technology easier to deploy for businesses of all sizes and NB-IoT and 5G will improve services and potential. In this climate, companies need to be considering not if but how they will implement IoT, and they must also be fully committed to the technology to realise the strongest benefit.”
Looking to the future, new technology will continue to power the performance of IoT. Over half (52%) of adopters plan to use 5G, which promises to support higher volumes of data, increase reliability and offer near-zero latency. Combined with mobile edge computing, which will process application traffic closer to the network edge, users can expect better performance, less risk and faster data speeds.
Commenting on the results, Michele Mackenzie, Principal Analyst at Analysys Mason said: “The Barometer makes it clear that businesses are increasing their investment into IoT as they gain confidence and begin to develop more advanced solutions. In the short term, users of IoT will continue to access reduced costs and improved efficiency, but increasingly ambitious projects will offer the opportunity to change business models. For example in cities heavy users of roads could pay more, encouraging the use of different modes of transport with knock-on benefits to public health and the environment.”
Progress has published the results of its 2018 Data Connectivity Survey. In the fifth annual survey, more than 1,400 business and IT professionals in various roles across industries and geographies shared their insights on the latest trends within the rapidly changing enterprise data market. The findings revealed five data-related areas of primary importance for organisations as they migrate to the cloud: data integration, real-time hybrid connectivity, data security, standards-based technology and open analytics.
Significant findings from the survey include:
·Data integration has become the #1 challenge, with nearly 50% of respondents pinpointing ever-increasing disparate data sources as a major pain point.
·44% of respondents are worried about integrating cloud data with on-premises data, making real-time hybrid connectivity critical.
·Increased data security vulnerabilities, penalties and regulations are creating new challenges and opportunities for data integration. More than 65% of survey respondents said they must comply with one or more standards.
·Standards-based access is growing in popularity as the number of data sources continues to grow at a rapid pace. Over 50% of survey respondents are currently using ODBC and/or REST.
·REST APIs have become the standard framework for application integration. An impressive 65% of respondents are opting for REST/web APIs for databases.
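As a rough illustration of the REST-for-databases pattern the survey describes, the sketch below serves rows from a relational table as the JSON payload a GET endpoint would return. The table, columns and handler name are assumptions for illustration; a real deployment would sit behind a web framework and a production database driver.

```python
# Illustrative sketch: exposing a relational table as a REST-style
# JSON response. Table and data are hypothetical.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "widget", 3), (2, "gadget", 5)],
)

def handle_get_orders():
    """What a GET /orders handler might return as its JSON body."""
    rows = conn.execute(
        "SELECT id, item, qty FROM orders ORDER BY id"
    ).fetchall()
    body = [{"id": r[0], "item": r[1], "qty": r[2]} for r in rows]
    return json.dumps(body)
```

Wrapping database access in a web API like this, rather than handing out direct driver connections, is what lets any HTTP-capable tool or analytics client consume the data without knowing which database sits behind it.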
·The concept of open analytics is on the rise as organizations seek to query cloud applications with their favorite analytics tool or programming language. On average, organizations are using 2.5 different BI reporting tools, underscoring the need for universal BI connectors to support a variety of needs.
·Relational databases are still critical for many enterprises, as SQL Server (55%), MySQL (40%) and Oracle (37%) are still in active use for most of the businesses surveyed.
“Enterprises are looking to release the power of their data for competitive advantage, while upholding the highest standards for data security,” said John Ainsworth, SVP, Core Products, Progress. “Disparate data sources, migration to the cloud, self-service BI demands, and increasing government regulations create challenges for every organization. Progress DataDirect helps customers to overcome these challenges and create the next generation of powerful data solutions and business applications.”
The ready availability of hacking tools, the wildfire spread of malware and the proliferation of cryptomining have seen social media-enabled cybercrimes grow more than 300-fold.
Bromium has published the findings of an independent academic study into cybercriminals’ increasingly aggressive exploitation of social media platforms. The report details the range of techniques utilized by cybercriminals to exploit trust and enable rapid infection across social media. It also details the range of services being offered in plain sight on social networks, including: hacking tools and services, botnets for hire, facilitated digital currency scams and more.
The findings come from ‘Social Media Platforms and the Cybercrime Economy’, an extensive six-month academic study sponsored by Bromium and undertaken by Dr. Mike McGuire, Senior Lecturer in Criminology at the University of Surrey. The study is the next chapter of ‘Into the Web of Profit’ and examines the role of social media platforms in the cybercrime economy. The key insights are detailed below.
“Social media platforms have become near ubiquitous, and most corporate employees access social media sites at work, exposing businesses, local governments and individuals to significant risk of attack,” commented Gregory Webb, CEO of Bromium. “Hackers are using social media as a Trojan horse, targeting employees to gain a convenient backdoor to the enterprise’s high value assets. Understanding this is the first step to protecting against it, but businesses must resist knee jerk reactions to ban social media use – which often has a legitimate business function – altogether.
“Instead, organizations can reduce the impact of social media-enabled attacks by adopting layered defenses that utilize application isolation and containment,” concludes Webb. “This way, social media pages with embedded but often undetected malicious exploits are isolated within separate micro-virtual machines, rendering malware infections harmless. Users can click links and access untrusted social-media sites without risk of infection.”
Cryptomining and digital currency scams
Since 2017 there has been a 400 to 600 percent increase in the amount of cryptomining malware being detected globally, the vast majority of which has been found on social media platforms. Of the top 20 global websites that host cryptomining software, 11 are social media platforms like Twitter and Facebook. Apps, adverts and links have been the primary delivery mechanism for cryptomining software on social platforms, with the majority of malware detected by this research mining Monero (80 percent) and Bitcoin (10 percent), earning $250m per year for cybercriminals.
“Facebook Messenger has been instrumental in spreading cryptomining strains like Digmine,” said Dr. Mike McGuire, Senior Lecturer in Criminology at the University of Surrey. “Another example we found was on YouTube, where users who clicked on adverts were unwittingly enabling cryptomining malware to execute on their devices, consuming more than 80 percent of their CPU to mine Monero. For businesses, this type of malware can be very costly, with the increased performance demands draining IT resources, causing network infections and accelerating the deterioration of critical assets.”
In addition, social platforms have become increasingly important to the business of digital currency scams involving fraudulent crypto-currency investments. “One trend on social media has been the hijacking of trustworthy verified accounts,” continued Dr. McGuire. “In one case, hackers took over the Twitter account for UK retailer Matalan and changed it to resemble Elon Musk’s profile. Tweets were then sent out asking for a small bitcoin donation with the promise of a reward. Safe to say, nobody who donated got anything in return.”
Social media at the centre of chain exploitation and malware attacks
The report found crimeware tools and services widely available on social media platforms. Up to 40 percent of inspected social media sites had a form of hacking service offering hackers for hire, hacking tutorials and tools to help hack websites. Social media platforms also enable an underground economy for the trading of stolen data, such as credit card details, earning cybercriminals $630m per year.
“Social platforms and dark web equivalents are becoming blurred, with tools, data and services being offered openly or acting as a marketing entry-point for more extensive shopping facilities on the dark web,” said Dr. McGuire. “One account on Facebook offers the opportunity to trade or learn about exploits and advertises on Twitter to attract buyers. We also found evidence of botnet hire on YouTube, Facebook, Instagram and Twitter, with prices ranging from $10 a month for a full-service package with tutorials and tech support to $25 for a no-frills lifetime subscription – cheaper than Amazon Prime. For the enterprise, this raises a very real concern that the ready availability of cybercrime tools and services make it much easier for hackers to launch cyberattacks.”
Social media platforms have become a major source of malware distribution. The research found that up to 40 percent of malware infections on social media come from malvertising, and at least 30 percent come from plug-ins and apps, many of which lure users in by offering additional functionality or deals. Once the user clicks, the malware executes – allowing hackers to steal data, install keyloggers, deliver ransomware, persist and hide for future attacks and so on. The spread of malware is facilitated by large user bases and the fact that many social media sites share user profiles across platforms, enabling “chain exploitation”, whereby malware can spread across multiple social media sites from one account.
“While adverts on Facebook or Instagram may look like they’re promoting Ray-Ban sunglasses or Nike shoes, they’re often more sinister and deliver malware once clicked,” explained Dr McGuire. “Cybercriminals have been quick to see how the social nature of such platforms can be used to spread malware. They embed malware into posts or friends’ updates and use photo tag notifications to persuade users to open infected attachments.”
Social media enabling traditional crime
Social media platforms are also hosting a thriving criminal ecosystem for more traditional crime. They serve as a recruitment center for money mules used for laundering, with posts or adverts offering opportunities to earn large amounts of money in a short time. “As we saw in the previous report, platform criminality extends beyond cybercrime, with traditional crime also being enabled by platforms,” said Dr. McGuire. “These platforms have brought money laundering to the kind of individuals not typically associated with this crime – young millennials and generation Z. Data from UK banks suggests there might be as many as 8,500 money mule accounts in the UK owned by individuals under the age of 21, and most of this recruitment is conducted via social media.”
The illegal sale of prescription drugs is netting criminals $1.9B per year. The report also found a large amount of drugs like cannabis, GHB and even fentanyl being sold on Twitter, Facebook, Instagram and Snapchat. Social media is enabling a wide variety of financial and online romance fraud. “Around 0.2 percent of social media posts examined for this report involved financial fraud, helping to generate $290m in revenue per year,” concluded Dr. McGuire. “Criminals have been quick to understand how to exploit social media to facilitate more traditional crime, whether it’s a vehicle to sell something or research potential victims – for instance, online dating scams generate $138m per year and often rely on using social media pages to trick people.”
Cybersecurity revenues in 2018 were $160.2 billion and will jump an enormous $11.2 billion during 2019, as the focus moves to adherence to GDPR and similar legislation. Growth slows to around $9.8 billion per annum after this, but then spikes once again in 2023/4 as AI-based cybersecurity escalates, reaching $223.7 billion.
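Laid out as a quick back-of-the-envelope calculation, the forecast numbers above are internally consistent. This is a sketch only; the figures come from the report as quoted, but the year-by-year interpolation of the mid-period is an assumption for illustration.

```python
# Spending trajectory implied by the forecast (all figures in $bn).
# 2020-2022 are interpolated at the stated ~$9.8bn/year growth rate.
spend = {2018: 160.2}
spend[2019] = spend[2018] + 11.2          # GDPR-driven spike
for year in (2020, 2021, 2022):
    spend[year] = spend[year - 1] + 9.8   # slower mid-period growth
spend[2024] = 223.7                       # AI-driven spike by 2023/4

print(round(spend[2019], 1))  # prints 171.4
print(round(spend[2022], 1))  # prints 200.8
```

The gap between the 2022 figure (about $200.8bn) and the 2023/4 total of $223.7bn is what the report attributes to the AI-driven spending spike.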
This means cybersecurity spending will rise faster than total IT budgets over the next five years. The European Union’s GDPR (General Data Protection Regulation) has set the agenda for legislation on data privacy and protection worldwide, and that is generating a spike in spending on security measures that ensure compliance. This will continue to ripple around the world between 2019 and 2021.
Later in our forecast period an arms race will develop around AI and machine learning as major cybercriminal gangs and rogue nation states adopt these to launch increasingly sophisticated cyberattacks, pushing spending on countermeasures.
These predictions are made in Riot’s new report “Privacy and state espionage tightens focus on security - Cybersecurity spending forecast to 2024” released today. This is a global forecast by geography and industry vertical, which also highlights differences in cybersecurity strategy and spending between major regions and sub-sectors.
North America is expected to continue to spend the most on security (27%), but both Europe (22%) and China (20%) are rapidly accelerating their spend, with the rest of Asia following closely behind on 16%. North America is expected to lead in almost every market with the exceptions of Industrial and Automotive, where China leads, but only by a tiny margin.
Because the US has been driving the eHealth revolution, it has invested sooner than other countries in associated security, and that shows up clearly in the 2018 regional breakdown. That gap will remain over the forecast period but narrow as other regions catch up on eHealth.
By contrast, in automotive China is emerging as a big spender on cybersecurity, driven by huge investment coupled with a strategy of focusing on safety in autonomous driving, contrasting with the country’s cavalier approach to consumer privacy. Automotive also stands out for having by far the fastest growth in spending on cybersecurity among the vertical sectors covered in the report.
The Riot report describes how cybersecurity threats will evolve over the next five years as monitoring and surveillance based on machine learning algorithms and other AI techniques become widely deployed. But this will have the effect of increasing rather than reducing demand for skilled cybersecurity personnel, because the developing arms race will require experts in what will effectively be a new form of war game.
Cybercriminals will not only attempt to cover their tracks to evade detection by surveillance systems, but will also attack these defenses directly, as is already happening to forensic watermarking systems in TV services.
Seventy-five percent of organisations have expressed concerns about bot traffic (web robots and scrapers) stealing company information, despite the same number already deploying a bot traffic manager solution, according to new research from the Neustar International Security Council (NISC).
These fears centre on the theft of digital content, inventory, pricing and other proprietary information, and come at a time when the level of concern among cybersecurity experts has almost doubled year-on-year.
The mounting concerns are evidenced in the Cyber Benchmark Index, which is a reflection of the current international cybersecurity landscape. At the start of this year, the index hit the highest ever rating (19.4) since NISC began mapping threat levels in May 2017. During the same period in 2018, the cyber benchmark index only reached 10.5.
Aside from bot traffic, security professionals perceived DDoS attacks to be the highest threat to their enterprise, with over half of respondents (52 percent) admitting to being on the receiving end of an attack. This was followed by system compromise, ransomware and financial theft.
However, despite DDoS ranking as the greatest overall danger to businesses, generalised phishing attacks were seen as the fastest-growing concern. When considering where these threats might come from, security professionals viewed the world at large as the biggest worry – a 60 percent rise from the previous reporting period.
“Fears around bot traffic and bot-powered DDoS attacks are extremely valid but by no means new,” said Rodney Joffe, Head of the NISC and Neustar Senior Vice President and Fellow. “However, with the rapid rise of the Internet of Things – whether that be across smart cities, banking or a nation’s critical infrastructure – the ability for bots to cause havoc at a global level has increased significantly. Without the appropriate detection, data scrubbing and mitigation tools in place, IoT devices have the potential to become part of a malicious botnet, whereby hackers essentially weaponise these devices to launch more powerful DDoS attacks. Worryingly, as more and more devices continue to connect to the Internet, these types of attack pose an increased risk to not only the defences of an enterprise, but also to a whole nation.”
“Unfortunately, bot traffic makes up a large proportion of the Internet,” continued Joffe. “So it is key that organisations make sure incoming data is scrubbed in real-time, while also identifying patterns of good and bad traffic to help with filtering. While it is encouraging to see that more organisations are implementing bot traffic manager solutions, it is imperative that businesses employ a holistic protection strategy across every layer for the best level of protection. Implementing a Web Application Firewall (WAF) is crucial for preventing bot-based volumetric attacks, as well as threats that target the application layer.”
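As a toy illustration of the rate-based filtering Joffe describes, identifying patterns of good and bad traffic can start with something as simple as flagging clients whose request rate far exceeds the norm. This sketch is not Neustar’s product; the threshold, function name and data shape are all assumptions.

```python
from collections import Counter

# Assumed threshold for this sketch: a normal client rarely exceeds
# 120 requests in a single minute.
REQUESTS_PER_MINUTE_LIMIT = 120

def flag_suspected_bots(request_log):
    """request_log: iterable of (client_ip, minute) tuples.

    Returns the set of client IPs whose per-minute request count
    exceeds the limit, as candidates for scrubbing or filtering.
    """
    per_minute = Counter(request_log)  # requests per (client, minute)
    return {ip for (ip, _minute), count in per_minute.items()
            if count > REQUESTS_PER_MINUTE_LIMIT}
```

In practice such a signal would be one layer among many; as the article notes, a holistic strategy also needs protection at the application layer (e.g. a WAF), since sophisticated bots deliberately stay under simple rate limits.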
Cloud Industry Forum and Ingram Micro Cloud find SMEs are moving towards next generation technologies, though skills shortages and security concerns are hindering progress.
Although UK SMEs are increasingly convinced about the opportunities presented by next generation technologies, such as AI, IoT and blockchain, many are struggling to incorporate them into their technology roadmaps. This is the key finding from a new research report by the Cloud Industry Forum (CIF) and Ingram Micro Cloud, who warn that the channel must do more to help drive digital transformation within the SME community.
The research, which was conducted by Vanson Bourne, sought to understand how far along UK-based SMEs are on their digital transformation journeys and the challenges that are confronting them along the way. It found that around four in ten respondents had already invested in AI (39%), blockchain (46%) or IoT (43%) to some extent and that a similar proportion thought these technologies would be critical or very important for their organisations over the next five years.
However, despite this enthusiasm, the vast majority identified barriers to their organisation’s digital transformation efforts. Just 29% of SMEs have a formal digital transformation strategy in place, indicating a lack of clear guidance and leadership; 45% lacked digital transformation skills; and almost two-thirds (64%) struggled with security.
For CIF and Ingram Micro Cloud, these findings highlight the opportunity for channel partners who must accelerate their own transformation plans if they are to capitalise on the growing demand for next generation technologies.
Commenting on the findings, Scott Murphy, Cloud and Advanced Solutions Director at Ingram Micro, said: “The findings from the research indicate that SMEs are increasingly aware of the need to embrace a modern workplace and that digital transformation is at the forefront of their business strategies. With this in mind, SMEs are leveraging cloud infrastructure to explore a range of next generation technologies, and are, in fact, further along that road than large enterprises. However, it’s clear that many SMEs don’t have the skills, guidance, and support to safely transform their businesses, and, as such, it’s critical that the channel can step up to the mark and better support their transformation efforts.”
Alex Hilton, CEO of CIF, added: “SMEs are clearly aware of the need for digital transformation and are enthusiastic about the opportunities new digital technologies can deliver. But equally clear is that they’re not getting enough assistance from the channel to see it through. Given that many channel partners are still at the very early stages of their own transformations, and still learning about how they can incorporate next generation technologies into their portfolios, this is to be expected. However, I’d hope that these research findings will spur the reseller community to accelerate their own transformation plans and seize the opportunities on offer.”
Angel Business communications are seeking nominations for the 2019 Datacentre Solutions Awards (DCS Awards).
The DCS Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena, and are updated each year to reflect this fast-moving industry. The Awards recognise the achievements of vendors and their business partners alike, and this year encompass a wider range of project, facilities and information technology categories, together with two individual categories, designed to address all the main areas of the datacentre market in Europe.
The DCS Awards team is delighted to announce Kohler Uninterruptible Power as the Headline Sponsor for this year’s event. Previously known as Uninterruptible Power Supplies Ltd (UPSL), a subsidiary of Kohler Co, and the exclusive supplier of PowerWAVE UPS, generator and emergency lighting products, UPSL is changing its name to Kohler Uninterruptible Power (KUP), effective March 4th, 2019.
UPSL’s name change is designed to ensure the company’s name reflects the true breadth of the business’ current offer, which now extends to UPS systems, generators, emergency lighting inverters, and class-leading 24/7 service, as well as highlighting its membership of Kohler Co. This is especially timely as next year Kohler will celebrate 100 years of supplying products for power generation and protection. Kohler Uninterruptible Power Ltd prides itself on delivering industry-leading power protection solutions and services.
Our Headline Sponsor, Kohler, is joined by Entertainment Sponsor Starline and Riello UPS as a category sponsor.
The 2019 DCS Awards feature 26 categories across four groups. The Project Awards categories are open to end use implementations and services that have been available before 31st December 2018. The Innovation Awards categories are open to products and solutions that have been available and shipping in EMEA between 1st January and 31st December 2018. The Company nominees must have been present in the EMEA market prior to 1st June 2018. Individuals must have been employed in the EMEA region prior to 31st December 2018.
The editorial panel at Angel Business Communications will validate entries and announce the final short list to be forwarded for voting by the readership of the Digitalisation World stable of publications during April. The winners will be announced at a gala evening on 16th May at London’s Grange St Paul’s Hotel.
Nomination is free of charge and entrants can submit up to four supporting documents to enhance their submission. The deadline for entries is 8th March 2019.
Please visit www.dcsawards.com for rules and entry criteria for each of the following categories:
DCS PROJECT AWARDS
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Cloud Project of the Year
Managed Services Project of the Year
GDPR compliance Project of the Year
DCS INNOVATION AWARDS
Data Centre Facilities Innovation Awards
Data Centre Power Innovation of the Year
Data Centre PDU Innovation of the Year
Data Centre Cooling Innovation of the Year
Data Centre Intelligent Automation and Management Innovation of the Year
Data Centre Safety, Security & Fire Suppression Innovation of the Year
Data Centre Physical Connectivity Innovation of the Year
Data Centre ICT Innovation Awards
Data Centre ICT Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre ICT Networking Product of the Year
Data Centre ICT Automation Innovation of the Year
Open Source Innovation of the Year
Data Centre Managed Services Innovation of the Year
DCS Company Awards
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Data Centre ICT Systems Vendor of the Year
Excellence in Data Centre Services Award
DCS Individual Awards
Data Centre Manager of the Year
Data Centre Engineer of the Year
Nomination Deadline: 8th March 2019
www.dcsawards.com
Augmented analytics, continuous intelligence and explainable artificial intelligence (AI) are among the top trends in data and analytics technology that have significant disruptive potential over the next three to five years, according to Gartner, Inc.
Speaking at the recent Gartner Data & Analytics Summit in Sydney, Rita Sallam, research vice president at Gartner, said data and analytics leaders must examine the potential business impact of these trends and adjust business models and operations accordingly, or risk losing competitive advantage to those who do.
“The story of data and analytics keeps evolving, from supporting internal decision making to continuous intelligence, information products and appointing chief data officers,” she said. “It’s critical to gain a deeper understanding of the technology trends fueling that evolving story and prioritize them based on business value.”
According to Donald Feinberg, vice president and distinguished analyst at Gartner, the very challenge created by digital disruption — too much data — has also created an unprecedented opportunity. The vast amount of data, together with increasingly powerful processing capabilities enabled by the cloud, means it is now possible to train and execute algorithms at the large scale necessary to finally realize the full potential of AI.
“The size, complexity, distributed nature of data, speed of action and the continuous intelligence required by digital business means that rigid and centralized architectures and tools break down,” Mr. Feinberg said. “The continued survival of any business will depend upon an agile, data-centric architecture that responds to the constant rate of change.”
Gartner recommends that data and analytics leaders talk with senior business leaders about their critical business priorities and explore how the following top trends can enable them.
Trend No. 1: Augmented Analytics
Augmented analytics is the next wave of disruption in the data and analytics market. It uses machine learning (ML) and AI techniques to transform how analytics content is developed, consumed and shared.
By 2020, augmented analytics will be a dominant driver of new purchases of analytics and BI, as well as data science and ML platforms, and of embedded analytics. Data and analytics leaders should plan to adopt augmented analytics as platform capabilities mature.
Trend No. 2: Augmented Data Management
Augmented data management leverages ML capabilities and AI engines to make enterprise information management categories, including data quality, metadata management, master data management, data integration and database management systems (DBMSs), self-configuring and self-tuning. It automates many manual tasks and allows less technically skilled users to be more autonomous with data, while freeing highly skilled technical resources to focus on higher-value tasks.
Augmented data management converts metadata from being used for audit, lineage and reporting only, to powering dynamic systems. Metadata is changing from passive to active and is becoming the primary driver for all AI/ML.
Through to the end of 2022, data management manual tasks will be reduced by 45 percent through the addition of ML and automated service-level management.
Trend No. 3: Continuous Intelligence
By 2022, more than half of major new business systems will incorporate continuous intelligence that uses real-time context data to improve decisions.
Continuous intelligence is a design pattern in which real-time analytics are integrated within a business operation, processing current and historical data to prescribe actions in response to events. It provides decision automation or decision support. Continuous intelligence leverages multiple technologies such as augmented analytics, event stream processing, optimization, business rule management and ML.
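A minimal sketch of that design pattern, assuming illustrative metric names and thresholds: each live event is scored against historical context and mapped either to an automated action or to analyst-facing decision support.

```python
import statistics

def prescribe(event_value, history):
    """Score a live metric against its recent history and return an action.

    history: recent historical values for the same metric.
    Thresholds here are illustrative assumptions, not a standard.
    """
    baseline = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    z = (event_value - baseline) / spread
    if z > 3:
        return "alert"    # decision automation: trigger a response now
    if z > 1.5:
        return "review"   # decision support: surface to an analyst
    return "ok"
```

In a production system the history would come from a stream-processing engine rather than an in-memory list, and the prescribed actions would feed business rule management; the shape of the loop, though, is the same: current event plus historical context in, prescribed action out.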
“Continuous intelligence represents a major change in the job of the data and analytics team,” said Ms. Sallam. “It’s a grand challenge — and a grand opportunity — for analytics and BI (business intelligence) teams to help businesses make smarter real-time decisions in 2019. It could be seen as the ultimate in operational BI.”
Trend No. 4: Explainable AI
AI models are increasingly deployed to augment and replace human decision making. However, in some scenarios, businesses must justify how these models arrive at their decisions. To build trust with users and stakeholders, application leaders must make these models more interpretable and explainable.
Unfortunately, most of these advanced AI models are complex black boxes that are not able to explain why they reached a specific recommendation or a decision. Explainable AI in data science and ML platforms, for example, auto-generates an explanation of models in terms of accuracy, attributes, model statistics and features in natural language.
Trend No. 5: Graph
Graph analytics is a set of analytic techniques that allows for the exploration of relationships between entities of interest such as organizations, people and transactions.
The application of graph processing and graph DBMSs will grow at 100 percent annually through 2022 to continuously accelerate data preparation and enable more complex and adaptive data science.
Graph data stores can efficiently model, explore and query data with complex interrelationships across data silos, but the need for specialized skills has limited their adoption to date, according to Gartner.
Graph analytics will grow in the next few years due to the need to ask complex questions across complex data, which is not always practical or even possible at scale using SQL queries.
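The kind of multi-hop relationship question that is awkward at scale in SQL, for example "which accounts are within three transfers of account X?", maps naturally onto a graph traversal. This sketch uses a plain adjacency-dict breadth-first search; the graph shape and hop limit are illustrative assumptions, not a particular graph DBMS’s API.

```python
from collections import deque

def within_hops(graph, start, max_hops):
    """Return all nodes reachable from `start` in at most `max_hops` edges.

    graph: dict mapping each node to a set of neighbouring nodes.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand beyond the hop limit
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    seen.discard(start)  # report only the other entities reached
    return seen
```

Expressing the same query in SQL requires either a fixed-depth chain of self-joins or a recursive common table expression, which is exactly the friction the graph vendors are targeting.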
Trend No. 6: Data Fabric
Data fabric enables frictionless access and sharing of data in a distributed data environment. It enables a single and consistent data management framework, which allows seamless data access and processing by design across otherwise siloed storage.
Through 2022, bespoke data fabric designs will be deployed primarily as a static infrastructure, forcing organizations into a new wave of cost to completely re-design for more dynamic data mesh approaches.
Trend No. 7: NLP/ Conversational Analytics
By 2020, 50 percent of analytical queries will be generated via search, natural language processing (NLP) or voice, or will be automatically generated. The need to analyze complex combinations of data and to make analytics accessible to everyone in the organization will drive broader adoption, allowing analytics tools to be as easy as a search interface or a conversation with a virtual assistant.
Trend No. 8: Commercial AI and ML
Gartner predicts that by 2022, 75 percent of new end-user solutions leveraging AI and ML techniques will be built with commercial solutions rather than open source platforms.
Commercial vendors have now built connectors into the open-source ecosystem, and they provide the enterprise features necessary to scale and democratize AI and ML, such as project and model management, reuse, transparency, data lineage, and the platform cohesiveness and integration that open-source technologies lack.
Trend No. 9: Blockchain
The core value proposition of blockchain, and distributed ledger technologies, is providing decentralized trust across a network of untrusted participants. The potential ramifications for analytics use cases are significant, especially those leveraging participant relationships and interactions.
However, it will be several years before four or five major blockchain technologies become dominant. Until that happens, technology end users will be forced to integrate with the blockchain technologies and standards dictated by their dominant customers or networks. This includes integration with your existing data and analytics infrastructure. The costs of integration may outweigh any potential benefit. Blockchains are a data source, not a database, and will not replace existing data management technologies.
Trend No. 10: Persistent Memory Servers
New persistent-memory technologies will help reduce costs and complexity of adopting in-memory computing (IMC)-enabled architectures. Persistent memory represents a new memory tier between DRAM and NAND flash memory that can provide cost-effective mass memory for high-performance workloads. It has the potential to improve application performance, availability, boot times, clustering methods and security practices, while keeping costs under control. It will also help organizations reduce the complexity of their application and data architectures by decreasing the need for data duplication.
“The amount of data is growing quickly and the urgency of transforming data into value in real-time is growing at an equally rapid pace,” Mr. Feinberg said. “New server workloads are demanding not just faster CPU performance, but massive memory and faster storage.”
Nearly 50 percent of PaaS offerings are Cloud-only
Nearly half of today’s platform as a service (PaaS) service offerings are cloud-only, according to Gartner, Inc. Currently, there are more than 360 vendors across 21 market segments, delivering more than 550 PaaS offerings. Forty-eight percent of these offerings are cloud-only. Not a single vendor has a foothold across all 21 segments, and 90 percent of them only operate within a single PaaS market segment.
“Business and technology leaders are shifting to strategic investment in cloud computing,” said Yefim Natis, research vice president and distinguished analyst at Gartner. “Cloud computing is one of the key disruptive forces in IT markets that is gaining mainstream trust.”
“Although many organizations anticipate a long-term retention of on-premises computing, the vendors of nearly half of the cloud platform offerings bet on the prevailing growth of cloud deployments and chose the more modern and more efficient cloud-only delivery of their capabilities,” said Mr. Natis. Enterprise IT spending for cloud-based offerings will surpass spending for noncloud IT offerings by 2022, according to Gartner.
The total PaaS market revenue is forecast to reach $20 billion in 2019, and to exceed $34 billion in 2022, according to the latest forecast from Gartner. In this shift to cloud, database and application platform services represent the largest market segments, with blockchain, digital experience, serverless and artificial intelligence/machine learning (AI/ML) platform services as the newest.
Thirteen percent of organizations implementing Internet of Things (IoT) projects already use digital twins, while 62 percent are either in the process of establishing digital twin use or plan to do so, according to a recent IoT implementation survey by Gartner, Inc.
Gartner defines a digital twin as a software design pattern that represents a physical object with the objective of understanding the asset’s state, responding to changes, improving business operations and adding value.
“The results — especially when compared with past surveys — show that digital twins are slowly entering mainstream use,” said Benoit Lheureux, research vice president at Gartner. “We predicted that by 2022, over two-thirds of companies that have implemented IoT will have deployed at least one digital twin in production. We might actually reach that number within a year.”
While only 13 percent of respondents claim to already use digital twins, 62 percent are either in the process of establishing the technology or plan to do so in the next year. This rapid growth in adoption is due to extensive marketing and education by technology vendors, but also because digital twins are delivering business value and have become part of enterprise IoT and digital strategies.
“We see digital twin adoption in all kinds of organizations. However, manufacturers of IoT-connected products are the most progressive, as the opportunity to differentiate their product and establish new service and revenue streams is a clear business driver,” Mr. Lheureux added.
Digital Twins Serve Many Masters
A key factor for enterprises implementing IoT is that their digital twins serve different constituencies inside and outside the enterprise. Fifty-four percent of respondents reported that while most of their digital twins serve only one constituency, sometimes their digital twins served multiple; nearly a third stated that either most or all their digital twins served multiple constituencies. For example, the constituencies of a connected car digital twin can include the manufacturer, a customer service provider and the insurance company, each with a need for different IoT data.
When asked for examples of digital twin constituencies, replies varied widely, ranging from internal IoT data consumers, such as employees or security teams, through commercial partners, to technology providers. “These findings show that digital twins serve a wide range of business objectives,” said Mr. Lheureux. “Designers of digital twins should keep in mind that they will probably need to accommodate multiple data consumers and provide appropriate data access points.”
Digital Twins Are Often Integrated With Each Other
When an organization has multiple digital twins deployed, it might make sense to integrate them. For example, in a power plant with IoT-connected industrial valves, pumps and generators, there is a role for digital twins for each piece of equipment, as well as a composite digital twin, which aggregates IoT data across the equipment to analyze overall operations.
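The per-asset and composite twins described above can be sketched as follows. Class and attribute names are illustrative assumptions for this example, not a vendor API: each asset twin holds the latest readings for its physical counterpart, and the composite twin aggregates a metric across all of them.

```python
class AssetTwin:
    """Digital twin of a single IoT-connected asset (valve, pump, ...)."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.readings = {}  # latest sensor values reported by the asset

    def update(self, sensor, value):
        self.readings[sensor] = value

class CompositeTwin:
    """Plant-level twin that aggregates IoT data across asset twins."""

    def __init__(self, twins):
        self.twins = twins

    def total(self, sensor):
        """Sum one metric (e.g. power draw) across all member assets."""
        return sum(t.readings.get(sensor, 0) for t in self.twins)
```

A usage example: twins for a pump drawing 40 kW and a valve actuator drawing 5 kW would let the plant-level twin report a combined 45 kW, the sort of cross-equipment view used to analyze overall operations.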
Despite this setup being very complex, 61 percent of companies that have implemented digital twins have already integrated at least one pair of digital twins with each other, and even more — 74 percent of organizations that have not yet integrated digital twins — will do so in the next five years. However, this result also means that 39 percent of respondents have not yet integrated any digital twins; of those, 26 percent still do not plan to do so in five years.
“What we see here is that digital twins are increasingly deployed in conjunction with other digital twins for related assets or equipment,” said Mr. Lheureux. “However, true integration is still relatively complicated and requires high-order integration and information management skills. The ability to integrate digital twins with each other will be a differentiating factor in the future, as physical assets and equipment evolve.”
Organisations favour a product-centric application delivery model
Eighty-five percent of organizations have adopted, or plan to adopt, a product-centric application delivery model, according to a survey by Gartner, Inc. Although full adoption is rare, overall, survey respondents use the product-centric model for 40 percent of their work in 2018. Gartner predicts that this figure will reach 80 percent by 2022.
“The increase in how quickly and broadly organizations are adopting the product-centric application model doesn’t arise randomly. It goes hand-in-hand with the adoption of agile development methodologies and DevOps,” said Bill Swanton, distinguished research vice president at Gartner. “In addition, an increasing number of applications that IT teams develop are used by external parties, such as clients or partners, and require the increased customer focus that characterizes the product-centric model.”
The survey found that over half (54 percent) of respondents expect to fully adopt the product-centric application model over time, while roughly one-third (32 percent) plan partial adoption (see Figure 1). Managing everything as a product is unlikely to be justified, as some IT activities, such as initial implementation of a large software package, may well be better managed as projects.
Figure 1: Plans for Adopting a Product-Centric Application Delivery Model
Source: Gartner (February 2019)
Mr. Swanton added: “Business leaders are generally unhappy with the speed at which they get application improvements and how they work. Given that no IT organization gets anywhere near enough funding to do everything everyone wants when they want it, product-centric approaches allow faster delivery of the most important capabilities needed. They also force the business to prioritize the work, and to reprioritize it as requirements are better understood or the market changes.”
Speed to Market and Digital Business Motivate a Product-Centric Application Approach
Thirty-two percent of the survey respondents identified speed to market, and the need to deliver more quickly, as the main driver of their adoption of a product-centric application approach.
Digital business came second (31 percent of respondents). When organizations start a digital transformation journey, they often find that traditional project methods are not suitable for the uncertainties of a transformative business model. They discover a need to adopt agile methods and to treat the results as products, since they will be used by external customers.
The shift from a project-centric to a product-centric application approach does not come without challenges, however. Concerns about project-based funding and the culture clash between “the business” and “IT” were the top challenges for 55 percent of the respondents.
The Rise of the Product Manager Changes the Role of Application Leader
Forty-six percent of the respondents said their organization had already appointed a product manager, while 15 percent plan to introduce this role by the end of 2018. Ten percent have no plans to introduce this role.
According to the majority of respondents, product managers report, or will report, to the IT organization or project management office. At the same time, respondents said they expect the role of application leader to change. For 43 percent of respondents the role will reside in the IT organization, while for 32 percent it will migrate into business teams where the application leader will lead a product line or be a group or single product manager.
“As organizations gain more experience with product-centric delivery models, we expect product and technical leadership to separate from administrative line management. This will have an impact on the prospects of holders of the application leader role, who will need to choose between product management, engineering team management and administrative people management,” Mr. Swanton said.
Information and communications technology (ICT) spending by service providers will reach $426 billion by 2022, representing average growth of 6% per year, as the ongoing shift from on-premise IT management to the "as-a-service" model continues to gather pace in many regions around the world. Cloud and digital service providers will represent the strongest market opportunities for ICT vendors with overall growth of 9% over the forecast period ($105 billion in annual spend by 2022), but communications service providers will still represent the largest share of service provider spending ($254 billion in 2022). Colocation and managed services providers will increase spending by an average of 7% per year, reaching $67 billion in annual ICT spend by 2022. The ICT spending forecast was published in the first Worldwide Black Book Service Provider Edition from International Data Corporation (IDC).
Service providers already account for 44% of worldwide spending on core infrastructure technologies (server and storage hardware, enterprise network equipment, and storage/network software). The pace of investments by cloud infrastructure providers is expected to slow in the next few years as supply/demand and supply chains normalize, but service providers will continue to account for a growing share of overall spending as the ICT market shifts to the "as-a-service" model and new digital ecosystems emerge in which digital service providers will play a key role.
"The shift from end-users to service providers in core infrastructure technology markets is a profound shift with deep implications for tech vendors," said Stephen Minton, vice president in IDC's Customer Insights & Analysis group. "Cloud infrastructure providers, in particular, have led a surge of spending on servers, storage, and system infrastructure software in the past few years. While this will slow over the long term, we expect new digital service providers to pick up the slack in terms of overall ICT spending as new digital ecosystems are created to serve an increasingly digital global economy. For IT vendors, this has major implications in terms of a major shift of spending from end-users to middlemen."
Last year saw a major spending cycle by cloud and digital providers, who increased their share of worldwide spending on core infrastructure technologies from 24% to 28% (from $39 billion to $51 billion). Spending on core infrastructure by cloud and digital service providers will increase at an average rate of 10% per year over the next five years, compared to growth of 3% by commercial end-users.
This shift has been particularly rapid in the United States, where cloud and digital providers already account for 43% of core infrastructure spending and will make up 47% by 2022, with an average growth rate of 8.5% compared to meagre growth of 1% by commercial end-users. Cloud and digital providers account for a smaller share in China (around 25% in 2018) but will increase spending on core infrastructure by 22% over the forecast period. In Western Europe, cloud and digital providers make up only 12% of core infrastructure spend and this will barely change over the forecast period despite growth of around 6% per year on average. This is largely because a major capital spending cycle in 2018 will be followed by much weaker increases over the next few years as service providers in Western Europe take stock and balance their capital spending needs with end-user demand for services in a more scalable manner amidst a cautious economic outlook.
"There is significant variation by region in service provider spending and growth," said Minton. "For example, cloud and digital providers still represent a relatively small share of spending in the Europe, Middle East and Africa (EMEA) region, where they will account for just 11% of core infrastructure spending by 2022; or Canada, where they will make up just 9%. In the U.S., by contrast, cloud and digital providers will represent almost half of all spending on core infrastructure technology by the same year as a result of the phenomenal growth of cloud providers in the U.S. market, which are serving aggressive early adopters of cloud infrastructure and software as a service."
"The implications for ICT vendors are profound," said Minton. "Not only are their traditional customers moving away to a new model for ICT resource procurement, but they are faced with an increasing share of revenue concentrated in a relatively small number of customers. Of course, many ICT vendors are chasing the service provider trend by transforming their own business models accordingly, positioning themselves as service providers with offerings in the cloud space in particular. However, for many of these firms, revenue from these new business opportunities still represents a relatively small share of overall sales, leading to a bumpy transition. Meanwhile, we expect new digital service providers to add significant disruption to the overall market in the next ten years as the global economy shifts into an 'as-a-service' model for B2C transactions in particular."
IoT spending to grow by 20 percent
Revenues for the European Internet of Things (IoT) market are forecast to increase by 19.8% year on year to reach $171 billion in 2019, according to the latest update to the Worldwide Semiannual Internet of Things Spending Guide from International Data Corporation (IDC). Total spending on IoT solutions in Europe will maintain a double-digit annual growth rate throughout the 2017–2022 period and is expected to surpass $241 billion in 2022. The forecast is based on IDC's research into the expanding IoT technology market, which addresses business investment opportunities and use case implementations across a spectrum of industries.
While the appetite for IoT solutions is evident across the entire region, Western Europe will account for the lion's share of the market. Germany will be the European IoT champion in 2019, with spending exceeding $35 billion. Adoption of IoT technology in other European countries will also soar, with France and the U.K. each spending over $25 billion, followed by Italy with $19 billion. Central and Eastern Europe (CEE) will account for 7% of the total European IoT revenues in 2019.
"We’re still just scratching the surface of how powerful IoT solutions can be when combined with the massive scale of IoT endpoints, world-class connectivity and advanced technology," said Milan Kalal, program manager at IDC. "That said, organizations across industries are gradually experiencing that driving business outcomes with IoT requires not only new technologies and expertise in areas like edge infrastructure, wired and wireless networking, security, and edge-to-cloud architectures, but also viable use cases that deliver short-term results and help drive a strategic IoT innovation road map."
The industries that are forecast to spend the most on IoT solutions in 2019 are discrete manufacturing ($20 billion), utilities ($19 billion), retail ($16 billion), and transportation ($15 billion). IoT spending among manufacturers will be largely focused on solutions that support manufacturing operations and production asset management. In the utilities sector, IoT spending will be dominated by smart grids for electricity, gas, and water. Omni-channel operations will be the single largest use case within the retail sector. In transportation, two thirds of IoT spending will go into freight monitoring and logistics solutions. The industries that will see the fastest annual growth rates throughout the 2017-2022 period are retail (18.5%), healthcare (17.9%), and state/local government (17.1%).
All that said, the true leader for IoT spending in 2019 will be the consumer segment, with revenues exceeding $32 billion. The largest consumer use cases will be related to the smart home, personal wellness, and connected vehicles. Within smart home, home automation and smart appliances will both experience strong spending growth over the forecast period and will help make consumer the fastest-growing industry segment overall with a five-year CAGR of 20.0%.
Forecasting spending growth by use case over the 2017-2022 period provides a picture of where other industries will be making their IoT investments. These include in-store contextualized marketing (retail), airport facility automation (transportation), smart buildings (professional services), agriculture field monitoring (resource industries), and smart home (consumer).
"We are now experiencing a dichotomy scenario across European IoT adopters: While a few advanced users are leveraging IoT technologies in full swing, there is still a large portion of users struggling to prove and replicate initial pilots and proof of concepts," says Andrea Siviero, a research manager at IDC.
Hardware will be the largest technology category in 2019 with revenue of $66 billion led by module and sensor purchases. IoT services will be close behind at $60 billion going toward traditional IT and installation services as well as non-traditional device and operational services. IoT software spending will total $35 billion in 2019 and will see the fastest growth over the five-year forecast period with a CAGR of 18.9%. IoT connectivity spending will total $10 billion in 2019.
Storage capacity to double by 2023
In its inaugural Global StorageSphere forecast, International Data Corporation (IDC) estimates that the installed base of storage capacity worldwide will more than double over the 2018-2023 forecast period, growing to 11.7 zettabytes (ZB) in 2023.
IDC's Global StorageSphere measures the size of the worldwide installed base of storage capacity across six types of storage media, and how much of the storage capacity is utilized or available each year. The installed base of storage capacity is a cumulative metric: it is equivalent to the amount of new storage capacity deployed over the span of several years minus the systems or storage devices that either fail or are retired or decommissioned each year. The Global StorageSphere is a product of IDC's Global DataSphere program, which measures how much new data is created and/or replicated each year.
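The cumulative metric IDC describes can be illustrated with a short sketch. The function name and the annual figures below are invented purely for illustration and are not IDC data:

```python
# Illustrative only: the installed base of storage capacity is cumulative,
# i.e. the prior year's base plus new capacity deployed, minus capacity
# that failed, was retired or was decommissioned that year.

def installed_base(start_base, deployments, retirements):
    """Roll an installed-base figure forward year by year (units: ZB)."""
    base = start_base
    history = []
    for new, retired in zip(deployments, retirements):
        base = base + new - retired
        history.append(base)
    return history

# Hypothetical five-year run starting from a 5.0 ZB installed base.
print(installed_base(5.0, [1.8, 2.0, 2.3, 2.6, 3.0],
                          [0.5, 0.6, 0.7, 0.8, 1.0]))
```

The point of the sketch is simply that the metric can keep growing even in a year when deployments slow, provided new capacity still outpaces retirements.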
"The Global StorageSphere is large and diverse, encompassing many different storage technologies, and growing rapidly," said John Rydning, research vice president at IDC. "Within the Global StorageSphere, enterprises, especially cloud providers, are increasingly being relied upon to store and manage the world's repository of data."
Key findings from the Global StorageSphere forecast include the following:
Can data centres handle sporadic faults?
Asks Tony Lock, Director of Engagement and Distinguished Analyst, Freeform Dynamics Ltd.
Data centres have been providing services to users for decades and the expectation has always been that system failures and interruptions would be kept to a minimum - and perhaps more importantly, that should something go wrong it would be fixed as quickly as possible.
The problem is that the popular idea of what constitutes an acceptable minimum number of interruptions has fallen so low that many IT services are now expected simply to always work, without fail. And, goodness knows, if something does go wrong, the expectation is that it will be fixed within minutes, if not seconds. This has made software reliability a critical issue for both the vast majority of enterprise users and the technical staff whose job it is to keep everything running smoothly (Figure 1).
A recent report by Freeform Dynamics (link: https://freeformdynamics.com/software-delivery/optimizing-software-supplier-customer-relationship/) highlights the fact that despite considerable effort and expense, and the development of new architectures and operational processes, IT services still suffer interruptions. This is true across organisations of all sizes and in all verticals, and it applies to the producers of software as well as to the users of commercial applications (Figure 2).
Major incidents and intermittent problems abound
Over a third of respondents reported major incidents such as complete system failure and data corruption. But while major systems failures can have a huge impact, DR plans and procedures often exist to deal with them, with the expectation that key resources will be mobilised immediately. In some ways more challenging are issues that crop up and then disappear. If you’re lucky, such intermittent problems could simply be caused by bugs in the application software in sections that are only run in unusual circumstances, relatively easy to track down, if not always to remediate.
More problematic are failures that only take place due to an unusual combination of the application code execution within the overall systems environment, potentially including factors from the operating system, middleware stacks, virtualisation platforms or even the precise nature of the physical environment. And such unusual combinations of elements are more likely to happen as systems become more dynamic in terms of how resources are allocated at execution time.
Indeed, more respondents stated they had experienced sporadic problems, such as intermittent software failure or intermittent processing corruption, unexpected program behaviour or unpredictable performance, as compared to those reporting major systems outages. And when it comes to the impact of these software issues, the survey paints a disturbing picture (Figure 3).
Major issues are, of course, almost without exception extremely disruptive, but intermittent problems are also recognised as being disruptive by many. Plus it’s not just corruption or unexpected behaviour that cause disruption - performance inconsistency is also important. Taken together, it is clear that the reliability and responsiveness of software are much more than simply nice-to-have features. But why do such issues cause data centre professionals problems (Figure 4)?
Anyone who has worked in an enterprise data centre will identify with it being hard to troubleshoot software issues in a complex environment. And as was mentioned earlier, the steady move towards dynamic and self-reconfiguring data centres is adding yet more intricacy to an already complex situation. Factor in the way that applications themselves are often constructed to access external services via API calls, and ‘complex’ is no longer an adequate description: ‘convoluted’ may be more accurate.
Merely recreating the conditions under which an incident occurred can be difficult, especially if there is no record of the precise conditions that were then in play. The results show that the difficulty of reproducing intermittent problems is a challenge for everyone. And over ninety percent of respondents reported that problems which drag on while no one can explain the cause have a major negative impact on business user and stakeholder confidence.
This indicates very clearly that it is not just major incidents that erode stakeholder confidence in both IT and the commercial suppliers of software. Once again, almost nine out of ten respondents said that even if they have relatively few big incidents, a constant stream of minor but irritating issues, or simply users having to use disruptive workarounds, can erode stakeholder confidence. The same can happen even if a problem is rapidly identified, but remediation takes a long time. A lack of stakeholder satisfaction is bad news for everybody - IT staff, software vendors and business users alike.
Is there an answer?
Every enterprise data centre has an abundance of management and monitoring tools at the disposal of IT staff, yet these problems still occur. So is there something else that could be used to help? In particular, the survey asked for thoughts on a relatively recent development, namely program execution ‘record and replay’ technology.
From a data centre perspective, it’s the equivalent of an aircraft’s ‘black box’ flight recorder for your production environment. Once enabled, it can give an accurate picture of the IT environment when a problem occurs, and of what the software actually did before it crashed or misbehaved. The idea is to catch failures in the act, thereby making them completely reproducible and greatly accelerating both diagnostics and issue resolution. But do IT professionals see value in such tools (Figure 5)?
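For readers who prefer to see the idea rather than the analogy, here is a minimal, purely illustrative Python sketch of the recording half of the concept. The `record` decorator and the log format are invented for this example; real record-and-replay tools operate at a far lower level, capturing instruction streams and system calls rather than individual function calls:

```python
import functools
import json
import time

TRACE = []  # an in-memory 'flight recorder' of calls; real tools persist this

def record(fn):
    """Log every call's inputs and its output or exception, so that a
    failure can be re-examined later with the exact inputs in play."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {"fn": fn.__name__, "args": repr(args),
                 "kwargs": repr(kwargs), "ts": time.time()}
        try:
            result = fn(*args, **kwargs)
            entry["result"] = repr(result)
            return result
        except Exception as exc:
            entry["error"] = repr(exc)
            raise
        finally:
            TRACE.append(entry)  # recorded whether the call succeeded or not
    return wrapper

@record
def divide(a, b):
    return a / b

divide(10, 2)
try:
    divide(1, 0)   # the intermittent failure we want to catch in the act
except ZeroDivisionError:
    pass

print(json.dumps(TRACE[-1], indent=2))  # the last entry shows the failing inputs
```

Even in this toy form, the recorded entry makes the failure reproducible on demand, which is precisely the property that makes the 'black box' approach attractive for intermittent faults.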
The Bottom Line
The answer is a qualified “yes”. Relatively few IT professionals, from either the software vendor or enterprise data centre, have actively used this type of technology so far, but there is a consensus that it has potential, while only around a quarter of respondents think not. But as application technology evolves and software-defined data centres become more autonomous, we will see more of the highly variable, complex interlinked system environments where intermittent software problems are likely to occur. Record and replay technology could well have a significant role to play in keeping business users and customers happy, and just as importantly, in raising the perception of IT and software vendors in the eyes of the ultimate budget holders.
The DCA supporting STEM Learning with a student’s STEM Tour of DCW
By Steve Hone, CEO, The DCA
The DCA’s role varies from event to event. For Data Centre World London this year, however, we have provided advice and guidance on many aspects of the show, including the concept and content for the 6th Generation Data Centre exhibit, supporting STEM Learning with a students’ STEM Tour, and advising on content for the conference programme along with recommendations for speakers, moderators and chairs.
Closer Still Media hosts conferences all over the world and, as a global event partner, the DCA supports and promotes these events to the general public and our followers through its marketing and social media channels, at both a regional and an international level.
The DCA is particularly proud to have been involved with the STEM Learning Tour at DCW this year, which coincides with British Science Week. As a Trade Association, we have a key role to play in helping to facilitate projects and initiatives that promote careers in our sector.
With support from DCA Members and the DCA Workforce Capability Group Engage Programme, Wednesday 13th March will see thirty Year 8 students attending DCW to tour the show and take part in a series of speed networking sessions with industry professionals from across the sector, which will kindly be hosted by ABB in their VIP lounge.
Amongst the professionals supporting the tour are:
The aim of the tour and speed networking sessions is to raise the profile of the Data Centre as a potential career of choice with the students. In addition to the tour and networking sessions the students have been invited to various exhibitors’ booths to find out more about the specific technology we deploy and to gain a clearer understanding of the mission critical role the data centre plays in supporting our increasingly digital world.
The students will be able to ask questions during the speed networking, they will also hear about what it’s like to work in the sector, the exciting career opportunities available to them and the choices they will need to make going forward to ensure they have the right qualifications to enter the sector.
It goes without saying that the DCA is very excited about this initiative, which will hopefully be the first of many as we encourage more Trade Association Members to join the growing number of DCA STEM Ambassadors.
I would like to thank all those members who contributed articles this month, including Elliot Shaw from Eight6seven, Steve Bowes-Phipps from PTS, Terri Simpkin from CNet and Vinous Ali from Strategic Partner, techUK. Next month the theme will be focused on DC Refurbishment and Technology Re-use; please contact Amanda McFarlane at amandam@dca-global.org for more information if you would like to submit content for consideration.
How to solve a problem like Guiseppe!
The Data Centre Talent(less!) Merry-Go-Round
By Elliot Shaw, MD, Eight6seven Recruitment
Let me tell you about Guiseppe – a talented engineer with 17th and 18th Edition, an NVQ with some mechanical bias, and 15 years of experience working for FM companies and clients direct in maintenance and/or PPM across several different locations, such as commercial buildings, universities and hospitals. Guiseppe is keen to change direction and really wants to get into the Data Centre market, so his CV was sent to three Data Centre owners and three large FM companies running multiple contracts across various Data Centres.
How many interviews did Guiseppe get?
While you contemplate that question let’s talk for a while about the UK Data Centre market and how its projected growth is most probably unsustainable with the current talent pool of PPM Engineers.
If we take reports at face value, we currently have a Colo market with 270 sites across the UK. Add to this a guesstimate (I use the word guesstimate as no one really knows – the government is quite tight-lipped about such information!) of the number of privately or government-owned DCs, and we most probably get to north of 600 DCs from Land's End to John O'Groats. Let's agree on 650 DCs in the UK right now, each one with a three-man, four-shift policy: that makes a total of 7,800 DC-experienced and qualified PPM Engineers, including both ICT and M&E.
Sounds like a lot – more than enough to go around, I hear you say – but here's where it gets interesting: 20% of this number are just about to retire, 20% will leave the industry for more money or something a little less stressful, and 5% will change career completely. So, in the next three years around 3,510 current DC engineers will leave the industry, some for good and some just for a year or two. Add to this the industry growing by 8-10% in the same time frame, and we are looking at a deficit of around 3,825 engineers – close to half the workforce! I hear you gasp, 'What will happen to my data? How will I know if it's safe?' Or, if you own or run one of these Data Centres: 'Rubbish! No one will leave; all DC Engineers are superhuman, so they'll never retire.'
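For the sceptical, the back-of-envelope sums above can be reproduced in a few lines of Python, using the article's own guesstimates rather than any hard data:

```python
# Reproducing the article's back-of-envelope sums (its assumptions, not hard data).
sites = 650                    # guesstimated UK data centres
engineers_per_site = 3 * 4     # three-man, four-shift policy
workforce = sites * engineers_per_site
assert workforce == 7800       # the 7,800 qualified PPM Engineers quoted above

# 20% retiring, 20% leaving for money or less stress, 5% changing career
leaving = workforce * (0.20 + 0.20 + 0.05)
print(int(leaving))            # 3510 engineers gone within three years

# Layer the projected 8-10% industry growth on top (taking roughly 9%),
# and the shortfall lands at around the article's figure of 3,825.
deficit = leaving * 1.09
print(round(deficit))
```

None of which makes the guesstimates any less guessed, but the arithmetic at least hangs together.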
Challenging as this may sound, it's not all doom and gloom: the industry is investing heavily in educating the engineering market and promoting the fact that the Data Centre is a great place to work, and there are also many new initiatives to get into schools and teach the kids that the Data Centre is cool and that it allows their iPhone to work! Oh, sorry, I think I dozed off and must have been dreaming, as none of this is happening – and if it is, it's all happening too slowly and not being directed at the right people.
It's imperative that 'the powers that be' understand the problem: the skills shortage is happening now, and in three years it will be too late! The vision of all those Data Halls filled with tumbleweed rolling down the aisles (hot or cold), because it's been three months since an engineer checked the equipment, is amusing and scary at the same time.
If you employ PPM engineers, you may think that this won't affect you because your workforce is younger than average or you pay well, but I would bet my bottom dollar that even without the above factors you have lost 20% of your engineers over the last year to what I call 'The One Year Migration'. This describes engineers who hunt the dollar and move for an extra £1,000, or just because the Data Centre is twenty minutes closer to where they live. This, just so you know, will never change, as loyalty is generally not a massive consideration at this level.
What can be done I hear you ask – some easy options would be to pay more, invest more in your staff, give better benefits, provide improved working conditions, operate progressive shift patterns or if all else fails build Data Centres in the back garden of every engineer!
As someone who owns and runs a Data Centre recruitment company, I understand that none of the above are easy or even possible options – I hear this all day, every day, from my hungry team of researchers. However, a very small handful of clients who have listened, and are now willing to think blue-sky by interviewing candidates like Guiseppe who don't have Data Centre experience, have seen some surprising results: 'good engineers are good engineers', whatever their background.
Salaries are without a doubt an issue: even though the DC market purports to be highly skilled, many engineers we speak to are earning more working in a non-critical environment. This needs to be addressed, and soon, to stop the 20% leaving the industry annually.
I do admit to poetic licence when using the facts and figures to highlight the situation, but honestly, I really don't think it's far from the mark. Over the last year my company, Eight6seven, has launched a new dedicated Engineering Division for the Data Centre and Mission Critical Environment. Good DC-experienced engineers are very hard to come by and, without putting myself out of work, some of the suggestions above need to be put in place, otherwise recruiting the right calibre of engineers will always be tough – so happy days for me! As they say in Harry Potter, 'it's going to be a bumpy ride', but those with some foresight should be fine; everyone else is going to need some expert advice and help, so look me up and make contact – happy to chat any time!
And what of Guiseppe? Well, he was turned down by everyone as he had no DC experience, so we went back to every client and suggested they re-think and take the long-term view. Two did, he landed one of the roles, and to this day both client and candidate are very happy.
You can reach Elliot Shaw – email: elliot@eight6seven.co.uk | web: www.eight6seven.co.uk
Steve Bowes-Phipps, Senior Data Centre Consultant, PTS Consulting and Chair DCA Workforce Development SIG
Imagine my surprise the other day, as I caught up with the latest pop charts on 4Music, to see an advert run by the NHS regarding the role of Data Centre Manager and featuring Sue Lang from East Kent Hospitals. What I found most encouraging about this advert was the fact that the term 'Data Centres' was mentioned at all on mainstream TV, although Sue wielding a screwdriver may not have represented the role in its best light. More encouraging still was the fact that she is a woman in a role heavily dominated by men – a fact not untypical across the science and technology sectors.
A curious statistic around female participation in Science, Technology, Engineering and Maths (STEM) shows that girls and boys study STEM subjects at school in roughly equal numbers, but the girls don't stay the course: by the time they take A-Levels, their participation has dropped to 36% according to figures from 2018[1], with Computing attracting a stubbornly low participation rate of just over 10%. By the time gender inclusivity figures are collected from those in full-time employment, this has dropped yet again.
Demonstrating the business value and role of the Data Centre Manager to today's younger, chart-watching demographic (unfortunately not including me) is great publicity for the sector as a whole, which, let's be honest, is one of the mechanical, electrical and software engineering industries' best-kept secrets, and has been since it first appeared back in the 1950s. The bald fact we have to deal with is that many of the sector's leading lights are due to retire within the next five to ten years, and this leaves a massive hole of expertise and experience to fill. Encouraging diversity is therefore fundamental to our technological future.
Increasingly the Data Centre is software defined, as management and control systems drive the integration and collective automation of plant and equipment, responding to varying IT services demands. Many of the software engineers that will blend electrical and mechanical engineering with strong systems and software engineering don’t exist today. Current modules of study at schools, colleges and universities are ill-equipped to provide the necessary technical blend of education.
So, the question is: why are we worrying about future talent, when we are told that today's millennials are IT-literate 'out of the box'? Well, the uncomfortable truth is that this is a generalisation that rarely stacks up in reality – yes, youngsters seem very at home using technology in their everyday lives, but it takes a special kind of technical curiosity to want to turn that into a career. And where do they go to get advice? Teachers are very unlikely to understand the opportunities available in the Data Centre sector. The same is true of lecturers, careers advisors, job centre staff, etc. The British Computer Society (BCS) appears to regard the Data Centre sector as only minimally related to IT – more the province of the Institution of Engineering and Technology (IET). We have struggled to get them to take us seriously, despite having a Special Interest Group within the BCS (the Data Centre Specialist Group) that has been active for over ten years.
When I talk to my industry peers, more often than not, they state that they ‘fell into’ the Data Centre Sector, but as a recruiter of people within my own organisation, I know the kind of salaries they command and therefore the desirability of these high levels of remuneration as a career goal. Plus – this is a highly innovative, technically interesting sector to work in. There is room in this sector for people with all kinds of skills and interests, from finance and accounting, through Sales and Marketing, IT support, and technical problem solving.
I chair the Workforce Development Group for the DCA. This is a voluntary group that explores how we can educate and signpost interested others; how to identify ways of recruiting people of all levels, ages and genders who may have the appropriate aptitude and attitude to work in a critical infrastructure environment; and how to help them understand the career choices they can make for their own development, including access to development and training appropriate to this sector. I welcome others who wish to contribute to this discussion, and I sincerely hope there are more Sue Langs in the near future, so that a woman holding a senior role in the sector is no more remarkable than having a woman Prime Minister.
Our SIG meetings are open to all, so if you feel you have something you can contribute to the discussion, please look out for the next meeting this year on the DCA website.
[1] http://www.sciencecampaign.org.uk/news-media/case-comment/rise-in-stem-popularity-amongst-a-level-students.html
Change will become the new normal
Vinous Ali, Head Of Policy at techUK
The technologies that are emerging and becoming more mainstream in our economy – artificial intelligence, robotics and IoT devices – mark an unprecedented change. Whether you call it the Fourth Industrial Revolution, Society 5.0 or another of the myriad buzzy names, this change presents us with enormous opportunities, but also equally difficult challenges that we must confront head-on if we are to make it a success for humanity.
When people talk about the Fourth Industrial Revolution, they often speak about it as a static moment in time – a moment of huge upheaval and then a settling down. The truth of the matter is these new technologies will become ever more sophisticated and integrated with each other – creating wave after wave of change. Change will become the new normal.
And, to perhaps stretch the sea-based metaphor to its very limit, the UK is in a strong position to ride these waves. The population are enthusiastic users of technology, our economy is well developed, and we have tech innovation happening up and down the country. But, in order to capitalise on the Fourth Industrial Revolution, we should have started to prepare yesterday.
The pace and scale of this change will be unlike anything that has come before. The speed at which breakthroughs are now made and commercialised is set to continue, with prices falling as it does. The benefits of this are clear: a rise in global income levels; improved quality of life; increases in efficiency and productivity at work and in the home; and, for businesses, enormous drops in costs, from transportation to communication.
These do not need to be rehearsed at length. The challenge will be how do we make sure that these benefits are felt by everyone and that we leave no one behind.
Educating the Future
It all starts with education. Dell has estimated that 85% of jobs that will exist in 2030 don’t exist today. So, the question is how do policy-makers and educators prepare children today for a future that is shrouded in mystery?
We know that automation will play a bigger role in the job market in the future, and many roles will involve working hand in hand with machines – interpreting data and finding creative solutions. These are skills that the World Economic Forum has dubbed “human” skills. A slightly strange term perhaps; they are more often referred to as soft skills. All of which need to be nurtured.
In the Summer techUK surveyed over a hundred parents working in tech who had children between the ages of 5 and 17. We asked them for their views on the UK’s education system today and how they felt it needed to change.
Now, these parents do not have crystal balls, but they do have insights into the technological journey we are on and they certainly had opinions.
73 per cent of those surveyed felt the curriculum did not place sufficient emphasis on the types of skills that will become more vital in the future world of work, while 65 per cent of parents felt that a stronger focus on soft skills is needed at both primary and secondary school than currently exists.
This doesn’t mean ripping up the curriculum and starting from scratch. Rather, it means ensuring that the way subjects are taught engages a young person’s creativity, encourages teamwork and allows them to develop these softer skills whilst also building knowledge.
But whatever we do in our schools, the parents we surveyed were clear that education doesn’t stop on a child’s 18th birthday. A whopping 91 per cent believed that regardless of their education today, their children would have to retrain in the future.
An education system that is built around the fact that lifelong learning is a must for everyone requires more than a change in our institutions and increased investment. It requires a shift in mindset and culture.
It requires employers seeing training as an investment rather than a cost. It requires the public understanding what retraining can offer them. And it requires it being open to every single one of us – regardless of our background, personal situation or where in the country we live.
The UK has started down the road on this journey. The Institute of Coding – co-chaired by techUK President Jacqueline de Rojas and jointly funded by Government and industry – aspires to ensure everyone can build the skills they need for the future, creating a new learning experience: a blend of traditional, radical and student-centred learning brought about by the first national collaboration of leading universities and innovative businesses.
These new ways of thinking and doing will be key to unlocking the UK’s success. We must start in schools, certainly, but if we truly want to thrive, learning needs to be for life.
Workforce Sustainability – The long journey to a people centric approach to business.
Dr Terri Simpkin, Higher and Further Education Principal at CNet Training
We’re quite conditioned to understand what environmental sustainability means in broad terms: a good deal of effort goes into minimising the impact on the planet’s resources, and we have all, in some form or fashion, changed our behaviours to become a little more ‘green’. Far less focus, however, is placed upon workforce sustainability and people as a critical element of global digital infrastructure. Sustainability of the data centre sector workforce is much less understood, and discussion is usually confined to big-ticket, single-issue matters such as skills shortages.
Organisations in the digital infrastructure space quite rightly celebrate their efforts to minimise resource use and the search for reliable, sustainable sources of power, for example. But what about the people? The workforce that keeps the lights on also needs to be viewed in sustainability terms, but it is here that the digital infrastructure sector needs to improve its game.
What is workforce sustainability?
Workforce sustainability takes the view that just as burning coal or using water impacts our physical environment, the way in which organisations employ the labour (physical and emotional) of people must also be considered.
It’s well known that work in a data centre can be stressful. The risk of outage and the costs that are incurred as a result generate significant concern. From making sure that facilities ‘keep the lights on’ through engineering means to ensuring that people are fit for work by managing the human factors of risk mitigation, sustainability of both hardware and ‘wetware’ is of concern to data centres. So, like the hardware, people need to be maintained properly if they’re to stay healthy enough, skilled enough and engaged enough to remain in the organisation, or the sector.
The digital infrastructure sector is doing well on environmental sustainability, but on people… not so much.
While the sector is innovating to deliver clean, green outcomes particularly in regard to energy use, the sustainability of the workforce has a long way to go. In fact, there is very little discussion that consolidates a range of disparate issues into one considered approach to workforce sustainability.
Just last month the Centre for Construction Research and Training published a report[1] that provides a tool to identify and measure workforce sustainability for the construction industry in the U.S. The findings are equally applicable to the digital infrastructure sector, which shares the most convincing similarities: an aging workforce, skills and labour shortages, time-critical conditions and a lack of diversity.
So, what did they come up with?
Workforce sustainability needs to be defined, managed and overtly evaluated if organisations are to implement effective measures to deliver and maintain sustainability over time. The report delivered by Oregon State University and the University of Florida suggests a sustainability model that “includes the process of hiring and facilitating an environment for a coherent (and) viable (organisation with) healthy individuals who are highly skilled and competent, and then nurturing and maintaining the requisite skills and competencies constantly” (The Center for Construction Research and Training, 2019). The model is eminently applicable to data centre organisations and the sector as a whole to deliver a comprehensive and consolidated approach to workforce sustainability.
So, what should we do?
The model suggests that organisations should address eight elements. Here are the most pressing for the digital infrastructure sector.
Nurturing – recruitment, selection, onboarding and continual professional development deliver value by ensuring that people are supported in an ongoing process of learning. This, of course, is one of the key challenges facing data centres. People management as a whole is not as ‘joined up’ as it should be, and talent shortage resolutions are more likely to be ‘outsourced’ to external providers rather than being seen as part of a ‘life-long’ learning approach that might include apprenticeships, blended learning and executive leadership development. This element is fundamental to minimising staff poaching, wage inflation and long-term vacancies and, of course, to mitigating risk to critical services.
Diversity – a recent Uptime report[2] suggested that over 70% of respondents felt that having less than 6% representation of women in the workforce was not a problem. This indicates a fairly fundamental cultural and social problem. Not having a diverse workforce is one thing, not seeing it as a problem is, in fact, a bigger one.
A raft of research identifies the business case for diversity in all its forms but from a workforce sustainability perspective, it’s a ‘no-brainer’. A sector with a sustainable workforce will seek to expand its pool of labour, provide opportunities for a range of skilled and capable people and, most importantly, develop a culture of inclusion that reaps benefits from a diverse workforce. This is where the sector has a very long way to go indeed. Where a significant proportion of people inside data centres are unable to see that a homogenous workforce is one that is at odds with the demands of the second machine age, culture, behaviours and values will always be at odds with sustainability principles.
Health and wellbeing – anecdotally, workplace stress and burnout are common and general wellbeing is under threat in the digital infrastructure sector. One of the most cited challenges to learning on the CNet Masters Degree programme, for example, is the lack of capacity to take time out for development opportunities. Students regularly tell of working 80-hour weeks, gruelling travel obligations and lack of time with family. Generally speaking, workplace stress is rising, and a report by Korn Ferry[3] suggests that the continual need to learn new skills to keep up is one of the underpinning reasons. Clearly, unsupported professional development not only diminishes the capacity of organisations to work effectively, but has a negative impact on individuals too.
The other five elements include:
Equity – the extent to which people believe themselves compensated fairly and treated without discrimination in regard to pay, promotion and access to opportunity.
Connectivity – the extent to which people believe themselves able to communicate within their workplace and have their voice heard in the organisation and with leaders is key to contemporary workplaces, particularly those in mission critical sectors and those relying on innovation as a core competency.
Value – how much people believe themselves to be respected and how they and their families are valued by the organisation both financially and emotionally is fundamental to loyalty and engagement.
Community – feeling connected to the organisation as a community with a sense of belonging, and cohesiveness.
Maturity – This one is key. An organisation displaying a commitment to workforce sustainability will not only be able to demonstrate achievement in all of the other elements but is able to identify and measure the extent to which efforts are bringing value to the organisation and to its people.
The digital infrastructure sector is a long way from that ideal, but with some concerted effort, a consolidated people strategy and a commitment to change the culture, our workforces can become more robust. With a strategy to develop supportive behaviours from a base of people centric values, the sector could be on its way to a sustainable workforce in much the same way it’s heading for environmental sustainability.
Enterprise technology is now advancing at breakneck speed and businesses are fighting to stay on top of it in the race to achieve full digital transformation. Companies are constantly growing and intelligent communication systems, which offer seamless interaction for large workforces, are now fundamental in the modern workplace.
By Charlie Doubek, VP of Professional Services at Arkadin.
Utilising digital tools to improve efficiency and employee workflow has led to the continued rise of Unified Communications (UC) systems, as companies look to the latest technologies to enhance collaboration and increase productivity. With smart interactions at its core, UC has become deeply ingrained in the modern workplace and is now enabling real-time communications and better ways of working for fast-growing teams.
With the global UC market expected to grow to $143 billion in value by 2024, there is no doubt that companies will continue to invest heavily in technology that enhances the communication capabilities of their staff, helps simplify tasks and better connects their business.
AI in the enterprise
Artificial Intelligence (AI) in the modern workplace is now a huge talking point across the business landscape. Its ability to enable sophisticated predictive intelligence has garnered many column inches, and companies are now experimenting with the technology in various areas of the business. So much so that AI is now positioned as a key driver of the enterprise in the years to come, with 91% of enterprises expecting AI to deliver new business growth by 2023.
Workplace communication is undergoing its own digital transformation and the power of AI is now being heavily discussed in relation to the next generation of UC-based systems. By analysing data better and faster, AI has the ability to save time, enhance employee experiences and drive collaboration across entire workforces, through the addition of more sophisticated communication features. For example, interactive whiteboards, cloud-based collaboration software and document management tools now all sit comfortably within the realm of UC.
Furthermore, AI and machine learning systems can use existing workforce data to enable predictive intelligence which can map out the specific requirements for future communications across the workplace. Using intelligent analysis, tasks such as outlining the appropriate team members required for certain communications and the most suitable time for meetings to occur can be anticipated and automatically booked in. Making these decisions faster can therefore not only reduce the time spent on manual team coordination but also free up more time for employees to concentrate on important tasks and allow effective collaboration to take place.
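The kind of predictive scheduling described above can be illustrated with a minimal sketch. The function and the toy attendance data below are entirely hypothetical; a real UC system would draw on calendar and presence data and a far richer model, but the principle – ranking candidate slots by what has historically worked for the team – is the same.

```python
from collections import Counter

def suggest_meeting_slot(history):
    """Suggest the hour at which past meetings had the best attendance.

    `history` is a list of (hour, attended_fraction) tuples drawn from
    historical calendar data; each hour is scored by mean attendance.
    """
    totals = Counter()
    counts = Counter()
    for hour, attended in history:
        totals[hour] += attended
        counts[hour] += 1
    # Rank candidate hours by average attendance; the highest wins.
    return max(totals, key=lambda h: totals[h] / counts[h])

# Example: 10:00 meetings historically had the best attendance.
history = [(9, 0.6), (10, 0.9), (10, 0.8), (14, 0.7), (9, 0.5)]
print(suggest_meeting_slot(history))  # → 10
```

A production system would also weigh attendee availability and role, but even this simple frequency-based ranking shows how existing workforce data can remove manual coordination from the loop.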
From the home to the office
AI continues to make huge waves across the consumer space and is now used widely by individuals across a range of devices - in their home, on their mobile and even in their vehicles. While there are still challenges with wide implementation, some technology providers are now finding ways to integrate popular AI-based technologies into existing UC systems.
There are many powerful consumer-based AI tools being added to UC conversations, such as Cortana, Alexa and the toolsets in Google Home, which we will all be able to access and build solutions on. However, it is now more practical to use innovative startup technology in this space to deploy customised workflows and integrations that are both flexible and tailored to immediate needs.
Customised AI assistant applications built using the Zoom AI powerset, for example, can now be added to Microsoft Teams or other major chat interfaces, with customised workflows programmed in hours where in the past this could take years. The Microsoft Azure stack also opens businesses up to ready-to-go APIs that leverage facial recognition, live translation services and even the video clip search tools in Microsoft Stream, making AI-driven assistants even more powerful.
Future AI-enhanced UC systems could also open the modern workplace up to the possibilities of using automatic identification technologies, such as facial recognition to help staff enter and initiate meetings, authorise external attendees and track speakers throughout. Should AI be developed for the UC enterprise and used at scale, businesses can break down barriers to coherent workforce interaction, save time and energy for staff and make the communication experience better for everyone involved.
Overcoming the challenges
Yet the rate at which AI is developing does not come without its challenges. Despite the exciting possibilities, the technology remains in its very formative stages across the UC sector. Early AI-enabled systems will require a significant investment of both time and money from companies to upgrade their communication systems and tailor features to the needs of staff members.
What’s more, these robust systems will also inevitably create a knowledge gap and will require significant training for staff to utilise these tools effectively. Companies thinking about an AI-based comms future must, therefore, take the necessary steps to provide staff with the tools and development needed to both access and benefit from the technology’s capabilities.
Looking ahead to the future of UC, an AI-enhanced future is most certainly on the horizon. Its opportunity to overhaul outdated systems and make way for more intelligent communication, better decision-making and greater productivity is clear, and it is already showing some signs of its true value. Should AI continue its advance and overcome current barriers to adoption, employees can ultimately look forward to more intuitive communication in a UC-driven enterprise environment.
Digital transformation is more than just a buzzword; it is a critical process for businesses that want to keep pace with today’s mobile-first consumers. It is also non-negotiable, with the Digital Transformation 2018 report stating that 40% of organisations will no longer exist in 10 years if they fail to bring in new technologies to power digital transformation.
By Ian Massingham, Director of Evangelism, Amazon Web Services.
One of the key enablers of digital transformation is cloud computing, the new normal for organisations large and small. The cloud lets organisations experiment and innovate cost-effectively, helping them move fast to gain competitive advantage and give customers the flexibility and choice they demand.
While demand for cloud computing is growing, even in government organisations and regulated industries, so too are the requirements that organisations place on their cloud partners. They are no longer looking for a technology partner for life; instead, they want the solutions and scalability that will help them adapt to the fast-changing needs of their customers.
A relationship of trust
Cloud computing today delivers much more than virtual machines or virtual storage. It allows customers to move fast, operate more securely and save substantial costs while benefiting from scale and performance. Many cloud providers also deliver services that help organisations deliver on digital transformation projects, from analytics and artificial intelligence to security, virtual reality and application development.
However, the crux of an organisation’s relationship with its cloud provider is trust; when your business infrastructure lives in the cloud, you need that cloud to be available and resilient. This is why customers expect always-on uptime, world-class security and competitive pricing. This trust can be easily broken by issues including service disruptions, price hikes and security lapses. Should this occur, organisations want the option of walking away. Sadly, some cloud providers make this difficult with efforts to ‘lock in’ customers.
Moving away from vendor lock-in
Before the advent of cloud computing, traditional IT systems were delivered via long term contracts and upfront payments. Cloud changed this, enabling organisations to pay for only what they need on a subscription basis. This approach not only provides a financial benefit, it also lets organisations walk away at any time, moving to another provider that can restore their trust or better help them evolve in line with changes in their marketplace.
Customers have responded well to the cloud model and are looking to get away from the “old way” of doing things and avoiding the “lock-in” associated with high hardware costs, additional software licensing and support needed for in-house data centres. In fact, the frustration with vendor lock-in has been exacerbated as many contracts force customers to postpone their move to the cloud.
“We are looking to adopt a cloud-based model as far as possible but we have a lot of legacy applications and a lot of long-term contracts which we can’t really seek to re-procure with something already in place,” said Marion Sinclair, Head of Strategy and Enterprise Architecture at Kensington and Chelsea Council.
New platform, same frustrations?
On transitioning to a cloud environment, organisations expect to be free of vendor lock-in. Unfortunately, some cloud providers are pushing technologies that effectively keep their customers locked into their platform. One example of this is the cloud abstraction layer, where vendors add a layer between their infrastructure and the app. This layer hides the implementation details of how the app’s functionality operates, making it difficult to move to a new cloud platform.
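One common defence against this kind of lock-in is for the application to own its own thin interface, so that moving provider means writing one new adapter rather than rewriting the app. The sketch below is purely illustrative – the interface and class names are hypothetical, and a real adapter would wrap a cloud SDK rather than a dictionary – but it shows the shape of the idea.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Thin, application-owned storage interface (hypothetical names).

    Coding against this interface, rather than against a provider's
    proprietary abstraction layer, means a migration only requires a
    new adapter class; the application code is untouched.
    """
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; a real one would call a cloud provider's SDK."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

# Swapping providers means swapping this one line, nothing else.
store: ObjectStore = InMemoryStore()
store.put("report.csv", b"q1,q2\n1,2")
print(store.get("report.csv"))  # → b'q1,q2\n1,2'
```

The design choice is simply about who controls the boundary: an interface the customer owns preserves the option of walking away, whereas a vendor-supplied abstraction layer moves that boundary inside the vendor’s platform.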
Customers moving to the cloud must be careful to avoid further lock-in; it can make organisations less nimble and adaptable to changing business conditions, and can also greatly impact growth, innovation, business costs and flexibility.
Cloud computing has quickly changed the way that IT is built, developed and deployed. Moreover, it has been critical in supporting organisations with a focus on fast-paced digital transformation. However, to truly keep pace with their market and customers, organisations must avoid lock-in, leaving them free to select the best IT building blocks for their transformation efforts.
AWS believes that customers should have a choice to change cloud service providers (CSP) and avoid customer lock-in. We strive to maintain customer loyalty by innovating, and improving and creating new services. Customers pay only for the services they use and can switch CSPs at any time. AWS provides access to the highest levels of security, but without a large upfront expense, and at a lower cost than in an on-premises environment. AWS offers both a secure cloud computing environment and innovative security services that satisfy the security and compliance needs of the most risk-sensitive organisations.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 1.
The number of intelligent edge devices which can gather and analyse data is growing rapidly. They include consumer gadgets (smart watches and phones), IoT devices such as smart controllers for our heating and lighting and robots in manufacturing, medical diagnostic systems and autonomous vehicles – all of which might be termed ‘intelligent client mark 2’. At the same time an increasing number of organisations are moving their applications to the cloud, where intelligence resides at the centre of the network rather than with the connected end user device.
By Richard Blanford, Chief Executive, Fordway.
The question we need to ask is: how can the two co-exist and where is the best place to process information?
As always, the answer is: it depends on what you are trying to achieve. The growth of smart devices and embedded systems certainly does not mean that cloud is in decline, as some have erroneously predicted. Cloud and intelligent devices are simply the next steps in the regular waves of centralisation and decentralisation which characterise the IT sector. One moment we conclude that the best place for intelligence in the network is at the edge, for example in the PC, and then changing technology means that the most logical place for that intelligence becomes the centre. Organisations simply adapt their infrastructure accordingly. Mainframes never died when client-server came along; instead, the terminal became a PC, and the mainframe became the database server.
Cloud moves data processing to the centre to enable organisations to take advantage of economies of scale, flexibility and scalability whilst freeing up real estate by eliminating local data centres and computer rooms. At the same time budgets move from capital expenditure to operating expenditure, which is particularly advantageous for start-ups and those who prefer to use their working capital elsewhere rather than have it tied up in IT infrastructure.
However, the problem with cloud is latency. This does not affect some applications. For those which do not need to operate in real time, the need to wait for information to transit six router hops and three service providers to reach the cloud datacentre and then do the same on the way back is not a problem.
In contrast, for intelligent edge devices such as a building management system or groups of manufacturing robots, there is a need to process information in real time, and so for them cloud’s latency is becoming a major issue. They need to have data processing capability at the edge, both to minimise latency and limit the volume of data sent to cloud datacentres. For example, a robot scanning fruit on a conveyor belt in a factory and picking off substandard items needs to make instantaneous decisions, while an autonomous vehicle needs information on the changing obstacles around it.
Different sectors will require different approaches, and each will need to consider where best to place the intelligence in their network to meet their specific needs.
Organisations using intelligent edge devices can still benefit from the scale and flexibility of centralised cloud processing and storage. It is also ideal for quickly setting up a new development environment, and so can be used very effectively to develop new algorithms and to train systems using large volumes of data. The developer can then bring the intelligence to make real time decisions to the edge device to enable it to act autonomously.
Take the example of a facial recognition system. Cloud can be used to store petabytes of data so that the system can be trained with many thousands of photos. Once the algorithm has been developed, it can be loaded into the camera control system so that the initial facial recognition takes place at the edge. The system can then revert to the data stored in the cloud if further analysis or confirmation is required.
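The train-in-the-cloud, decide-at-the-edge split described above can be sketched in a few lines. Everything here is illustrative: the gallery, the dot-product similarity and the threshold stand in for a real model exported from cloud training, and the "cloud" branch stands in for an escalation to the central service.

```python
def recognise_at_edge(embedding, local_gallery, threshold=0.8):
    """Match a face embedding against the small gallery cached on the
    edge device; defer to the cloud only when confidence is low.
    """
    def similarity(a, b):
        # Dot product as a cosine-like score on unit-length vectors.
        return sum(x * y for x, y in zip(a, b))

    best_id, best_score = None, 0.0
    for person_id, reference in local_gallery.items():
        score = similarity(embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score

    if best_score >= threshold:
        return best_id, "edge"   # instantaneous local decision
    return None, "cloud"         # escalate for further analysis

# Toy gallery of two known faces as 2-D unit vectors.
gallery = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
print(recognise_at_edge([0.95, 0.05], gallery))  # → ('alice', 'edge')
```

The point of the pattern is the latency split: the common case is resolved on the device with no round trip, and only ambiguous cases pay the cost of transiting the network to the cloud datacentre.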
In contrast, the insurance sector also uses massive volumes of data, which is analysed by actuaries to enable underwriters to make policy decisions. Here real time decisions are not required, and the economies of scale provided by cloud processing offer significant advantages.
Even those organisations working at the cutting edge of robotics and AI will benefit from cloud’s scale and capacity. However, their smart edge devices will need to rely on inbuilt intelligence, supported by cloud services, if they are to succeed.
The inception of Wi-Fi saw the technology heralded as the go-to connectivity solution. Just a few years ago, headlines told the world that Wi-Fi was the key to ‘connecting your life.’ And with Wi-Fi put on a pedestal, many have viewed it as the pioneering technology for the advancement of the IoT.
By Douglas Castor, Senior Director, InterDigital Labs.
However, a string of advancements has led the industry to place its IoT hopes on cellular technologies. Recent research from the GSMA’s Mobile World Live revealed that only 4% of global operators surveyed believed Wi-Fi to be the right technology for the IoT, placing cellular technologies in prime position for delivering IoT services and use cases.
While Wi-Fi offers cheaper deployment for IoT development, the capability of cellular technology appears to have stolen the spotlight.
Cost is important, but capability is crucial
As with all new technologies, cost plays an important role in the IoT’s development. Typically, the more expensive the technology is to develop and deploy, the less industry backing and uptake it is likely to have.
This is where Wi-Fi has the upper hand, as cellular technology is the more expensive connectivity option. From the cost of purchasing spectrum to network upgrades and maintenance, ensuring that a cellular network meets the demands of subscribers can be a costly feat.
Wi-Fi presents a viable, less expensive alternative. Unlike cellular networks, Wi-Fi networks are (for the most part) unmanaged, and as such cost little in terms of maintenance and upgrades. Moreover, with Wi-Fi infrastructure already in place and 4G networks almost buckling under pressure to meet growing demands from subscribers, it’s easy to see why many would believe that the use of Wi-Fi seems like operators’ best option from a cost perspective.
But while cost is of course important, capability is crucial. Ultimately, the connectivity technology that underpins the IoT needs to be stable; it also needs to support huge volumes of data traffic; it needs to be secure; and it needs to have little to no latency. And Wi-Fi, for all its cost benefits, is simply unable to meet these criteria and support the IoT.
Going the distance
The advent of 5G presents an exciting opportunity for IoT development. 5G comes with the promise of better spectrum utilisation and better coverage. For mission-critical IoT applications, this additional coverage will significantly bolster the proliferation of IoT use cases, with cellular availability reaching areas where Wi-Fi coverage is missing. Take smart cities, for example: the ability to transmit large amounts of data in real time to connected devices – be it connected cars, buildings or traffic lights – will be key to smart city success.
And Wi-Fi simply isn’t reliable enough when it comes to this kind of universal, undisrupted coverage. As mentioned previously, because most Wi-Fi networks are unmanaged, a consistent and reliable service cannot be guaranteed. In-building Wi-Fi can be a great connectivity technology, because there are a limited number of connections, and data doesn’t need to travel far. But Wi-Fi networks, when crowded, will throw devices off a network—which can be catastrophic when it comes to mission-critical services such as connected cars.
That’s not to say that 5G comes without its limitations, either. Achieving Gbps burst data rates is becoming possible on Wi-Fi networks, but that same data rate will be costly on a cellular network – at least until 5G millimetre wave is fully deployed. But many IoT use cases do not require such large throughput, and cellular technology now supports a wide range of spectrum options that can be selected for individual IoT uses. Lower frequency bands tend to support lower data rates with higher coverage, while upper bands such as millimetre wave hit Gbps burst rates but suffer from coverage problems: obstacles such as buildings or even trees will seriously hinder their capabilities. While there is research and development currently under way that focuses on the use of licensed and unlicensed cellular to help overcome some of these challenges, operators do need to think carefully about how and where new cell sites are located.
The added extras
Future IoT use cases are also dependent on standardization. Not only does standardization bolster interoperability between devices and networks, it also promotes greater reliability, security and data privacy.
Recent data scandals and cyber-attacks have placed the protection of data at the top of operators’ agendas. While using Wi-Fi instead of cellular networks may prove beneficial financially, operators cannot fully secure or manage Wi-Fi networks and, as such, risk exposing IoT data to hacks. While IoT devices are unlikely to contain sensitive information, they can become the backdoor into an individual’s home or life, or a business’s corporate assets, especially if a network is poorly secured. On standardised cellular networks, hacks are far less likely, and what’s more, operators can guarantee network reliability by having clear visibility of network subscriber use.
Cellular emerges triumphant
There’s no doubt Wi-Fi has helped the IoT’s development, but cellular network advancements mean 5G will offer new capabilities and cost-efficiencies that were previously unimagined. If the telecoms industry is to move the IoT along, then it needs to address Wi-Fi’s inability to provide the coverage and security that will be crucial. If operators can leverage existing 4G and future 5G networks to overcome Wi-Fi’s coverage and security limitations, the IoT dream will become a reality sooner than we think.
Businesses increasingly rely on complex technologies to power their day-to-day business operations. These complex solutions can require highly skilled and supportive expertise, forcing many organisations to spend large amounts of time and effort on the maintenance of basic daily tasks.
By Iskandar Najmuddin, Lead DevOps Consultant, Rackspace.
Automating processes is time-consuming, and as a result, businesses are finding their ability to innovate increasingly restricted. Furthermore, the demand for a highly skilled workforce to support day-to-day business activity is putting pressure on an already overstretched industry, with the situation set to worsen – the British Computer Society has warned that the number of students pursuing a computing qualification could halve by 2020.
Unsurprisingly, cloud technologies and the skills surrounding their implementation are seeing a huge increase in demand. According to a recent study from IDC, nearly all businesses now rely on an infrastructure that uses “multiple private and public clouds based on economics, location and policies”. With businesses now potentially working across several different clouds at the same time, the scale of implementing and managing disparate environments is huge.
It is now vital that organisations quickly acquire the expertise to plug the IT skills gap, whether through upskilling current staff or relying on external recruitment. These evolving needs are changing the IT skills landscape rapidly, as demands faced by businesses dictate which skills are in vogue.
Containing cloud complexity
Container technology comes with an ecosystem that helps simplify the growing business operations hosted within cloud infrastructures. As a technology that has experienced a recent surge in interest, containers are being used by many organisations to simplify the growing complexity of enterprise infrastructure. Container revenue, which reached $762 million in 2016, is forecast by 451 Research to hit $2.7 billion in 2020, demonstrating the technology’s growing popularity.
As executable packages of software, containers include everything they need to run themselves: code, runtime, system tools, system libraries and settings. This is what makes containers incredibly beneficial. In short, containerised software will run predictably regardless of where it is used, even in different clouds. For businesses that span different computing environments, consistency, predictability, isolation, and portability are far easier to achieve, alleviating some of the immediate pressure for a grand-scale, upskilled workforce.
Containers and more
Projects such as Docker, an open-source tool that provides abstraction, automation and APIs to make containers easy to build, manage and deploy, have seen a sharp increase in interest in light of the demand for containers. Using ITJobsWatch, we found that the number of permanent jobs citing Docker has doubled in the six months running up to February 2018, with no signs of slowing down. This mirrors the results of Rackspace’s IT skills analysis in 2016, which revealed that businesses’ desire to stay on top of new tools and their constantly changing features is part of the reason why demand for professionals with Docker expertise grew by a considerable 341 percent between 2015 and 2016.
Docker isn’t the only tool that has been driven by the increase in container demand: Google’s Kubernetes project has seen meteoric growth in demand for engineers, increasing by 919 percent since 2016. As a platform for automating the deployment, scaling, and operations of application containers across clusters of hosts, Kubernetes uses Docker (along with other container runtimes) to both scale enterprise services and ensure they are always relevant.
However, the sudden increase in demand for these skills has created a gap between the expertise available and the rate at which professionals are acquiring the relevant skillsets – and that gap is quickly widening. Thankfully, there are a number of ways that cloud engineers and developers can work towards learning these skills.
Keeping up with demand
Whilst trying to maintain business operations to required demand, many organisations are discovering that they need talented individuals who can work on multiple platforms and computing environments. Hiring new staff is the typical solution to such a problem, yet the growing IT skills gap is making this increasingly difficult to action. There are, however, two approaches companies can take: upskilling existing staff and outsourcing talent.
A solid, universal knowledge of containers is particularly difficult to achieve: the container ecosystem is evolving at such a pace that any detailed curriculum would soon be out of date. This explains why, at this moment, upskilling and outsourcing talent remain the best methods of keeping up with the demand for containers.
Upskilling and outsourcing are only temporary solutions to an ever-growing IT skills crisis. The widening skills gap in technology must be addressed by technical stakeholders and businesses with great urgency. Those who don’t foster talent will ultimately be out-paced, out-innovated and ousted from the market by competitors who do keep pace with the change.
It is easy for engineers to become irrelevant in times where demands of enterprises are changing, and when competing against staff with in-demand talents. Platforms like Kubernetes and Docker are just starting to gain traction, yet these are the foundation platforms that enable flexible Platform-as-a-Service approaches that enable creativity, experimentation and business innovation. Businesses must embrace the evolution of the tech landscape and ensure the skillsets needed are fully accessible.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 2.
‘Digital transformation’ has become one of the most widely used buzzwords in today’s technology lexicon. Despite the term being bandied about for many years now and all of the talk of a digital economy, independent data suggests the talk has yet to become action. A global study of IT professionals conducted by Ovum discovered that only eight per cent of those in the enterprise consider themselves truly digitally transformed, while 23 per cent are only in the early stages.
By Steve Miller-Jones, Vice President of Product Strategy, Limelight Networks.
One of the key considerations for enterprises undertaking digital transformation is moving their compute power to the ‘edge’ to exploit that other buzzword of our times, the Internet of Things. As businesses deploy new technologies and develop a better understanding of IoT, they’re realising the need for decentralised, distributed computing that’s performed closer to connected devices at the edge of the network. By taking the processing power out of a traditional data centre or the cloud and placing it nearer the device, enterprises are reducing the latency incurred in their now data-intensive digital operations.
So, edge computing has become an important component, but that isn’t to say it’s a silver bullet. As businesses strive for lower latency and increased security, they’re coming to the realisation that edge computing isn’t enough.
The problem with the public internet
For an IoT deployment to be successful, critical information needs to be delivered quickly and securely between the device(s) and the management system. This, in short, is everything that the public internet is not designed to do. It is crowded, inefficient and significantly less secure than a private network, and enterprises that want to make a success of their IoT project should steer clear of it.
Latent expectations
Since the public internet is a shared resource, it’s susceptible to congestion. While we think of the internet as a giant, inexhaustible virtual resource, the reality is that internet capacity is finite. With more connected devices and growing appetites for more complex content, consumers place more strain on this capacity, causing network congestion that significantly slows things down.
Take, for example, online video. Cisco projects video will make up 82 percent of consumer internet traffic by 2021. Video content alone already consumes a large share of internet capacity today and will only increase as more streaming capabilities are added and adopted such as video surveillance, ultra-high-resolution formats, virtual reality and augmented reality. All of this content congests the public internet, and just as more highway traffic leads to longer commute times, congestion on the internet leads to longer wait times (latency).
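The highway analogy can be made concrete with a textbook M/M/1 queueing model (an illustrative simplification, not a model of any real network; the 10 ms service time is a hypothetical figure). Waiting time does not grow linearly with load – it explodes as a link approaches saturation:

```python
# W = S / (1 - rho): mean time in system for service time S at utilisation rho
def mm1_wait(service_time_ms, utilisation):
    """Mean time (ms) a request spends in an M/M/1 queue."""
    return service_time_ms / (1.0 - utilisation)

# A hypothetical 10 ms service time at rising levels of congestion
for rho in (0.5, 0.8, 0.95, 0.99):
    print(f"utilisation {rho:.0%}: {mm1_wait(10, rho):.0f} ms")
```

Doubling traffic on a half-empty link barely matters; the same increase on a nearly full link multiplies latency many times over, which is why shared public capacity degrades so sharply at peak times.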
For enterprises deploying IoT on public networks, latency can lead to serious issues. IoT use in the enterprise often requires low latency, such as an application that’s monitoring a gas pipeline for leakages or a remote device monitoring a patient’s clinical condition. In these situations, delays could be deadly.
Securing the endpoint
While public internet use can cause serious latency issues for IoT functions, it’s also important to consider the security issues to which the public internet exposes businesses. The pervasive use of connected digital technologies across businesses has dramatically increased the potential of malicious attacks. In particular, IoT creates new security headaches as numerous devices with minimal security measures constantly connect to the internet.
Hacks that have taken place into connected cars, monitors, cameras, thermostats and more show the vulnerabilities businesses expose themselves to when deploying IoT devices on the public internet. Think of it this way: the public internet was designed for open communications, and this type of infrastructure makes it inherently more open and thus vulnerable to cyberattacks and hacking.
Putting your data in the fast lane
If businesses want to improve the efficiency of IoT-enabled communications, they need to put their data in the secure fast lane. By taking advantage of a private network that bypasses the congestion and security challenges of the public internet, enterprises can create a strong foundation upon which to build their IoT and edge computing initiatives.
Having an edge presence via Points of Presence (PoPs) globally, combined with capacity and connectivity, is crucial to making the adoption of a private network a success. In meeting these demands, many enterprises are realising the role that content delivery networks (CDNs) can play in supporting efficient edge computing. CDN workloads are primarily egress, meaning that the data is sourced from few origins, but delivered to many endpoints. Meanwhile, many edge compute and IoT workloads are primarily ingress, in that data is collected from many endpoints. This natural synergy allows the data processed in edge compute workloads to be immediately hosted, removing the need to develop additional communication infrastructures. These elements combine to create a strong support system, while the private framework isolates traffic flows away from the congested and vulnerable public internet.
Consider what an IoT application of edge computing might look like in an industrial setting. In the earlier case of the gas pipeline, the application can help identify potential issues considerably quicker than would have been previously possible. Taking advantage of this increased speed of action requires having the infrastructure in place to quickly process massive volumes of video and image data to quickly detect and share alerts on leakages or other anomalies. Edge computing on the actual pipeline infrastructure augmented with private network connectivity provides a faster, more secure means for IoT devices to process this kind of vital information.
Turning digital transformation hype into business reality
If businesses are to successfully convert hype into business impact, they need to make next-generation digital infrastructure a priority in 2019 to support their digital transformation efforts. The public internet simply can’t meet the demands of IoT devices when it comes to low latency and high security. If edge computing initiatives are to succeed, building a strong network infrastructure – one that delivers the low latency and high security needed to manage the high volumes of data that are the lifeblood of digital transformation projects – has to become a critical business objective.
In the last five years, we've seen a curious phenomenon play out in the business world: companies have spent more and more on security, yet data breaches continue. Organisations need to realize that this growing challenge affects the entire business world, and learn from incidents that have plagued other companies.
By Ram Vaidyanathan, Product Manager, ManageEngine.
“What I did 50 years ago is 4,000 times easier to do today because of technology,” says Frank Abagnale, 70-year-old FBI security consultant and former con man. His exploits as a check forger and impostor in the 1960s were showcased in the 2002 film Catch Me If You Can. Back then, it took a lot of preparation to complete a mission-based, malicious, and catastrophic attack. Today, while we may be better equipped to defend against attacks such as Abagnale's that were far ahead of their time, we're now worse off because of the number of vulnerable points a cybercriminal can exploit.
Some attacks from the last four years
Let's look at attacks at five different organisations: SingHealth, Google, SunTrust Bank, Cosmos Bank, and JPMorgan Chase.
SingHealth: SingHealth is Singapore's largest group of healthcare institutions, serving around 3.8 million patients each year. Between June 27 and July 4, 2018, a security breach compromised the personal data of 1.5 million SingHealth patients in what became Singapore's biggest cyberattack ever. The attackers accessed patients' sensitive information, including their name, gender, identity card number, address, race, and date of birth. Furthermore, prescription details of 160,000 patients, including those of prime minister Lee Hsien Loong, were stolen.
The initial breach was due to malware that was inadvertently downloaded by a front-end employee through a malicious website or phishing email. The malware allowed the attackers to obtain this employee's account credentials, through which the attackers could access all the applications this employee had access to. From there, the attackers could lurk in the network and sniff out particular servers, including domain controllers, that stored all authentication information. Then, they gained privileged access to the patient database.
Google: Anthony Levandowski worked in Google's autonomous car division until January 2016, when he left to found Otto Motors. Just seven months later, Otto was acquired by the transportation network company Uber. It has been alleged that just before his exit from Google, Levandowski downloaded 9.7GB of confidential files and design trade secrets.
The charge from Google was that, as a user with privileged access, Levandowski had the permissions to carry out the breach; Levandowski also attempted to cover his tracks after the deed was done. Uber finally settled with Google out of court for USD 245 million.
SunTrust Bank: On April 20, 2018, SunTrust Bank, a large, US-based bank holding company, revealed that a former employee tried to steal information—including names, addresses, phone numbers, and, in some cases, even account balances—of 1.5 million clients. It was also alleged that this former employee tried selling the data to a criminal party.
While details on how the former employee gained access have yet to emerge, the breach itself is not surprising. Kamalakannan Subramani, manager of IT services at Zoho Corporation, says, "Even some larger corporations fail to take adequate measures to deprovision accounts of former employees. Proper deprovisioning can occur only if proper provisioning was done in the first place. Otherwise it's easy to miss."
Cosmos Bank: Cosmos Bank is a 112-year-old cooperative bank in India, with deposits of more than INR 156 billion (USD 2 billion). Between August 11 and 13, 2018, the company likely fell victim to an attack carried out by the Lazarus Group of North Korea. Attackers probably gained an initial foothold through spear phishing. From there, the attackers targeted the bank's ATM infrastructure.
Under normal circumstances, a cash withdrawal request from an ATM would reach the bank's core banking system for authentication. However, the attackers created a proxy switch which authenticated each of their fraudulent requests. The end result? Close to INR 9.4 billion (USD 13.5 million) was siphoned off through ATMs from 28 different countries.
JPMorgan Chase: The IT security team at the American bank JPMorgan Chase discovered a major data breach in July 2014. The names and email addresses of more than 70 million customers were stolen. The criminals initially waged a phishing attack to obtain employee credentials. At the time, JPMorgan Chase had two-factor authentication deployed in almost all of its servers, except for one server used by a third-party company. All attackers needed was this simple but costly oversight to gain access to JPMorgan Chase's infrastructure.
So how do you defend against all this?
Companies need to understand the modus operandi of cybercriminals. Once a cybercriminal gets into a company network, they may spend a long time trying to escalate privileges and move laterally before completing their mission. Some of the ways attackers gain an initial foothold are through phishing emails and malicious websites. Once the attacker gets into the network, they may employ tactics such as port scanning, token theft, pass-the-hash, and sometimes even social engineering to move laterally.
It may be months before any overt activity even occurs; in the meantime, the attacker could be just lurking around gaining more and more privileged access and making their presence a normal occurrence. They may even access certain classified files and folders, but at a rate that will not arouse any suspicion. Attackers may also try covering their tracks once they exfiltrate data. Some ways attackers attempt to hide their activities are by clearing event logs, disabling auditing, or sometimes a combination of both.
Therefore, enterprises need to shift their mindset from relying solely on perimeter protection to emphasizing vulnerability detection. A good way to start would be to do an exhaustive risk assessment and plug all holes.
Organisations also need to test their ability to prevent, detect, respond to, and contain an attack. This can only be done if the IT team assumes that an attack will definitely happen, and runs through real simulations. The organisation may also find it worth their while to employ an ethical hacker to help them with this. An ethical hacker who goes by the moniker of Freaky Clown says, "I have legally broken into hundreds of banks, and I have only been caught two times and that too because of the client's mistake."
Finally, companies need to invest in the right threat intelligence systems, systems that can correlate different network anomalies. But this alone is not enough. The insight provided by these systems should in turn be correlated with user behavior analytics (UBA). UBA uses sophisticated machine learning technology and an analytical approach to create a baseline of normal activities that are specific to each user, and notifies security personnel when there is a deviation from this norm.
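As a sketch of the baselining idea, per-user anomaly detection can be reduced to a simple statistical test. Real UBA products use far richer machine-learning models; the figures, threshold, and metric below are purely illustrative:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a reading that deviates from this user's own baseline
    by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_value - mean) > threshold * stdev

# Files accessed per day by one (hypothetical) user over two weeks
history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 12, 14, 13, 15]
print(is_anomalous(history, 14))   # False: an ordinary day
print(is_anomalous(history, 400))  # True: sudden bulk access is flagged
```

The key point is that the baseline belongs to the individual user, so the same activity level can be normal for one account and a red flag for another.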
What will happen in a highly-digitised future?
In the future—as technologies such as smart devices, augmented reality, and the Internet of Things become common—the number of vulnerable endpoints in a typical organisation will increase. As if those won't be difficult enough to contain, cybercriminals may also start exploiting these vulnerable endpoints by employing artificial intelligence. Machines could be taught to infiltrate and perform malicious attacks on their own and bring organisations to their knees.
Imagine a situation in which a self-learning machine turns a driverless automobile into a weapon. Or picture a future in which a combination of holography and brain decoding technology allows people to have meetings between their virtual selves in the office. What if a cybercriminal impersonates a CEO’s virtual self and compromises the business by giving the wrong instructions during a meeting?
In scenarios like this, organisations would need to detect deviations in behavior with highly-sophisticated AI tools of their own. And these AI tools would just be a single, yet important part of a highly-layered and tight defense strategy.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 3.
Aricent recently developed an Open Source edge computing platform for Deutsche Telekom. Here, Shamik Mishra, the AVP of Technology and Innovation at Aricent, who worked on that project, answers some questions about the edge.
James Nesfield, CEO of Chirp, comments:
"The hype around the edge has certainly intensified as companies look for new ways to efficiently process ever-growing pools of data. Now, we are seeing a general shift towards more compute - specifically AI - being enabled at the edge device, so more processing is being decentralised and pushed out to edge devices. In part this is in order to cut down on data throughput demands, and reduce latency. As this shift happens, and edge devices have more processing available to them, the need for them to be well-connected increases. So we are seeing new emerging connectivity standards coming out to support the increasing capabilities of edge devices, such as LiFi, Sigfox, NB-IoT, and data-over-sound.
"Security, especially from a physically, geographically managed end-to-end perspective - edge, fog, cloud - remains unsolved in a comprehensive, consistent manner.
"As businesses kickstart their digital transformations, we will see the current trends towards more AI and data aggregation at the edge, and increasing demand on existing connectivity networks, continue. When you’ve got more data and more processing happening at the edge, the requirement for easy and reliable connectivity between those devices - no matter what technology they are using to connect and share data - becomes paramount.
"In particular, ongoing advances in both hardware and machine learning technology now enable voice to be processed at the edge, increasing the possibilities for instant human-machine interactions. Modern voice systems are a great example of this - consider the ability for speakers and microphones to listen and respond to environments. For example, Audio Analytic has developed audio recognition software that could allow an Amazon Alexa to detect the sound of breaking glass or a break-in in the home. Similarly, Rainforest Connection uses mobile phones in the Amazon rainforest to listen for logging activity and send a text alert to authorities who can determine if it’s illegal and then stop it. Audio is a really underutilised medium at the edge today, and there are a whole range of fascinating use cases for it."
Falk Weinreich, Senior Vice President at Colt Data Centre Services, believes that edge computing is the answer to addressing latency issues:
From streamlined processes to improved operational efficiencies, the evolution of Industrial Internet of Things (IIoT) and the growing popularity of AI adoption has driven considerable benefits for businesses. The global uptake of these technologies has had significant knock-on effects on network providers and their operations.
Data centre providers traditionally embraced a centralised cloud model for their data processing needs. From storage and computing, to processing and analytics, these operations took place in a centralised data centre. Often, these data centres would be in a location different – and physically far – from the location of data generation. As businesses increasingly embark on a machine-led industrial journey, the centralised cloud model has swiftly become outdated.
Relying on the traditional cloud model can slow processes down and give way to increased disruption. Consider the demands of autonomous fleet trucks that depend heavily on accurate GPS capabilities and tracking devices. The only way they can ensure they are operating at peak-efficiency is by relying on low-latency data transmission. Even a millisecond’s delay in navigation information could lead to catastrophic traffic consequences.
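A back-of-the-envelope calculation (the figures are illustrative, not taken from any real fleet) shows why even small delays matter at highway speed:

```python
def blind_distance_m(speed_kmh, latency_ms):
    """Metres travelled while waiting on a delayed navigation update."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

# A truck at 90 km/h covers 25 m every second...
print(f"{blind_distance_m(90, 100):.1f} m")  # ...so 100 ms of latency means 2.5 m travelled blind
```

Routing that update through a distant centralised data centre adds tens of milliseconds each way; processing it at a nearby edge site removes most of that round trip.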
As such, it is no surprise that latency has become a critical criterion for businesses when choosing their data centre providers.
The way to achieve this is by embracing edge computing. Moving data and applications to the edges of a network narrows the distance between users and data, resulting in improved speed, reliability and efficiency. This allows data centres to generate, process and analyse large volumes of data in a fast, stable and consistent manner, while businesses are able to maintain operational run-time effectively.
Considering the benefits, it is clear that the future of our society lies at the edge.
By Jason Collier, Co-founder, Scale Computing.
The IT landscape is constantly evolving and the last few years have seen many technologies move from concept to product, from slow and time consuming to fast and efficient, from big and clunky to small and space saving. Today’s IT landscape is all about moving towards micro data centres, as the demand for edge computing drives the need for quick and instant access to data.
Edge computing is localised computing for systems that operate away from the primary data centre. Typically, these systems rely on connectivity and reliable performance – take a self-driving car, for example: you would not want there to be any delay in the system deciding to brake or accelerate. It is exactly this reliance on connectivity and performance for real-time diagnostics that means cloud computing is not always a good fit.
With the proliferation of IoT and AI, and as more devices connect to the internet and to each other, edge computing is at the top of the boardroom agenda for IT. So, as we move closer to the edge, here are four common myths about edge computing…BUSTED.
1. Edge computing is a brand-new technology
Edge computing is most definitely not new – history shows us that if you want to predict the next technology, you should look at what is already being used.
While self-driving cars are very much still in development, edge computing is currently solving an age-old challenge – providing reliable, capable computing power for businesses spread out across multiple locations; in other words, remote and branch offices (ROBO). For organisations including retail shops, oil rigs, transport like planes or trains, manufacturing and medical facilities, edge computing is providing a service that is essential to their day-to-day operations – consistent and dependable access to data.
Essentially, edge computing is any computing that takes place outside your data centre, away from your IT staff. It could involve only a few remote sites or it could be hundreds or thousands of sites, such as retail locations. These sites could be across town or around the world. Regardless of the distance, they all have the same needs and requirements including disaster recovery, remote management and high availability.
So, it is not new, just reinvented. Edge computing is certainly a new term, but it’s bringing back to the agenda what many distributed enterprises have long looked to achieve.
2. Edge computing replaces the main, centralised data centre
We should think of edge computing as an augmentation of the data centre being driven by IoT, not a replacement of it. There are more devices connected today than there ever have been in history and, while edge computing is required to make these devices work the way they were intended, it is not able to facilitate this without a centralised data centre.
This is why edge computing is so beneficial for ROBOs - these distributed organisations face huge challenges in working away from the main office across multiple devices, while still being required to operate efficiently and effectively – with edge computing businesses can benefit from real-time data analysis and enhanced AI and IoT in remote locations.
3. Edge computing is expensive
Everything can be made to be expensive, but the bottom line is, edge computing does not need to be expensive. Because edge computing is a compute source away from the main data centre, its resources can be kept to a minimum. For example, in a hospital it might be a piece of medical equipment, in a retail shop it might be a cash register, and for the ROBO it might be a single IoT device. It is flexible and most definitely cost effective if supported by the right technology. Opt for something that is cost-effective, scalable and reliable.
4. Edge computing is complex and difficult to manage
Edge computing constitutes anything outside of your centralised data centre – say, any deployment smaller than six racks. So, if the system itself were complex and difficult to manage, it would simply fail.
The reason edge computing works for the ROBO business is precisely because of its easy and minimal management requirements. Take retail for example – each individual supermarket does not have an inhouse IT team, at best maybe there is one IT manager but often there is no one at all. But there are dozens of cash registers, barcode scanners, self-shop scanners, alarm tags, de-activating alarm sensors – the list goes on. The edge computing system is designed with simplicity at its heart so that the management it requires is minimal. This means, in organisations like supermarkets, the team on hand can focus on excellent customer service rather than IT management.
So, we have now busted the four myths of edge computing. It’s a new term but not a new concept, and it’s important to understand how it works and the benefits it can bring. Edge computing is definitely emerging as a hot topic, and it can help modernise traditional data centres – but only if you understand exactly what it is.
With nearly half of respondents to PwC’s 2018 Global Consumer Insights Survey having already purchased, or planning to purchase, an artificial intelligence (AI)-enabled device like Google Assistant or Amazon Alexa, it is clear that there is growing acceptance of – and even enthusiasm for – human-bot interaction. And this trend is only set to continue, with more and more companies adopting a hybrid workforce model, made up of human workers assisted by digital colleagues who can perform some of the everyday, rote tasks that humans don’t want to do.
By Martin Linstrom, Managing Director UK&I, IPsoft.
The IT service desk is one area poised for great transformation by digital colleagues, promising a future with no more tickets. IT teams play a crucial role in maintaining an organisation’s security and finding new ways to innovate processes and operations by leveraging new technologies. AI and automation present an immense opportunity to eliminate these teams’ mundane, repetitive tasks and reduce the number of handoffs between customer service agents by giving employees access to self-service portals, through which they can resolve their own IT issues.
In fact, according to Forrester’s 2019 predictions, over the coming year more than 40% of firms will create digital workers powered by AI and robotic process automation (RPA). What’s more, analysts predict that automation will eliminate 20% of all service desk interactions through the integration of cognitive systems, chatbot technologies and RPA.
Translating technical issues
Many enterprises are already looking at investing in AI technologies to provide better internal IT support for their staff. Large businesses are looking to automated ticketing systems, such as Jira, to provide a quick and easy service for basic admin tasks, like password resets and WiFi access codes. Basic chatbots can answer many of these simple support queries and have proven effective in speeding up IT ticket resolution. However, their inability to solve more complex tasks, address multiple problems at the same time, or help those who don’t know the correct terminology means that human IT service consultants must still remain on hand for handoffs, via the phone or a messaging app, adding further delays and complications to the resolution process.
Indeed, implementing automation or basic chatbots tactically can in the short-term reduce long user wait times, as well as release some of the burden from overworked IT staff. However, these solutions do not have the ability to effectively transform a company’s internal operations nor can they always respond to every request.
The problem is that most employees outside of the IT team have no clue how IT service desk management works; in particular, they don’t speak the language of backend operating systems. Rather, end users speak in unstructured natural language, while backend systems can only process structured data and inputs, limiting the ability of non-IT-savvy staff to solve a complex problem themselves.
So how can we bridge the gap between an end user’s IT issue and a speedy resolution, without a human middleman (IT support teams) and ticketing systems? By using natural language processing. Unlike basic Q&A chatbots, autonomic systems can quickly take an end user’s natural language request and translate it into a structured form that the backend systems can understand and execute against automatically.
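The translation step described above can be sketched very roughly as follows. This is an illustrative toy only – a real autonomic system would use trained NLP models rather than keyword patterns, and the intent names and fields here are invented for the example, not any vendor's actual API:

```python
import re

# Hypothetical intent patterns mapping natural-language phrases to
# structured service-desk actions (illustrative assumptions only).
INTENT_PATTERNS = {
    r"(reset|forgot).*(password)": {"action": "password_reset", "system": "identity"},
    r"(wifi|wi-fi).*(code|access)": {"action": "issue_wifi_code", "system": "network"},
    r"(vpn).*(not working|down|broken)": {"action": "diagnose_vpn", "system": "network"},
}

def translate(request: str) -> dict:
    """Turn an unstructured request into a structured task the backend
    can execute against, or flag it for handoff to a human agent."""
    text = request.lower()
    for pattern, task in INTENT_PATTERNS.items():
        if re.search(pattern, text):
            return {**task, "source_text": request, "escalate": False}
    # No recognised intent: escalate to a human, the fallback the article
    # notes is still needed for complex or unfamiliar requests.
    return {"action": "handoff", "source_text": request, "escalate": True}

print(translate("I forgot my password again"))
```

The point of the structure is the contract it creates: everything after `translate` deals in machine-readable tasks, which is what lets the backend execute without a ticket queue in between.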
Faster requests and easier resolutions
Every organisation needs an end-to-end automation backbone that unifies its existing IT systems, a sort of one-stop shop platform for all employees’ needs. Just like the central nervous system in the human body, which manages vital organs subconsciously, organisations need an autonomic system to take on complex business tasks with limited human oversight (everything from HR to IT to virtual infrastructure management). Enabling an end-to-end automation backbone could help businesses increase productivity while limiting overhead. These savings can then be reinvested elsewhere in the business.
Consider what companies could achieve if we eliminated that gap between front and back offices altogether. Through the layering of cognitive intelligence on top of an automated backend, which translates the user’s natural language requests into structured IT service management tasks, IT problems could be solved in an instant. The need for an intermediary queueing system, such as ticketing, is removed and replaced with an AI-powered system designed to automatically scale to the volume of demand.
Rather than filing yet another IT support ticket when experiencing technical issues, a frustrated employee would interact directly with a digital agent via their preferred means of communication: text or voice. IT service desk management is moving from a convoluted multi-channel process to a simplified two-channel process that involves no intermediaries unless truly necessary, so that IT tickets can be resolved more quickly or removed altogether. In effect, what we’re describing here is on-demand IT services, utilising the power of RPA and cognitive technology.
New year, new IT systems
Enterprises should resolve to phase out ticketing support systems that are time-wasters and barriers to productivity. Think of intelligent automation as a self-improving technology: it doesn’t need to be replaced as it ages, and it doesn’t rely on a vendor to make constant upgrades in order to improve. What’s more, the same system can be implemented across multi-national organisations thanks to the instantaneous translation capabilities inherent in AI-powered digital colleagues. These characteristics are what separate intelligent automation from standard applications of AI, and indeed a strategy that encompasses uses of both can deliver powerful results.
Starting the new year, perhaps this is something to consider as your business examines where to dedicate future investments. Businesses and IT teams should pledge to serve their end users with the greatest efficiency by taking a simple phrase and making it a reality: no more tickets.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 4.
By Stephen J. Morris, Senior Product Manager, Panduit EMEA.
The data centre is here, it’s just not evenly distributed. The drive for lower latency is fuelled by the enablement of technologies such as 5G, which will push data centres closer to the edge, meaning compute power, storage and data centre facilities will need to physically reside closer to where those services are demanded.
Applications such as driverless cars and augmented reality will need to be supported by systems that deliver sub-1ms latency over the end-to-end link, and this will transform the data centre landscape as we know it today.
Data centres are set to get physically larger and more bandwidth hungry. Mass introduction of small wireless cells to support 5G will be, in the main, linked with fibre cables – propelling the need for high density fibre solutions from the ‘macro’ cell site, where small cells are aggregated, right back to the supporting data centres.
We are poised for gigantic growth of data traffic transacting within the data centre. Known as ‘machine-to-machine’ (M2M) or ‘East-West’ communications, M2M traffic is predicted to grow at 44% CAGR between 2015 and 2020 (source: Cisco). Partly meeting this development, 100G is forecast to represent over 50% of data centre optical transceiver transmission capacity by 2019 (source: Infonetics). Additionally, 400G deployment will gain momentum in the coming year and is forecast to accelerate at great pace, representing the majority of Ethernet port shipments by 2021. Developments in 400G bring significant value in terms of cost saving, 4x the density per RU and 4x the scalability. These innovations will also drive a 2x improvement in power efficiency – a positive contribution towards the energy reduction targets that ‘Edge’ sets out to achieve.
Single Mode Fibre or Multi Mode Fibre?
In the past, the selection between single-mode fibre (SMF) and multimode fibre (MMF) solutions was relatively clear-cut – the former provided the highest bandwidth at a higher price; the latter was a cost-efficient alternative for less speed- and reach-sensitive installations.
Today, the critical factors in decision making include: whether the current bandwidth requirement can be economically met; whether reach/distance requirements, including foreseen network-architecture-specific goals, can be achieved; and whether future Ethernet and/or Fibre Channel speeds can be supported, or easily migrated to, over the return on investment (ROI) period. These inter-dependent criteria must all be considered to reach the optimal choice.
With its seemingly infinite bandwidth, single-mode fibre (SMF) has often been considered the safest policy by network designers and data centre operators looking to ensure a future without bandwidth bottlenecks. Traditionally, there have been high costs associated with SMF optical systems; this cost addition is attributed to the price of the optical transceiver modules, while the overall cost is less influenced by the price of the passive optical cabling infrastructure.
At the same time, it is acknowledged that despite the cost penalty, SMF technology does indeed hold advantages over MMF in terms of longer reach and bandwidth capability. But as companies embark on a transition towards high-speed data transport, do alternative fibre cabling systems offer capabilities that previously only SMF could satisfy?
At this point, it is important to acknowledge that the paradigm has recently shifted in terms of decision making when determining which grade of fibre provides the best return on investment in your particular environment. Technological advance now places MMF at the top of the list in terms of making an informed choice for both present and future network needs.
Is the MMF data pipe large enough to support future generations of traffic?
Multimode fibre supports a large proportion of today’s applications and reaches in the data centre at significantly lower cost. Additionally, it has the capacity to meet the future data demands of data centres, with a roadmap that can feasibly support up to 800G Ethernet.
The bandwidth capability of MMF has grown exponentially with advances in optical transmission technology. More recent MMF transmission developments have seen the introduction of faster vertical cavity surface emitting lasers (VCSELs), moving from 10G to 25G, a doubling of the line rate (PAM4) and shortwave division multiplexing (SWDM).
With 40/100GE already being deployed and objectives for 25/50/200/400GE defined by the IEEE, Ethernet is considered the leading networking platform. It can deliver data centre architecture to meet future application needs whilst providing low operating costs today.
Outside of the IEEE, the SWDM Alliance multi-source agreement (MSA) sets out to enable 100G over two MMFs (duplex transmission over a fibre pair), supporting transmission over OM3, OM4, OM4+ and OM5 grades. The inability of SWDM MMF solutions to support ‘break-out’ mode (discussed later) is a limiting factor in the wide adoption of this technology. Further, line rates for MMF are anticipated to accelerate from 50G to 100G in the near future, meaning more efficient MMF applications (fewer fibre pairs required) and a widening scope for MMF ‘duplex’ and ‘parallel’ optics utilising a single wavelength, versus a narrowing scope for MMF applications requiring multiple wavelengths.
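The practical difference between the duplex and parallel approaches is fibre count per link. Using the standard figures – 100GBASE-SR4 runs four 25G lanes over four fibre pairs (eight fibres), while 100G SWDM4 multiplexes four wavelengths onto a single duplex pair (two fibres) – a quick sketch shows how this scales across a switch (the 48-uplink figure is an assumed example, not from the article):

```python
# Fibres per 100G link for the two MMF approaches (standard values).
parallel_sr4 = {"lanes": 4, "fibres": 8}  # 100GBASE-SR4: 4x25G over 4 fibre pairs
swdm4 = {"lanes": 4, "fibres": 2}         # 100G SWDM4: 4 wavelengths, one duplex pair

links = 48  # assumed: one leaf switch's worth of 100G uplinks
print(f"SR4 needs {links * parallel_sr4['fibres']} fibres; "
      f"SWDM4 needs {links * swdm4['fibres']} fibres")
# SR4 uses 4x the fibre but supports break-out; early SWDM does not.
```

This is the trade-off the article describes: SWDM economises on installed fibre, while parallel optics preserve the break-out flexibility discussed below.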
Short-reach applications, such as switch-to-server and M2M interconnections, are where multimode fibre adds real benefit inside the data centre. Laser-optimised, industry-standard fibre now provides a range of capabilities to help data centre operators plan successful infrastructure installations and upgrades, dependent on operational requirements.
Application driven decision making
The rise of software-defined networking (SDN) has caused data centre designers to realign from traditional three-layer topologies (Figure 1) to ‘spine and leaf’ architecture (figure 2).
Collapsing the three-layer architecture to spine-leaf drives higher port/connector density at the spine, as every leaf is connected to every spine switch. A popular practice to resolve higher-density connectivity challenges and increase efficiencies at the spine is to use cable assemblies and equipment that can operate in ‘break-out’ configuration (see figure 3).
Breakout application drives early adoption of high speeds in the data centre.
Taking 40G MMF data centre deployments as an example, over 50% of 40G market adoption was believed to be related to ‘breakout’ configurations, i.e. 40G to 4x10G. The key driver was not 40G end-to-end, but delivering efficiencies in the distribution of 10G.
The same concept of continuous incremental efficiencies will drive the growth of 100G (4x25G) and 400G (4x100G or 2x200G). However, it should be noted that the early generations of SWDM over MMF are unlikely to support ‘break-out’ mode/configuration. This should be considered when evaluating the benefit of SWDM versus single-wavelength high-speed transmission and network architecture design.
A solution operating in ‘breakout’ mode can deliver significant port and cost efficiencies (estimates only).
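To make the breakout economics concrete, here is a rough sketch using the 40G-to-4x10G example above. All prices and port counts are assumed for illustration – they are not vendor figures or numbers from the article:

```python
# Rough breakout-mode efficiency estimate (assumed figures only).
servers = 96                 # 10G server ports required
price_10g_port = 300         # assumed cost per discrete 10G switch port
price_40g_port = 900         # assumed cost per 40G port broken out to 4x10G

discrete_ports = servers             # one 10G switch port per server
breakout_ports = servers // 4        # each 40G port fans out to four servers

discrete_cost = discrete_ports * price_10g_port
breakout_cost = breakout_ports * price_40g_port

print(f"Discrete 10G: {discrete_ports} ports, ${discrete_cost}")
print(f"40G breakout: {breakout_ports} ports, ${breakout_cost}")
print(f"Switch ports reduced {discrete_ports // breakout_ports}x; "
      f"saving {100 * (1 - breakout_cost / discrete_cost):.0f}%")
```

Even with conservative assumed pricing, the 4x reduction in spine-side ports is the efficiency that drove early 40G adoption.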
Cable Reach
Hyperscale data centres are typically very large facilities that house >50,000 server ports. The reach (distance between server/switches) in such facilities often exceeds 150m. With such long reaches it is not unusual to see a mix of 80:20 in favour of single-mode fibre. The opposite can be said for Enterprise data centres, where typically 95% of reaches are <100m and the mix is more likely to represent 80:20 in favour of multimode fibre.
There are advanced grades of optical fibre that offer extended MMF reach far beyond the standards; likewise, enhanced transceiver specifications can achieve the same end. The chart (see Table 1) gives examples of link distances based on standard (non-extended reach) transceivers.
Cost as an Incentive
MMF transceivers are manufactured with lower-cost VCSELs, compared to the higher-cost lasers typically seen in SMF transceiver optics. These VCSELs take advantage of the larger MMF core/cladding diameter (50/125 micron – OM3, OM4, OM4+ and OM5), making the overall solution the lower-cost option.
The structured cabling element (which includes fibre cabling) of a network installation typically represents <5% of an IT manager’s budget, and the cost of the fibre grade itself (SMF or MMF) represents a very small percentage of the cable system make-up of each fibre channel/link. Taking the example of a 10G MMF channel and adding the cost of the transceivers to the passive fibre cable infrastructure, the transceivers would represent circa 80% of the total cost.
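The "circa 80%" split is easy to see with assumed prices. These figures are illustrative only – actual optic and cabling prices vary widely by vendor and volume:

```python
# Illustrative 10G MMF channel cost split (assumed prices, not quotes).
transceiver_pair = 2 * 400    # two 10G-SR optics at an assumed $400 each
passive_cabling = 200         # MMF trunk, patch cords, panels (assumed)

total = transceiver_pair + passive_cabling
share = 100 * transceiver_pair / total
print(f"Transceivers are {share:.0f}% of the channel cost")
```

The implication the article draws follows directly: since optics dominate the channel cost, the grade of fibre you pull has little budget impact, while the transceiver technology it enables has a large one.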
There is a desire to simplify and reduce the cost of single-mode optical transceiver modules, but this may come with compromises. An example is IEEE 400GBASE-DR4, which has a reach objective of 500m – shorter than the multi-kilometre reaches typically seen for SMF solutions today.
The gap between MMF and SMF is expected to narrow in future as faster optoelectronic technology matures and volumes increase. However, as a solution today, the price premium for SMF is typically 1.5x to 4x that of an MMF solution, depending on data rates and the transceiver module type used.
Conclusion
Single-mode fibre continues to play an important role where reaches extend beyond 400m (up to 10G), and typically beyond 150m (at 40G data rates and above), which can include applications across server aisles and across the site.
The introduction of 100G-BASE-PSM4 (parallel single-mode) brings with it a new breed of SMF transceiver, incorporating low cost silicon photonics. This is expected to drive closer price alignment between MMF and SMF solutions, thus, increase popularity of SMF – particularly for greenfield 100G deployments.
However, there is a significant pre-existing installed base of MMF (OM3, OM4, OM4+) in data centres, and legacy network expansion represents circa 50% of market requirement. Both equipment manufacturers and customers have a vested interest in supporting and maximising the return on investment (ROI) of this legacy infrastructure. Therefore, a mass migration towards SMF in brownfield sites is not expected anytime soon.
In any event, the bandwidth of today’s multimode fibre solutions far exceeds its perceived limitations. This being the case, bandwidth should no longer be a barrier to considering MMF for current or future needs. In addition, MMF is the most cost-effective choice for the majority of data centre uses, where reach is typically <150m. The business case for MMF continues to strengthen as the applications, bandwidth and roadmap it supports extend.
Data centres are expected to grow physically in size (see figure 4), and for both present and future multimode applications a key limiting factor in performance – particularly reach – is the fibre’s ability to address the interplay between modal and chromatic dispersion. The time to consider more advanced fibre grades that address this interplay, such as Signature CoreTM OM4 (OM4+) and Signature Core OM5 (OM5+), is… NOW.
OM4 Signature Core (OM4+) offers the best performance for the overriding majority of applications, far outperforming OM5 in most cases.
It is no longer enough to simply pick the next generation of MMF fibre in the belief it returns best value – best value lies in [a] understanding today’s and tomorrow’s application needs and [b] acknowledging the path that advance modulation/line rate is expected to follow.
Panduit has launched a white paper, ‘Light into Money – The Future of Fibre in the Data Centre Network’. Download a copy here: https://pages.panduit.com/light_to_money.html
Best practice works, as it turns out. That’s what DigiCert’s latest State of IoT survey found.
By Mike Nelson, VP of IoT Security, DigiCert.
It needs no mention that the IoT is exploding. As excited as many are about the possibilities of this brave new world, many don’t pause to consider the ubiquitous device vulnerabilities and massive expansion of attack surfaces that an IoT implementation drags along with it. We wanted to know if enterprises were really grasping the implications - good AND bad - that the IoT presents.
In pursuit of answers, we surveyed 700 organisations around the globe, and what we found was a decisive vindication of IoT security best practices. Eighty percent of the respondents listed security and privacy as their top concerns related to IoT. We discovered large variations in the maturity of approaches being used to address security: some organisations were successfully deploying security solutions, while others were still struggling. So what sorts the top-tier companies from the bottom, the IoT-secure from the insecure? According to our results, that difference comes down to five points:
Those that followed these steps were largely safe from IoT-based threats. The organisations that struggled to follow these steps were not.
While only 23 percent of the top performing organisations suffered an IoT-related security mishap, every bottom tier enterprise experienced at least one. Here, 83 percent of the bottom tier reported that they experienced data breaches and 61 percent reported Denial of Service (DoS) attacks. In fact, those that have failed to heed best practices are 6 times as likely to undergo an IoT-based malware or ransomware attack, 12 times more likely to experience an IoT-based DoS attack and 16 times as likely to experience an IoT-based data breach. Wow! Those numbers should be shocking to all of us.
Moreover, when those breaches did happen, they were all the more grievous. 61 percent of bottom-ranking organisations experienced productivity loss and monetary damages – with many reporting losses in the millions. Nearly half (44 percent) reported a hit to their stock price or compliance penalties, deepening the wound to their bottom line. In cash terms, those losses aren’t to be balked at, as 25 percent of the bottom tier reported a loss of at least £26 million over two years. Others reported reputational damages which, in the long run, could end up being more expensive than any fine or bill for mitigation costs. That may only get worse with the arrival of the European Union’s General Data Protection Regulation, which in many cases will require breached companies to publicly disclose their breaches and threatens crippling penalties for those who don’t.
These lamentable results didn't stop there; in some cases they extended to business closures, damaged careers and even criminal prosecution.
As you might expect, top-tier organisations largely evaded these blows, with 83 percent reporting no IoT security issues in the past two years. They experienced fewer such problems; on the occasions they did occur, they recovered faster, with fewer wounds to patch up and, our survey found, almost no associated costs.
In this case, there may be a will, but not a way. One of the details that makes these findings so interesting is that whatever their failures, 77 percent of the respondents in bottom tier organisations considered IoT security to be a big concern. Clearly organisations are broadly cognisant of these problems, but they’re perhaps unable to implement the advice they’ve been given.
On one hand, IoT security is still a nascent and highly specialised field, and – much like cybersecurity generally – there are fewer experts in it than the wider world would like. Our survey found that bottom-tier organisations were 36 percent more likely to struggle to find the appropriate skillsets to secure their IoT networks. By the same token, there is little to compel enterprises to secure themselves beyond the often arcane threat of cyberattack – governments, regulators and industry authorities are still struggling to agree any kind of standard for how enterprises should be securing IoT. From that point of view, the fact that the bottom tier was 23 percent more likely to find the lack of standards challenging is not hard to understand.
The IoT is on an inexorable upward path and there’s no stopping it. Given that, it's crucial that organisations get to grips with the demonstrably successful best practices that can be found everywhere, but are so often unused.
In an interconnected world which the IoT promises to exponentially grow, we’re not just talking about the security of one enterprise but the security of everyone and everything that interacts with that enterprise. In many cases, the effects of poor IoT security could now impact patient safety, driver safety, public safety, economic systems, and even our way of life.
As I’ve highlighted in this article, there are known IoT security best practices that are protecting the organisations using them. It’s time for everyone to take a serious look at security for IoT and make sure we are acting responsibly.
As more companies are assessing their IoT shortcomings, leaders are embracing security and essential protections such as those that public key infrastructure (PKI) and digital certificates can provide.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 5.
There is a forthcoming power struggle with the potential to take down the incumbent in the compute space: cloud technologies. Currently ascendant, the cloud faces challenges from our mobile lifestyles and the Internet of Things, and it can’t keep up.
By Paul Mercina, Head of Innovation, Park Place Technologies.
The shortcomings of centralised cloud resources are leading some experts to predict that distributed edge computing will eliminate the cloud as we know it, but such a wholesale replacement of cloud technologies remains unlikely. Just as mainframes exist today, decades after being declared dead, and enterprise data centers still operate despite the exaggerated rumors of their demise, the cloud will survive.
The stage is being set, however, for different levels of “edginess” to complement cloud capabilities and handle tomorrow’s workloads. The balance of compute power is indeed shifting, but both cloud and edge will emerge stronger.
The cloud has taken over the enterprise, as evidenced by the 96 percent adoption rate reported in the RightScale 2018 State of the Cloud Report. So enticing are cloud solutions that one cloud is no longer sufficient for most organisations: 81 percent have a multi-cloud strategy, usually incorporating more than four different public and private clouds.
As an efficient, consolidated computing powerhouse, complete with diverse, turnkey solutions for enterprises, the cloud could absorb tomorrow’s data processing demands. The problem is, the data can’t get to the cloud.
Fiber optic cable carries signals at about 122,000 miles per second. It’s fast but unfortunately not fast enough for the use cases on the horizon. Augmented reality, for example, requires latency below that which the human brain can detect, optimally less than 7 milliseconds.
Do the math, and a fiber optic signal takes about 0.82 milliseconds to go 100 miles. Communicating from Northumberland to London and back would take nearly 5 milliseconds, and that’s assuming the path travelled is a straight shot and excluding all other sources of latency, from processing to network switching. Even such a short distance is too long.
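The math above is simple enough to check. Using the text's propagation figure of roughly 122,000 miles per second (about two-thirds of the speed of light in a vacuum), and an assumed ~280-mile straight-line Northumberland-to-London distance:

```python
# Propagation delay in fibre at ~122,000 miles/second, per the text.
FIBRE_MPS = 122_000  # miles per second

def one_way_ms(miles: float) -> float:
    """One-way propagation delay in milliseconds."""
    return miles / FIBRE_MPS * 1000

def round_trip_ms(miles: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * one_way_ms(miles)

print(f"{one_way_ms(100):.2f} ms one way over 100 miles")
# ~280 mi straight-line distance is an assumption for this sketch.
print(f"{round_trip_ms(280):.1f} ms Northumberland-London round trip")
```

Note this is propagation delay alone; real latency adds queuing, switching and processing on top, which is exactly why sub-10ms budgets force compute to the edge.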
The fact is, achieving the coveted sub-10 millisecond latency—demanded by driverless cars, smart cities, and many other futuristic solutions—will require local compute. This is the first imperative of the edge.
According to Moore’s Law, compute power doubles every two years. This exponential increase outstrips the pace of network advancement. For example, 10GbE began shipping in 2007, and only in 2018 are there hints of widespread 100GbE adoption. If networks obeyed Moore’s law, the tech sector would be in 640GbE territory.
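The 640GbE figure follows from compounding the Moore's-Law doubling rate over the period the text describes, if one counts six two-year doublings from 2007. A quick sketch (the 2019 endpoint is an assumption to match the article's arithmetic):

```python
# Hypothetical network speed if Ethernet had doubled every two years
# since 10GbE shipped in 2007 (a Moore's-Law pace).
base_gbe, base_year = 10, 2007
doublings = (2019 - base_year) // 2   # six two-year periods (assumed endpoint)
hypothetical = base_gbe * 2 ** doublings
print(f"{hypothetical}GbE")           # vs. the 100GbE only now emerging
```

The sixfold shortfall between that hypothetical figure and real-world 100GbE is the article's point: networks compound far more slowly than compute, so pushing compute outward is cheaper than pushing bits inward.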
Networks are also expensive, while compute is getting cheaper. Consider the much anticipated 5G roll-out. Estimates for delivering this wireless technology in South Korea alone – to a population of 51 million and with an excellent fiber backbone to build on – exceed $8 billion for just one service provider.
Both physically and financially, networks cannot keep up as the world speeds toward 75 billion connected devices generating 175 zettabytes of data by 2025. Making strategic use of available network resources, and deploying compute further afield in its place, will be the only solution. This is the second imperative of the edge.
Edge computing promises to both move data processing closer to the end user, thus reducing latency, and to limit the volume of data transferred over networks, preventing bottlenecks and overload. But “the edge” isn’t one solution. It’s multiple layers, from regionalised cloud services to on-site fog computing and onboard IoT device capabilities. The world’s compute needs will be spread across all of these layers.
A first step into the edge is the expansion among cloud providers into smaller markets. Already, the data center industry is investing in second- and third-tier cities, with regional players being acquired by or merging with larger providers. Telcos may also make use of their extensive real estate portfolios in smaller markets to offer edge solutions. In the near term, moving compute from London to Manchester for an application accessed from northern England could offer edge advantages commensurate with most current technology needs.
Following close behind are likely to be telco offerings coinciding with the 5G introduction. Industry watchers anticipate the installation of micro-modular pods at the foot of cell towers and elsewhere along the wireless network. These would be mostly lights-out mini-facilities with minimal maintenance requirements.
Although the business model for edge products based out of these types of facilities remains fuzzy, enterprises can rest assured that edge capabilities will eventually be packaged for ready consumption, much like cloud services today. This means providers, including today’s CSPs, will be standing in line to help make edge computing as simple as possible, and significant enterprise uptake and an edge-directed shift in compute will ensue.
IoT turns the internet on its head, with end nodes producing more data than they consume. Whereas users mostly spend time on smartphones downloading news, videos, and other media, a single self-driving car, as an IoT example, will generate terabytes of data in a single trip. The question is what to do with all the information that is produced.
For the reasons outlined above—latency and network traffic management—most of this data will be processed onboard or in a very local, wirelessly accessible facility. Only working at the edge will enable the car to “think” fast enough to stop suddenly when an obstacle appears in its path.
The vast majority of the data collected by cars will serve only ephemeral purposes, making possible the near-immediate decisions about acceleration, turning, and so on. It’s difficult to imagine a scenario in which Vauxhall will need to know one-by-one the color of every traffic light encountered on every route by every one of its driverless cars. There will, however, be certain data, generally aggregated across the trip or multiple trips, worth sending to a centralised repository for further analysis and storage. Data will be triaged at the edge—selected, consolidated, and aggregated—before transmission up the line to the cloud.
From a sheer processing perspective, the edge can be expected to handle the bulk of raw data generated by IoT sensors and devices. The industry will shift from a thin client reliant on the cloud to a fat client, or at least a fat edge, doing much of the compute work locally to steer our cars, shut down a manufacturing line when a sensor detects danger, or interpret our gameplay to deliver an appropriate augmented reality experience. But this doesn’t mean the cloud is in decline – quite the contrary.
What will emerge is a continuum of solutions ranging from the centralised cloud to regional data centers and telecommunications towers to facility-based fog computing and increasingly powerful edge devices themselves. Distinct workloads will be allocated to the various edge layers.
These technologies are, however, co-evolving with analytics, machine learning, and artificial intelligence. The value of data will only increase alongside its volumes. Data will be triaged at the edge but aggregate information will be passed back to the cloud for storage and higher level processing. More sophisticated systems will be required to derive the insights organisations will seek from the billions of data points soon to be collected, and these centralised resources will represent immense compute power on their own.
Ultimately, compute may soon be more equally balanced between the cloud and the edge, but both will grow for some time to come.
Industry analysts are predicting great things for IoT, with research from Statista predicting that the market will increase to $8.9T in 2020, achieving a compound annual growth rate (CAGR) of 19.92%. These statistics are impressive and show growing expectations for the scale and ROI that IoT will deliver to businesses. This is compounded by statistics from Vodafone, which find that the number of IoT adopters around the world has more than doubled since 2013, clearly demonstrating that the business case for IoT deployments is gaining momentum.
By Nick Sacke, Head of Product and IoT, Comms365.
However, a bleaker picture of the market is painted by Cisco’s IoT deployment survey, which states that 60% of IoT initiatives stall at the Proof of Concept (PoC) stage. A PoC is a crucial component of an IoT deployment as it provides the foundations for the fundamental design of the project, the business goals it will address and how soon the deployment will provide a return on investment. It’s also a fundamental factor in ensuring that the project can be rolled out fully beyond the pilot stage, so it’s imperative that businesses carefully consider multiple factors at the PoC stage, to ensure the project does not prematurely stall.
Business Objectives
Whilst it’s true that IoT can address multiple core business objectives, including enhanced business insight and efficiency, it is not a miracle technology that will provide business benefit in isolation – in order to derive true business value the wider challenges must be considered across all departments.
When you consider the multiple opportunities that IoT presents for an organisation, it’s clear why some organisations would be tempted to throw all resources at an IoT initiative straight away. However, when you consider the assets and financial resources required for a successful PoC, let alone a full roll-out, a business could quickly become overwhelmed. Instead, businesses should take a step back, consider what the primary objectives are, and ensure they are realistic and can be appropriately resourced.
Collaboration is the key to creating a well-rounded business case for a successful IoT deployment. By gaining consensus for the project across all departments, from planning the initial PoC all the way through to data analysis and achieving wider corporate objectives, stakeholders across the business can buy into the project from day one, alleviating potential negative cultural elements such as internal competition, envy and distrust between departments. Inter-departmental collaboration, combined with an appropriately planned and executed PoC, will maximise confidence in the project from the start, and ensure that the business does not experience the negative ‘silo’ effect of a deployment.
IoT Ecosystem
The level of expertise within an organisation is also a crucial factor, in addition to culture and collaboration within a business. Prior to undertaking an IoT initiative, businesses must ascertain whether their internal personnel have the skill set to confidently undertake such a demanding project, or whether the advice and expertise of external partners is required. The Cisco research is clear that organisations that utilise ecosystem partnerships realise the most successful IoT initiatives, but again this is only true if the providers can also collaborate and work together in a coordinated approach.
Making the Business Case
One significant point to consider when making the business case for IoT is to look beyond the PoC phase and consider what happens thereafter. This is especially important when discussing the financial aspect of the project – for the project to stand a chance of success and kick-start positive change, all stakeholders must be on board from day one and must be prepared to fund the project beyond PoC, with a set budget and timescales in place, otherwise it will never move beyond the pilot stage.
Furthermore, businesses must pinpoint the market challenges, identify the objectives to be achieved from the project and measure against these at the end of the PoC and throughout the project. By working towards specific goals – whether that be identifying new revenue opportunities, gaining visibility of customer usage or realigning business processes – an organisation can generate tangible data results and in turn achieve the highest return from the IoT deployment. With measurable parameters in place, the business case can be expanded further as the primary objectives are assessed against quantifiable IoT data. Regular communication of this data across each business department will keep the focus on common goals and ensure each stakeholder has continued visibility of how the solution is impacting the business.
Project Optimisation
It’s fair to say that even with business-wide collaboration from the beginning and consideration of multiple core business objectives, ongoing adjustments will still be required in order to keep the project on track. By optimising the processes and fine-tuning the technology, especially during the pilot phase, the project outcomes can be enhanced and major disruption can be averted – tenacity is key.
Conclusion
Whilst IoT is not a simple one-step deployment technology, there are several tangible benefits that can be realised early on, positively impacting an organisation’s bottom line and processes. It is essential however that businesses do not underestimate the resources required for a successful IoT deployment. By implementing the key formula of full business-wide collaboration, partnering with market experts and identifying multiple core business outcomes that stakeholders agree on from the start, organisations can lay the foundations for a strong commercial case and a successful deployment of IoT initiatives.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 6.
Says Henrik Nilsson, RVP EMEA, Apptio.
In a recent survey Apptio conducted with IT leaders, we found that the ideal mix between cloud and on-premises infrastructure is 70/30. This is because cloud provides the flexibility to enable IT teams to focus more on innovation, while on-premises is a key baseline for certain mission-critical applications.
However, the 70/30 split isn’t a magic number for all enterprises, and hybrid IT is often more a natural result of digital transformation rather than an essential driver of it. Shifting applications and workloads to the cloud may of course form a large part of such a strategy – 92% of IT leaders prefer using cloud to on-premises infrastructure – but digital transformation shouldn’t just be an unbridled move to the cloud because that represents ‘digital’. It should instead be about results and performance, which means carefully choosing the best infrastructure solution for each task, whether that’s cloud or on-premises.
For example, public cloud, with its flexibility and cost-saving potential, is regularly being adopted for processes which can afford to have less direct oversight from IT departments. However, it’s often the case that certain mission-critical applications need to remain on-premises, whether due to security concerns or the closer control over resources this provides compared to cloud.
The challenge for enterprises in doing this effectively is being able to monitor and analyse resource usage and spend across all parts of the infrastructure. Trying to collate and compare on-premises and cloud data can be time-consuming and inefficient – throw in multiple cloud providers, both private and public, along with the rise of edge, and hybrid IT can become a challenge to manage effectively.
CIOs and other IT leaders need to be prepared to embrace hybrid IT as the best platform for modern IT departments, but on a level that suits their own business, while finding the best approach to monitoring and managing its usage.
Evanna Kearins, Vice President Global Field Marketing, DataStax, offers comment on the evolution of the hybrid IT landscape and whether such an approach is a 'nice to have' or essential for digital transformation.
Companies are moving over to cloud, and they want to take a hybrid approach to avoid lock-in to specific cloud providers. This approach is developing quickly around new applications and green-field projects, but it is more difficult to achieve this around data.
For applications, compute and storage are easier to manage in distributed environments. However, the data side is more difficult, as traditional databases were not developed to run in this manner. These technologies scale around a single master node that coordinates processing and a number of worker nodes that carry out the work. That approach does not scale to the cloud.
Distributed computing has to run across multiple locations: different cloud platforms, mixes of internal private and public cloud, and internal data centres. Without this approach on the data side, it’s difficult to deliver on digital transformation initiatives that can really scale.
Developers are looking at approaches like containers to build new services without being tied to a specific cloud provider. However, this approach only takes you so far. You have to have this same level of independence at the data layer as well. Open source projects like Apache Cassandra™ can provide this ability to run across multiple locations and services independently, while enterprise-class versions of Cassandra can supplement this with improved performance and additional functionality.
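As a hedged illustration of this location-independence at the data layer (a sketch, not taken from DataStax – the datacentre names and replication factors below are hypothetical), a masterless database such as Cassandra replicates each record into every configured location and reads or writes against a quorum of the local replicas:

```python
# Illustrative sketch: how Cassandra-style per-datacentre replication
# settings translate into local quorum sizes. All names are hypothetical.

def local_quorum(replication_factor: int) -> int:
    """Cassandra's LOCAL_QUORUM is floor(RF / 2) + 1 replicas in the local DC."""
    return replication_factor // 2 + 1

# A NetworkTopologyStrategy-style layout: replicas per location, spanning
# a public cloud region, a private cloud and an on-premises data centre.
replication = {"aws_eu_west": 3, "private_cloud": 3, "onprem_dc": 2}

for dc, rf in replication.items():
    # Each DC can serve strongly consistent local reads/writes independently.
    print(f"{dc}: RF={rf}, LOCAL_QUORUM={local_quorum(rf)}")
```

Because every location holds a full replica set, any one of them can fail (or be swapped for another provider) without taking the data layer down – which is the independence argument made above.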
For companies that are looking at digital transformation and running at scale, the ability to grow in a predictable and planned fashion is essential. Cloud native IT designs can help you deliver that scalability while managing costs, and Cassandra is an amazing fit for this.
Chris Fielding, CTO, Sungard Availability Services, comments:
IT decision makers can clearly see the benefits which follow hybrid cloud deployment: increased business agility (46 percent), improved customer service (41 percent) and faster product development (34 percent). However, our study of 500 international IT decision-makers also revealed that the convergence of multiple technologies and service-delivery models means that managing a hybrid IT model has become immensely complex, often hindering their ability to innovate. What used to be in one datacentre now spans geographies, partners, hybrid cloud, legacy systems, and applications.
Now, hybrid IT is the enterprise norm. In fact, Gartner says a massive shift to hybrid architecture services is underway, with 90 percent of organisations set to have adopted hybrid infrastructure management by 2020.
What are the challenges?
A sprawling and diverse IT landscape can make visibility and control of business applications elusive. Multiple, disconnected IT infrastructures cause 'broken' business processes, service disruptions and delays, while managing and securing a mix of IT infrastructures can be costly and complicated, especially when data can reside nearly anywhere. This issue is compounded by the behaviour of individual users or administrators on the network, who may deploy cloud resources without the IT organisation knowing - known as shadow IT.
Additionally, the older an organisation, the more complex IT architectures tend to be. Years of layering applications and incomplete or inconsistent architecture mapping makes older technology harder to scale and slower to change. As a result, enterprises must perform a cost-benefit analysis between the lengthy process of migrating legacy applications to modern infrastructures versus ripping out old systems and building up from the beginning.
What are the benefits?
Hybrid IT provides a wide range of options for delivering IT services. From a strategic standpoint, it enables IT decision makers to align specific technology platforms with specific applications and workloads whilst also meeting the unique needs of different business groups, units, suppliers and customers. It also comes with the benefit of choice, as IT decision makers can distribute architectures over multiple cloud environments, such as Managed Amazon Web Services (AWS) or managed private clouds.
Moving mission-critical applications is a question of ensuring stability during change, i.e. moving to new IT service delivery models while maintaining environments that run core applications and workloads. IT decision-makers should prioritise which applications need to be moved from legacy to more scalable and change-ready platforms. This should be based upon where the value of hybrid IT (i.e. speed, flexibility and efficiency) can most noticeably increase the service provided to end users.
Alessandro Bruschini, Infrastructure manager, Aruba, observes:
According to research from the Observatory of the Politecnico di Milano, in 2018 the use of cloud grew by 18% overall, and by 28% for hybrid and public clouds, compared to 2017. And this resonates with the conversations we are having with our customers and prospects, who are increasingly asking for ‘hybrid cloud solutions’.
This trend is explained by a change in customers’ expectations when they turn to the cloud. In the last few years, the focus has shifted from the type of platform (private or public) to a more pragmatic approach that focuses on flexibility. Previously, when choosing a cloud solution, organisations would first look to see whether it was a private or public solution. Today, they are more interested in flexibility than the exact type of cloud, making it easier for organisations to transition from their own infrastructure to a new and more efficient one.
It’s the hidden cost behind every text message, every Facebook Like, and every Skype call. It’s something that professionals within the data centre industry have been aware of for years, but which is beginning to surface in the public consciousness. And it’s something that keeps data centre managers awake at night. Power.
By Steve Beer, Sector Manager of Energy Solutions at EDF Energy.
Globally, data centres account for around three per cent of total electricity usage each year – nearly 40 per cent more than the entire United Kingdom consumes. By 2025, it is estimated that data centres will consume one fifth of all the electricity in the world. It’s been estimated that data centres emit roughly as much CO2 as the airline industry, so there’s a big carbon footprint to consider as well.
Increasing demand for data may be good news – representing the development of bigger, better and faster data processors in an increasingly connected world – but for data centre managers this development represents a soaring demand for energy.
Whilst the power usage effectiveness (PUE) of data centres – the ratio of total power required to run the facility set against the power directly involved in compute and storage – has improved over the past ten years, energy costs can still make up as much as 70 to 80 per cent of operational expenses. The level of successful improvement in PUE means that the era of quick energy efficiency fixes has passed.
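The PUE ratio described above is straightforward to compute; the following sketch uses hypothetical figures purely for illustration:

```python
# Illustrative PUE calculation (the kW figures are hypothetical).

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# A facility drawing 1,500 kW overall to support 1,000 kW of compute/storage
# has a PUE of 1.5; a perfectly efficient facility would approach 1.0.
print(pue(1500, 1000))  # 1.5
```

The gap between a facility's PUE and 1.0 represents the overhead (cooling, power distribution, lighting) that efficiency programmes target.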
Data centre energy efficiency has been a topic of interest for at least the last ten years, with the European Code of Conduct for Data Centre Energy Efficiency programme – a voluntary initiative created in response to the increasing energy consumption in the sector – dating back to 2008. But the industry now has the opportunity to take a fresh approach, and to make significant steps forward.
Energy management should incorporate the key aims of finding new ways to mitigate costs, improve resilience and reduce data centres’ carbon footprint. Crucially, it should now be seen as a strategic issue as well as an operational one.
Several years ago, we would see data centre managers constantly looking around for different energy suppliers who would be able to provide them with the cheapest rate for power supply contracts. But wholesale costs have increased significantly and this makes pure cost control a difficult game to play.
Those in charge of energy management for data centres are now further ahead in their thinking than most other sectors, and are increasingly looking for added value from their energy providers as they look to progress past the low-hanging energy efficiency fruit that once served them so well.
Appropriately, energy efficiency for data centres has most often started with data. Understanding live energy usage through real-time monitoring helps to identify the simple steps that can be taken to make significant energy savings. The data may reveal behavioural changes that need to be made, such as ensuring lights are turned off when the centre is not manned, or it might suggest a more technical solution, such as installation of an automated temperature controlled cooling system.
When it comes to making the next steps along the energy efficiency journey, one of the key issues data centre managers face is that their business requirements do not accommodate some of the efficiency measures that are open to other types of businesses.
For example, there is growing awareness of the importance of flexibility in energy efficiency. Any business that is able to shift its intensive energy consumption out of peak demand times when energy costs are at their highest, has the opportunity to make significant savings. For example, one tile manufacturer that we have been working with at EDF Energy achieved £45,000 in annual energy savings simply by reacting to our Triad alerts and moving their manufacturing processes out of the peak winter weekday hours of 4pm to 7pm, when energy prices are highest.
Data centre businesses are often less able to shift energy usage out of expensive times, but there are still other ways to start reducing costs. Searching for the places where there may be spare or latent energy capacity is the first step to benefiting from Demand Side Response (DSR). The principle of DSR is that organisations sell their spare energy capacity back to the grid at peak times to help meet demand – a simple measure that both generates revenue and helps the UK’s energy infrastructure.
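The economics of shifting flexible load out of peak-priced hours can be sketched with simple arithmetic; every figure below (load, tariffs, number of peak days) is hypothetical and not taken from the EDF example above:

```python
# Illustrative sketch of peak-avoidance savings (all figures hypothetical).

def annual_saving(load_kw: float, peak_rate: float, offpeak_rate: float,
                  peak_hours_per_day: float, days: int) -> float:
    """Saving from moving a flexible load out of peak-priced hours.

    Energy shifted (kWh) multiplied by the price gap per kWh.
    """
    return load_kw * peak_hours_per_day * days * (peak_rate - offpeak_rate)

# 500 kW of flexible load moved out of a 3-hour winter-weekday peak window
# (4pm-7pm), with hypothetical rates of 45p/kWh peak vs 12p/kWh off-peak,
# over 125 peak days a year:
saving = annual_saving(500, 0.45, 0.12, 3, 125)
print(f"~£{saving:,.0f} per year")
```

The same arithmetic explains why a data centre that cannot shift load misses out: with `load_kw` effectively fixed at peak times, the saving term collapses, which is why DSR revenue from spare capacity becomes the more realistic lever.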
Data centres are often able to increase their energy efficiency using DSR through actions such as using thermal inertia from their servers to generate energy. This can then be used on site or sold back to the grid with the help of an energy aggregator.
The strategic appeal of DSR for data centre managers is not simply the ability to deliver cost savings or create extra revenue streams. It also helps to elevate resilience investments so that they are more than simply risk mitigation measures, becoming revenue-generating opportunities as well.
The Service Level Agreements guaranteed by most data centres mean that power cuts or energy supply issues can quickly lead to serious consequences. One data centre provider was recently fined over a million pounds for an outage of only twelve minutes. Power supply is a business critical issue.
Many companies are investing in backup generators or batteries to bolster their resilience and reduce the risks associated with outages. The opportunity offered by DSR is to turn the latent capacity in a generator or battery into a revenue stream, monetising these assets to create return on investment and to decrease the pressure on data centres’ ongoing energy budgets. The cost of batteries can be recouped in as little as seven years with this approach.
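The payback figure quoted above follows from a simple payback calculation; the capex and revenue numbers in this sketch are hypothetical, chosen only to illustrate how a seven-year figure can arise:

```python
# Illustrative simple-payback sketch for a DSR-monetised battery
# (capex and revenue figures are hypothetical).

def payback_years(capex: float, annual_dsr_revenue: float) -> float:
    """Simple payback: upfront cost divided by the annual revenue it earns."""
    return capex / annual_dsr_revenue

# A £350,000 battery earning ~£50,000 a year from grid services:
print(payback_years(350_000, 50_000))  # 7.0 years
```

This treats the battery purely as a revenue-generating asset; in practice the avoided cost of outages shortens the effective payback further.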
Once a data centre provider has decided to invest in onsite battery capacity, the opportunity to embrace the simpler principles of energy efficiency – such as charging batteries overnight when energy costs are lowest – are back on the table. Many companies are also well-placed to consider onsite renewable energy generation as part of their battery investment. Large, flat roofs are the perfect setting for solar panels or small wind turbines to charge onsite batteries.
With more and more demand, the energy needs of the data centre industry will increase. At the same time, the cost of non-renewable energy is trending upwards. Data centres can move from being passive consumers of energy, and instead become active users, traders and generators of it.
They must now demand more added-value from energy partners, looking for those who are able to help them control, trade and optimise their energy use all in one place.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 7.
Emerging applications for AI will depend on System-on-Chip devices with configurable acceleration to satisfy increasingly tough performance and efficiency demands.
By Dale Hitt, Director Strategic Marketing Development at Xilinx.
As applications such as smart security, robotics, or autonomous driving rely increasingly on embedded Artificial Intelligence (AI) to improve performance and deliver new user experiences, inference engines hosted on traditional compute platforms can struggle to meet real-world demands within tightening constraints on power, latency, and physical size. They suffer from rigidly defined inferencing precision, bus widths, and memory that cannot be easily adapted to optimize for best speed, efficiency, and silicon area. An adaptable compute platform is needed to meet the demands placed on embedded AI running state-of-the-art convolutional neural networks (CNN).
Looking further ahead, the flexibility to adapt to more advanced neural networks is a prime concern. CNNs that are popular today are being superseded by new state-of-the-art architectures at an accelerating pace. However, traditional SoCs must be designed using knowledge of current neural network architectures, targeting deployment typically about three years in the future, from the time development starts. New types of neural networks such as RNNs or Capsule Networks are likely to render traditional SoCs inefficient and incapable of delivering the performance required to remain competitive.
If embedded AI is to satisfy end-user expectations, and – perhaps more importantly – keep pace as demands continue to evolve in the foreseeable future, a more flexible and adaptive compute platform is needed. This could be achieved by taking advantage of user-configurable multi-core System on Chip (MPSoC) devices that integrate the main application processor with a scalable programmable logic fabric containing configurable memory architecture and signal processing suitable for variable-precision inferencing.
Inferencing precision
In conventional SoCs, performance-defining features such as the memory structure and compute precision are fixed. The minimum is often eight bits, defined by the core CPU, although the optimum precision for any given algorithm may be lower. An MPSoC allows programmable logic to be optimized right down to transistor level, giving freedom to vary the inferencing precision down to as little as 1-bit if necessary. These devices also contain many thousands of configurable DSP slices to handle multiply-accumulate (MAC) computations efficiently.
The freedom to optimize the inferencing precision so exactly yields compute efficiency in accordance with a square-law: a single-bit operation executed in a 1-bit core ultimately imposes only 1/64 of the logic needed to complete the same operation in an 8-bit core. Moreover, the MPSoC allows the inferencing precision to be optimized differently for each layer of the neural network to deliver the required performance with the maximum possible efficiency.
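The square-law can be sketched numerically. Treating the logic cost of a b-bit multiply-accumulate as proportional to b² (a simplification that ignores constant overheads), relative costs against an 8-bit baseline come out as follows:

```python
# Sketch of the square-law relationship between inferencing precision and
# multiplier logic area described above (cost modelled as proportional to
# bits squared; constant overheads are ignored for illustration).

def relative_mac_logic(bits: int) -> float:
    """Logic cost of a b-bit multiply relative to an 8-bit baseline."""
    return (bits * bits) / (8 * 8)

for b in (8, 4, 2, 1):
    print(f"{b}-bit inference: {relative_mac_logic(b):.4f} of the 8-bit logic")
```

At 1-bit precision the model gives 1/64 of the 8-bit logic, matching the figure above; halving precision to 4 bits already quarters the logic, which is why per-layer precision tuning pays off so quickly.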
Memory Architecture
As well as improving compute efficiency by varying inferencing precision, configuring both the bandwidth and structure of programmable on-chip memories can further enhance the performance and efficiency of embedded AIs. A customized MPSoC can have more than four times the on-chip memory, and six times the memory-interface bandwidth of a conventional compute platform running the same inference engine. The configurability of the memory allows users to reduce bottlenecks and optimize utilization of the chip’s resources. In addition, a typical subsystem has only limited cache integrated on-chip and must interact frequently with off-chip storage, which adds to latency and power consumption. In an MPSoC, most memory exchanges can occur on-chip, which is not only faster but also saves over 99% of the power consumed by off-chip memory interactions.
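The "over 99%" figure is consistent with widely cited rules of thumb, under which an external DRAM access costs on the order of a hundred times the energy of an on-chip memory access; the picojoule values below are rough illustrative numbers, not figures from Xilinx:

```python
# Rough rule-of-thumb access energies (illustrative, picojoules per access).
ON_CHIP_SRAM_PJ = 5.0     # on-chip memory access
OFF_CHIP_DRAM_PJ = 640.0  # external DRAM access

# Fraction of the off-chip energy avoided by keeping the access on-chip:
saving = 1 - ON_CHIP_SRAM_PJ / OFF_CHIP_DRAM_PJ
print(f"Keeping an access on-chip saves ~{saving:.1%} of the energy")
```

With a ratio anywhere in the 100x region, the saving exceeds 99%, so the dominant design goal is simply maximising the share of memory exchanges that stay on-chip.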
Silicon Area
Solution size is also becoming an increasingly important consideration, especially for mobile AI on-board drones, robots, or autonomous/self-driving vehicles. The inference engine implemented in the FPGA fabric of an MPSoC can occupy as little as one-eighth of the silicon area of a conventional SoC, allowing developers to build more powerful engines within smaller devices.
Moreover, MPSoC device families can offer designers a variety of choices to implement the inference engine in the most power-, cost-, and size-efficient option capable of meeting system performance requirements. There are also automotive-qualified parts with hardware functional-safety features certified according to industry-standard ISO 26262 ASIL-C safety specifications, which is very important for autonomous-driving applications. An example is Xilinx’s Automotive XA Zynq UltraScale+ family, which contains a 64-bit quad-core ARM® Cortex™-A53 and dual-core ARM Cortex-R5 based processing system alongside the scalable programmable logic fabric, giving the opportunity to consolidate control processing, machine-learning algorithms, and safety circuits with fault tolerance in a single chip.
Today, an embedded inference engine can be implemented in a single MPSoC device, and consume as little as 2 Watts, which is a suitable power budget for applications such as mobile robotics or autonomous driving. Conventional compute platforms cannot run real-time CNN applications at these power levels even now, and are unlikely to be able to satisfy the increasingly stringent demands for faster response and more sophisticated functionality within more challenging power constraints in the future. Platforms based on programmable MPSoCs can provide greater compute performance, increased efficiency, and size/weight advantages at power levels above 15W, too.
The advantages of such a configurable, multi-parallel compute architecture would be of academic interest only, were developers unable to apply them easily in their own projects. Success depends on suitable tools to help developers optimize the implementation of their target inference engine. To meet this need, Xilinx continues to extend its ecosystem of development tools and machine-learning software stacks, and to work with specialist partners to simplify and accelerate implementation of applications such as computer vision and video surveillance.
Flexibility for the Future
Leveraging the SoC’s configurability to create an optimal platform for an application at hand also gives AI developers flexibility to keep pace with the rapid evolution of neural network architectures. The potential for the industry to migrate to new types of neural networks represents a significant risk for platform developers. The reconfigurable MPSoC gives developers flexibility to respond to changes in the way neural networks are architected, by reconfiguring to build the most efficient processing engine using any contemporary state-of-the-art strategy.
More and more, AI is being embedded in equipment such as industrial controls, medical devices, security systems, robotics and autonomous vehicles. Adaptive acceleration leveraging programmable logic fabric MPSoC devices holds the key to delivering the responsive and advanced functionality required to remain competitive.
To be sure, the concept of digital transformation isn’t new. The possible benefits of evolving an organisation’s IT strategy have been widely discussed over the past few years. These advantages include the potential to improve corporate differentiation, customer satisfaction and speed-to-market, as well as facilitate growth and scale and boost profitability.
By Joe Garber, global head of strategy and solutions, Micro Focus.
Yet despite all the fervour, very little progress has been made beyond the concept and planning stages so far. According to Gartner, for example, “only one-third of enterprises have managed to reach the scaling stage of digital business.” However, there are four crucial reasons why this will change in 2019.
1) The tangible benefits are clearer
It’s easy to talk at a high level about IT driving positive business outcomes, but it is often more difficult to move from theory to practice. Fortunately, some important macro trends are helping crystallise the opportunity. Corporate CEOs are prioritising revenue advancement again and CIOs are again seeing budgets free up for driving growth initiatives. Plus, some grassroots data points are surfacing from those early adopters of digital transformation solutions which are serving as a catalyst for others. Examples include:
With these types of benefits now being substantiated in the market, it will be much easier for IT executives to build a business case for investment throughout 2019.
2) It’s no longer a “boil the ocean” proposition
Digital transformation touches many parts of the organisation, and success thus far has largely been positioned as requiring a complete overhaul that takes place all at one time. This is a scary proposition for many IT executives, who see the inherent risks of driving so much change at once. Nor are budgets typically carved out to support such a strategy.
The reality is each organisation is unique and will have a different set of prioritised needs. What’s much more desirable is the ability to start small – perhaps with a few core projects, or with one specific department or geography – and then have the ability to add on. This also provides flexibility to pivot the strategy as business needs inevitably evolve.
Fortunately, many technology vendors are listening and packaging up solutions tailored to critical use cases, which are optimised to supplement one another over time. This removes an important roadblock in moving to the execution phase, so organisations can address this opportunity on their own terms.
3) The core elements of success have crystallised
Lewis Carroll once suggested “If you don’t know where you are going, any road will take you there” in Alice's Adventures in Wonderland. This is just as true in IT as it is in literature. Virtually every technology company develops roadmaps to stay focused. While certainly not trivial when constrained to a given project or product, this practice is much more difficult when looking at something as complex and wide-ranging as digital transformation.
Fundamentally, what organisations are trying to do is achieve greater speed and agility, while taking measures to drive the top line and protect the bottom line. This naturally translates to four core focal points:
· Enterprise DevOps – By prioritising DevOps, organisations can reduce operational friction with automation and enhanced collaboration. Additionally, business confidence can be boosted knowing that quality and security are optimised, and better outcomes can be delivered with a level of ongoing assessment and course correction on core business processes.
· Hybrid IT management – Through hybrid IT, services can be delivered on demand and operational and business insights can be generated to both extend existing investments and take advantage of new platforms – from containers, to public clouds and IoT.
· Predictive analytics – By utilising advanced analytics companies are able to learn more about unmet customer needs, under-funded parts of the business, and emerging business models to drive the top line.
· Security, risk and governance – Safeguarding the assets that matter most to organisation - identities, applications and data - is important all the time. However, it is perhaps never more relevant than during a period of transition, when processes and technology are evolving.
By concentrating on these guiding principles and setting measurable milestones against each, organisations will be better prepared to achieve lasting success.
4) Existing investments and processes can be leveraged
In many circumstances, an organisation has made significant investments in technology that are driving real benefits. IT professionals know that ripping out these systems and starting from scratch may have hidden downstream implications in terms of risk (such as breaking tried-and-true processes), cost and downtime. Instead, they are looking for a strategy that allows them to modernise existing technology and extend current investments with software – in other words, bridging the old and the new. Today, digital transformation technology is available that is more interoperable with legacy systems. This will put the organisation in a much stronger position to run and transform the business.
2019 is the year that digital transformation will shift to the mainstream. Much has been said about the advantages it can bring – and organisations have taken notice. In fact, Gartner has stated that “only 4% of organisations have no digital initiative at all.” However, while very little progress has been made beyond the concept and planning stages so far, this is expected to change as the why, the what and, importantly, the how become clearer. As a result, organisations will feel more empowered to move forward on what could be their most significant IT journey to date.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 8.
Mike Cockfield, Managing Director, Khaos Control, outlines the journey to hybrid IT without compromise.
IT deployment is in transition. Traditional on premise models lack the speed and agility required, yet the cloud isn’t suitable for every application or workload. In theory, a hybrid approach offers a unique alternative that blends the most appealing aspects of both. But running one application on premise and another in the cloud throws up any number of compromises, from data integration and synchronisation to security and compliance.
Furthermore, with vendors’ on-premise and cloud solutions typically having nothing in common but the product name, companies are forced to take a leap of faith. Does the subscription cloud model outweigh the value of the bespoke on-premise development? Is the agility provided by easily adding or removing sales channels, such as eBay, worth compromising existing proven back office processes and business logic? Will moving to the cloud result in a complete loss of control?
It doesn’t have to be that way.
Inexorable Shift
The move towards cloud-based software deployment appears inexorable, as companies look to exploit the inherent flexibility offered by the subscription-based model. From minimising up-front capital investment (Capex) to speed of deployment and anytime, anywhere use, the cloud promises low cost, low risk access to new technology, with none of the traditional IT concerns regarding backup and security.
Yet while cloud migration is without doubt a central tenet of any digital transformation strategy, few companies are in a position to make a wholesale switch from on premise to the cloud. Indeed, most experts agree that it will be a decade at least before the majority of organisations achieve a complete, 100% cloud model. From concerns regarding security to the need to retain bespoke software developments and, of course, on-going economic uncertainty and fears for business disruption, companies have myriad reasons to retain core business applications on premise.
For the time being, therefore, most companies are deploying a hybrid IT model, balancing on-premise with cloud solutions. But how hybrid is hybrid? In the vast majority of cases organisations opt to run one application, such as ERP, on premise and another, such as CRM, in the cloud. While this ticks the hybrid box and enables organisations to assess the generic pros and cons of each deployment model, it doesn’t actually address the primary concern – namely the risk associated with moving a core business application to the cloud – and in many ways it creates a host of new challenges.
Disjointed Hybrid Models
These disparate solutions are not integrated – which means that if companies want to create consistency between data sets, they are reliant upon notably unreliable synchronisation routines. Furthermore, this disjointed hybrid model creates a confusing working environment for staff. Sales people using cloud-based CRM, for example, will gain huge benefit from real-time, continuous mobile access to customer contact data, and will demand similar access to the ERP system to enable immediate, anytime order placement. Yet when the ERP is on premise, the only way to achieve secure remote access is through a virtual private network (VPN), which is far from satisfactory: the user experience will not be designed for mobile workers, especially those attempting to access information via a mobile phone.
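The fragility of such synchronisation routines is easy to demonstrate. The sketch below uses a hypothetical record structure (not any vendor's actual sync code) to show how a naive last-write-wins merge between two independently updated databases silently discards one side's change:

```python
# Minimal illustration of why periodic synchronisation between two
# separate databases (e.g. a cloud CRM and an on-premise ERP) is fragile:
# a naive last-write-wins merge silently discards one side's change.
# Record fields and timestamps are hypothetical.

crm = {"cust_42": {"phone": "0123", "updated": 10}}  # edited in the CRM
erp = {"cust_42": {"phone": "0999", "updated": 12}}  # edited independently in the ERP

def sync_last_write_wins(a, b):
    """Keep whichever version of each record carries the later timestamp."""
    merged = {}
    for key in set(a) | set(b):
        ra, rb = a.get(key), b.get(key)
        if ra is None:
            merged[key] = rb
        elif rb is None:
            merged[key] = ra
        else:
            merged[key] = ra if ra["updated"] >= rb["updated"] else rb
    return merged

merged = sync_last_write_wins(crm, erp)
print(merged["cust_42"]["phone"])  # "0999" – the CRM edit is silently lost
```

A single shared database sidesteps the problem entirely: there are no two copies to reconcile, so no edit can be lost in a merge.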
Indeed, the user disruption is not just associated with the deployment model: in the vast majority of cases, the cloud version of an application bears little resemblance to the on premise solution. The code is different. The functionality is different. The user interface is different. And, critically, none of the bespoke developments are available. These two products are the same in nothing more than name. And that creates its own challenges – not least in user acceptance and training.
Essentially, even if a company sticks with the same vendor, when moving from an on premise to cloud approach, there is far more to the decision than Capex versus Opex and reduced IT overhead. Core issues such as operational processes and control must also be assessed and resolved.
In many ways, it is this stark either/or choice that is delaying the cloud evolution. For many companies the risk is just too big; the upheaval too disruptive.
True Hybrid
So, what are the options? Where is the true hybrid approach that offers a seamless and phased migration of the same product from on premise to the cloud (and back again, if strategy changes)? When it comes to systems for ERP and multi-channel retailing, it may surprise many people to discover there are just two solutions that offer this true hybrid IT model: one from industry behemoth Microsoft, and one from small Lincolnshire software developer Khaos Control. These two companies provide a single solution that can be deployed across on premise and cloud – as well as ecommerce. Essentially this is one database, one set of code and one set of business logic.
The value? Essentially the same business information and logic is automatically deployed across the entire solution, irrespective of location. With one database shared by back office, ecommerce and mobile/ remote staff, customer information is always up to date; pricing is always consistent; and product availability always accurate. Critically, the solution can be deployed in line with business needs without incurring additional risk – essentially a company can explore the value of secure cloud access for sales staff, while using the same system to provide content management for the ecommerce site and delivering the familiar back-office functionality on premise.
With one source of information there is no need for flaky synchronisation routines; while remote staff can have immediate and secure access to deep information resources, with no need for clumsy VPNs. GDPR and PCI DSS compliance processes are not compromised, and by removing reliance on VPNs data security is significantly enhanced.
Furthermore, for businesses this provides a clear migration path, with both lower risk and greater agility. For example, a business can trial a sales channel such as eBay or Amazon, to extend the multi-channel mix using the cloud based subscription model. If it works, great; if it is not the right platform for the market, then simply turn it off and try a different channel. Critically, this can all be achieved without the upheaval of changing the core ERP solution. In this hybrid model, any existing bespoke developments are retained; indeed, additional development is still an option.
Retaining Control
IT deployment is clearly in transition, and there is no doubt that some form of hybrid model will dominate for the next decade at least. Yet so many of the hybrid deployments in place are far from ideal. They do not offer the ‘best of both worlds’, instead are forcing companies to make compromises and risk losing control.
In contrast true hybrid IT solutions that deliver one solution across on premise, cloud and ecommerce really do provide the agility and flexibility businesses seek – without the compromise of lost control. In addition to the inherent data control delivered by a single database solution, companies can retain proven business processes and continue to leverage existing business logic whilst exploring the value of cloud based deployment for certain areas of the business – such as the sales team.
Essentially, with the right approach, hybrid IT can provide a business with a safety net, facilitating an experimental and phased approach to cloud migration without the need to make high risk, wholesale decisions regarding either solution or deployment model. No leap of faith required.
Identity-related breaches continue to be a major problem that plagues organisations. According to the Verizon Data Breach Investigations Report 2018, compromised user credentials are still a primary cause of breaches and the rate isn’t slowing down.
By Karl Barton, Head of International Channels and Alliances at SecureAuth.
Despite an increase in security spend, identity security focus is lagging. Bad actors increasingly discover new tactics and ways around traditional authentication methods, yet some organisations continue to use old and vulnerable methods such as password-only authentication or basic two-factor authentication (2FA). To combat the identity-security issue, forward-thinking organisations are turning to modern authentication techniques and actionable analytics to maximise security while enabling users and the business.
The importance of identity security
Along with network and endpoint, identity is the third pillar of an effective cybersecurity strategy and should be treated as such. With 81% of breaches due to compromised credentials, it is absolutely essential that login attempts are assessed to ensure the person is who they say they are and has the associated privileges to access sensitive and valuable information, whether working on premises or logging in remotely. Authenticating the identity of the individual is ‘ground zero’ of security, especially as attackers posing as legitimate employees tend to lurk in networks and endpoints, escalating privileges and moving laterally to complete their mission.
Using a password-only method of authentication is not a secure option; in the past it was fortified by adding a second factor, or two-factor authentication (2FA). However, as seen in recent headlines, 2FA is no longer sufficient in the face of sophisticated attackers. Answers to knowledge-based questions can easily be socially engineered, hard tokens can be compromised, popular push notifications have been routinely falsely accepted, and one-time passcodes delivered via SMS/text can be spoofed. Effective security against today’s evolved attackers requires advanced techniques that go beyond 2FA, providing maximum security while maintaining a frictionless and seamless user experience.
Even the most robust defences will fail if the user experience is not considered and prioritised. If measures excessively restrict user access or if restoration of access is time-consuming and arduous, then workarounds will inevitably be introduced to address frustration. From a business perspective, devoting resources to restoring accounts can negatively impact productivity and can be expensive with calls to the help desk.
An adaptive approach in action
As identity is at the centre of many data breaches, an approach that focuses on understanding and protecting identity is critical. Forward-thinking organisations are securing points of access by using adaptive access control. These methods analyse multiple factors to assess the risk associated with a login and determine whether it should be granted, stepped up to require further proof of identity, or denied. Adaptive authentication methods perform risk checks that run in the background without interfering with the user’s experience. For example, they look at factors such as geo-location (is the user in a known ‘bad’ location, or one that is out of the ordinary for them, such as a different country?), device recognition (is the user on a new, unknown or known-stolen device?) and IP address (is the IP address associated with anomalous internet infrastructure used by attackers?) to verify and authenticate an individual. These techniques strengthen security, detect risk and work invisibly to the user, thwarting attacks and rendering stolen credentials useless.
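The background risk checks described above can be sketched as a simple scoring function. The factors mirror those in the text, but the weights, thresholds and field names are illustrative assumptions, not SecureAuth's actual implementation:

```python
# Illustrative sketch of adaptive (risk-based) authentication scoring.
# Factor names, weights and thresholds are hypothetical examples,
# not any vendor's actual product logic.

def risk_score(login):
    score = 0
    if login.get("country") not in login.get("usual_countries", []):
        score += 40          # geo-location anomaly
    if not login.get("known_device", False):
        score += 30          # unrecognised device
    if login.get("ip_flagged", False):
        score += 50          # IP linked to anomalous attacker infrastructure
    return score

def decide(login, step_up_at=30, deny_at=80):
    """Return 'allow', 'step_up' (request another factor) or 'deny'."""
    score = risk_score(login)
    if score >= deny_at:
        return "deny"
    if score >= step_up_at:
        return "step_up"
    return "allow"

# A repeat, low-risk user sails through invisibly:
trusted = {"country": "GB", "usual_countries": ["GB"], "known_device": True}
print(decide(trusted))   # allow

# A login from a new country on an unknown device is challenged:
risky = {"country": "RU", "usual_countries": ["GB"], "known_device": False}
print(decide(risky))     # step_up
```

The key property is that the checks run entirely server-side: the trusted user never sees them, while the risky login is stepped up or denied before any stolen credential can be used.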
SecureAuth worked with a large UK-based financial services enterprise to secure and protect its customer portals, offering authentication that adapted to the user’s needs and preferences – for instance, by using demographic information to present the most appropriate authentication method based on market research. In addition, repeat users enjoyed a frictionless experience, without repeat login requests prompting them for further proof of identity, as they were low risk. This greatly reduced the number of times credentials were requested and improved the overall user experience, highlighting how, with modern authentication approaches, increased security doesn’t have to impact user experience.
The path to passwordless
Regardless of how corporate risk levels are defined, an adaptive approach to authentication adjusts to the risk level in question. Authentication data is evaluated to determine whether to allow a request for a resource, interaction or transaction, and to request additional authentication factors if a suspicious login attempt is detected. Adaptive authentication can eliminate the need for a password and, when paired with a mobile app and fingerprint biometrics, paves the way for a passwordless enterprise. As the threat landscape changes, security must change with it and stay ahead of it, in a way that keeps organisations sustainable and agile – and one step ahead of threat actors.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 9.
According to Iain Shearman, MD, KCOM NNS.
Edge computing is the critical enabler of cloud-based applications and is the key to providing the fast processing speeds necessary to take advantage of internet of things (IoT) applications. There’s no denying the need for the cloud, but those businesses who restrict themselves to its capabilities could be wasting their investment, not to mention enduring detrimental inefficiencies.
A hybrid IT landscape combines the best of both the centralised cloud and the decentralised edge platform. It’s a pairing that enables a business to tailor how it structures its systems for maximum IT performance – distinguishing, for example, between processes that require heavy data processing, tackled by the cloud, and those that require immediate attention.
Edge data centres can take many forms, but generally they fall into one of three categories: local devices that serve a particular purpose; smaller localised data centres with notable processing and storage capabilities; and regional data centres that serve a large but localised population. It is not the size of the data centre that matters when it comes to the edge, but rather its proximity to the source of the data that requires processing, or to the people consuming it. This proximity solves the issue of latency, and businesses can site edge data centres wherever they are needed for regulatory compliance.
By reducing long-distance communication, edge computing results in low latency and ultrafast responses enabling almost instantaneous decision making or presentation to the end-user - online retail purchases are a typical example of where fast, seamless connection is required to complete processes, satisfying the customers’ short attention spans.
A hybrid IT infrastructure, one which shares the workload between the cloud and the edge, enables quicker processing, drives operational efficiency and takes the strain off a company’s underlying network infrastructure. The need for both infrastructures, the cloud and the edge, still exists, but it is the latter that will unlock IoT applications and with that drive not only new revenue streams but entirely new businesses.
Comment from Dan Pitman, principal security architect at Alert Logic:
“It can be said that all organisations operate hybrid infrastructure or services in some sense. From a cost and operational model, managing end-user systems such as desktops and mobiles, along with hosting systems to provide applications to those users, has always had distinct challenges. The introduction of cloud, be it software, platform or infrastructure-as-a-service, brought about the most significant opportunity for digital transformation. It is the ability to cheaply and rapidly innovate that is essential for digital transformation, from an infrastructure and platform point of view.
The hybrid infrastructure state is likely representative of an extended period of transition away from traditional infrastructure. Running systems on-premises or in managed services can be very effective for digital transformation projects – even required at times – but hybrid infrastructures, mixtures of traditional hosting or on-premise systems with cloud services, are very often a have-to-have rather than anything else. This can be due to budgetary or technical challenges, or just a lack of willingness to push forward and take some risks where necessary. In some cases, security is cited as a reason that not everything can be migrated elsewhere.
It is arguable that hybrid systems widen the attack surface and make organisations inherently less secure. The effort required to manage the underlying systems and infrastructure that potentially similar applications depend on goes up significantly, and it becomes harder to source monitoring, change and testing systems that fit the different environments. Hybrid ultimately introduces complexity, and complexity impacts people, process and technology in different but equally detrimental ways. Authentication and identity systems that span the hybrid building blocks increase risk further.
One significant problem is the attachment of transformation to what are effectively migration projects. Movement of systems is error prone at best and organisations must devote time to rigorous security testing as well as the migration, which leaves little room for transformation in that first phase. It is better to have a stable platform to build upon when transforming something and nothing does digital transformation a disservice more than failure of digital systems that are supporting it.”
Chris Fielding, CTO, Sungard Availability Services, believes that IT decision makers can clearly see the benefits which follow hybrid cloud deployment: increased business agility (46 percent), improved customer service (41 percent) and faster product development (34 percent).
“However, our study of 500 international IT decision-makers also revealed that the convergence of multiple technologies and service-delivery models means that managing a hybrid IT model has become immensely complex, often hindering their ability to innovate. What used to be in one datacentre now spans geographies, partners, hybrid cloud, legacy systems, and applications.
Now, hybrid IT is the enterprise norm. In fact, Gartner says a massive shift to hybrid architecture services is underway, with 90 percent of organisations to have adopted hybrid infrastructure management by 2020.
What are the challenges?
A sprawling and diverse IT landscape can make visibility and control of business applications elusive. Multiple, disconnected IT infrastructures cause 'broken' business processes, service disruptions and delays, while managing and securing a mix of IT infrastructures can be costly and complicated, especially when data can reside nearly anywhere. This issue is compounded by the behaviour of individual users or administrators on the network, who may deploy cloud resources without the IT organisation knowing - known as shadow IT.
Additionally, the older an organisation, the more complex IT architectures tend to be. Years of layering applications and incomplete or inconsistent architecture mapping makes older technology harder to scale and slower to change. As a result, enterprises must perform a cost-benefit analysis between the lengthy process of migrating legacy applications to modern infrastructures versus ripping out old systems and building up from the beginning.
What are the benefits?
Hybrid IT provides a wide range of options for delivering IT services. From a strategic standpoint, it enables IT decision makers to align specific technology platforms with specific applications and workloads whilst also meeting the unique needs of different business groups, units, suppliers and customers. It also comes with the benefit of choice, as IT decision makers can distribute architectures over multiple cloud environments, such as Managed Amazon Web Services (AWS) or managed private clouds.
Moving mission-critical applications is a question of ensuring stability during change, i.e. moving to new IT service delivery models while maintaining environments that run core applications and workloads. IT decision-makers should prioritise which applications need to be moved from legacy to more scalable and change-ready platforms, based upon where the value of hybrid IT (i.e. speed, flexibility and efficiency) can most noticeably increase the service provided to end users.”
Analytics at the edge - Why IoT can’t end up the same as Big Data
IoT has yet to deliver on its promises; if this is not one more technology doomed to disappear down the same path of failure as Big Data, IoT deployment needs a rethink. Peter Ruffley, Chairman at Zizo, outlines the value of an edge analytics model that delivers both context and actionable insight from IoT.
Fundamentally flawed
The promise of IoT appears to have become its Achilles Heel. Vast armies of sensors transmitting invaluable data in real time that can be rapidly analysed and then used to optimise performance sounds compelling – but the vast armies create vast quantities of data that cannot be transmitted to a central repository in real time, cannot be rapidly analysed and the results cannot then be used in any intelligent or timely fashion. The sheer scale of IoT is overwhelming – and, as IoT is being deployed right now, it could easily become the next ‘Big Data’ fiasco, a win for no one but the IT industry.
Every aspect of the current IoT model as it stands today is flawed. Firstly, it is simply not practical – or affordable – to transfer all of the data to a central repository. And while companies may think the advent of fibre and 5G will solve that problem; think again. Bandwidth will never keep pace with the volume of data generated by IoT devices.
Secondly, even the data scientists presiding over these vast data resources are somewhat befuddled. What, exactly, is the value of taking all the data from, for example, a petrol station and creating an incredibly accurate and in-depth view of the ‘average’ petrol station? Especially when that analysis will take hours or days, rather than the minutes or, preferably, seconds required to deliver tangible value. The promise of IoT was to leverage local data to drive local decisions and improve local performance – not to create another huge data resource that at best adds to the depth of overall business intelligence.
So where, exactly, is the data value from IoT?
Rethinking IoT
It is time to take a huge step back and consider: ‘what is the business attempting to achieve with IoT?’ The goal is to enable better, faster and more informed local decision making – both automated through machine learning and to provide individuals on the front line with the information they need to make instant decisions.
And that means analysing data where it is created. Certainly, the concept of edge computing is gaining ground – with the goal to better manage the IoT data where it is created and only transmit the most relevant data to the centre for analysis to address bandwidth challenges. But what is the value of simply managing data at the edge? Yes, that addresses the data transfer issues but how does it support any of the essential real-time decision making required to drive tangible value from the, often significant, IoT investment?
It is the ability to analyse data at the edge – effectively on site – that opens the door to significant new opportunities. Just consider the value of providing each individual petrol station manager with immediate insight into the operational performance of that petrol station in real- or near real-time. Or the highly skilled petrochemical engineer working on a remote site, who can make critical data driven decisions based on the actual events occurring. The ability to collect and analyse data at the edge fundamentally changes the way IoT can be leveraged – and provides the opportunity for IoT deployments to, finally, realise their business goals.
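A minimal sketch of that edge-analytics pattern: analyse raw readings where they are created, and transmit only a compact summary plus any anomalies to the centre. The field names and threshold are illustrative assumptions:

```python
# Sketch of the edge-analytics pattern: raw sensor readings are reduced
# on site to a small summary payload, so only a handful of numbers (and
# any anomalies needing central attention) cross the network.
# Field names and the anomaly threshold are illustrative.

def summarise_at_edge(readings, anomaly_threshold):
    """Reduce raw readings to a summary dict plus a list of outliers."""
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only these need escalation to the centre
    }

# 10,000 raw pump-pressure readings stay on site; a few numbers travel.
readings = [1.0] * 9998 + [9.5, 9.7]
payload = summarise_at_edge(readings, anomaly_threshold=5.0)
print(payload["count"], len(payload["anomalies"]))  # 10000 2
```

The site manager gets immediate local insight from the full data set, while the central repository receives a payload thousands of times smaller – addressing both the bandwidth problem and the timeliness problem at once.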
Cloud Foundry Foundation’s Executive Director Abby Kearns comments:
Whether a startup, an established organisation or Fortune 500, more enterprise organizations than ever are analyzing their current technology portfolio and defining a cloud strategy that encompasses multiple cloud platforms, whether as part of a disaster recovery situation or to take advantage of the financial differences between hosting applications in different cloud infrastructures.
Our recent research showed that technologies are today being used side by side more than ever before. IT Decision Makers report 77 percent are using or evaluating Platforms-as-a-Service (PaaS), 72 percent are using or evaluating containers and 46 percent are using or evaluating serverless computing. More than a third (39%) of IT decision makers use PaaS, containers and serverless side-by-side for the flexibility they need, thanks to these cloud technologies’ interoperability. This multi-platform strategy applies to developing both new cloud native apps and refactoring legacy apps.
How is this critical to digital transformation? More and more enterprises are becoming software-led. Car companies are moving towards autonomous cars full of software, the hotel industry is being disrupted by AirBnB. In other words, countless industry sectors are being – or have been – disrupted by software-led companies. More established companies need to integrate digital technologies into their business practices to modernize the way they operate. Cloud applications enable direct interaction with customers, and this new model calls for changes to the way code is written, deployed, and updated, how it’s kept secure, and how it’s scaled.
By decoupling application development from infrastructure, organizations can make a business decision on where to host workloads – on premise, in public clouds, or in managed infrastructures. Organizations can leverage the cloud platform that suits specific app workloads, and move those workloads as they see fit, providing flexibility, consistency, and choice.
IT Decision Makers are becoming more agile in their usage of cloud native tools, as they search for a suite of technologies that function interoperably with their current environments but that are flexible enough to address their needs in the future. This multi-platform approach is likely to accelerate over the coming months as companies seek the best of all worlds.
Cloud computing is well established as the go-to technology solution for the enterprise. Analyst firm IDC predicts that by 2020, 60% of all IT expenditure will be cloud based. Even if they haven’t deliberately implemented it yet, with Shadow IT an ongoing reality, most businesses will already use some cloud software as employees choose digital solutions that they are familiar with.
By Tom Adams, Director of Product Marketing, CogecoPeer1.
Each business ends up taking a unique and complex route on their journey to the cloud. However, it isn’t a singular destination – different workloads will likely move at different times and potentially to different platforms. A business is unlikely to ever complete its journey due to the changing nature of business priorities and the advances in technology.
Cloud computing is a crucial catalyst which can enable business and digital transformation. Cloud has the potential to underpin everything a business does and when managed right, can make a company more agile, more efficient and support growth. In order to have a successful journey to the cloud, there are five key steps a business should follow:
Step 1: Find your North Star
The most vital element of a cloud migration strategy is to understand the key business drivers. This could be anything from capacity needs, security or software end of support. Is the decision to migrate one, some, or all applications to the cloud a business trigger or a technology trigger? If it’s a hardware refresh, this will be a completely different set of objectives for migration than considering a business angle of, for example, deploying a new service to customers or entering a new geographic market.
Assess, migrate, optimise: this is the basis for the journey to the cloud. Consider:
In answering these questions, and fully assessing the business needs for migration, it will enable the company to create sound, solid business objectives that act as the North Star for the whole project.
Step 2: Explore the drivers
The drivers are always, without fail, complex. No company has a simple ‘we’re going to migrate’ approach. Some companies have started the journey before they realise it with SaaS applications and, as mentioned, shadow IT. The drivers could be any of the following, and many more besides:
Cloud is not a straightforward journey – a business is apt to take some side trips or rests along its way. By doing a full assessment, then exploring, examining and pulling apart the drivers, the person leading the cloud journey should be able to keep the company on track to its North Star.
Step 3: Ignore the people factor at your own risk
Alongside technology, people are at the heart of any transformation. With any significant change you will encounter naysayers - those who don’t see the value in the proposed change and/or think everything is fine as is or those that think they know a better path. There are lots of points of view to consider. This is where the cloud journey leader needs to take charge and do so with the backing of senior management. Putting in place some simple change management steps will go a long way to smoothing the path forward:
Having the right people involved is a vital component to a successful journey as well as ensuring all stakeholders are heard. A change management program allows people from all levels to get involved. It limits the risks of shadow IT creeping in due to a lack of understanding or awareness.
Step 4: Consider the six Rs
Once the drivers are understood and the concerns have been worked through, it’s time to decide on the type of journey the business wants to embark on. There’s lots written about the various stages, but here are six basic approaches.
Step 5: How to migrate with confidence
The biggest step is the first one: understand the business objectives fully. Decide on your North Star and this will keep you on track throughout the whole process.
Planning, planning, planning – make sure discovery and assessment is a key part of the project – e.g. which workloads are going to be moved? But, don’t spend months and months in this phase and forget the flexibility! The cloud journey leader needs to use a holistic approach to ensure solution analysis and architecture encompass application availability. Additionally, they need to ensure the right people are involved in the journey:
Consider the personnel aspects – is there a high level of expertise in house? Does the cloud journey manager have the skills to see this through in its entirety? Even if they do – do they have the bandwidth? If not, partner with experts who can guide the business on this journey. It’s worth noting that while a typical business will likely only perform a cloud migration a handful of times, a well-chosen partner will bring to bear years of experience across diverse environments and customer challenges.
What’s next on the journey?
Cloud computing is coming for every business - if it hasn’t happened already. There isn’t a ‘copy and paste’ approach which will work for all businesses. Each business’ journey to the cloud will be determined by their unique position, existing adoption levels and appetite for change. The most important thing is to identify and understand the North Star – success should be pinned against this. From there, the twists and turns of each journey are to be explored as they come.
Hybrid IT infrastructure - the only game in town? And how important is the edge going to become? The DW March issue includes a Special Focus on Hybrid IT and the Edge, with a mixture of articles and comment from leading figures working in the IT industry. Part 10.
says Marcin Bednarz, Product Manager, Telco at Canonical - the company behind Ubuntu.
Just 10 per cent of enterprise-generated data is created and processed outside of a traditional centralised datacentre or cloud, according to Gartner. However, this is set to rise to 75 per cent by 2022 – as mobile and embedded devices increasingly feature greater amounts of computing power, thanks to the introduction of VR, AR and AI. As they move towards the future, telcos have huge revenue expectations for these disruptive technologies. Many say that the large quantity of data these technologies produce should be managed on-location. The spike in data generated and managed at the edge has triggered discussions around whether this means the end of the cloud era for telcos.
As with the introduction of any disruptive technology, apprehension around cannibalisation is unreasonably high. The same is true for edge and cloud computing. The reality is that the two will not only co-exist, but collaborate to share out varying data tasks, facilitating a portfolio of new technical capabilities. Several operators have thrown their weight behind edge computing, including Reliance Jio, Telstra, AT&T and Deutsche Telekom, among others.
Telcos will benefit greatly from combining the two technologies to unlock both real-time applications and the analysis of large data sets for critical data mining. With an increasingly demanding customer base, and amid fierce inter-operator competition, a unified edge-cloud computing strategy will provide new use cases that can be easily deployed across different markets and scaled to meet network requirements.
To support telcos in determining how they divide tasks between the edge and the cloud, there are three factors which they will need to consider:
How fast should it be?
Edge computing has been built up as the next step towards removing latency. To guarantee that emerging technologies like VR and AR work seamlessly and to decrease latency, telcos must push compute and storage resources closer to the end-user. Similarly, with the huge stress that these activities will place on the network, operators should consider bandwidth optimisation tools, such as edge, to help avoid critical bottlenecks on the network.
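The latency argument above comes down to simple physics: light in fibre covers roughly 200 km per millisecond, so every kilometre between the user and the compute resource adds delay that no software can remove. The sketch below, with illustrative distances we have assumed for a remote cloud region and a metro edge site, shows the shape of the saving (propagation delay only; it ignores queuing, processing and transmission time):

```python
# Back-of-the-envelope round-trip propagation delay.
# Signal speed in optical fibre is roughly two-thirds of c,
# i.e. about 200,000 km/s, or 200 km per millisecond.
C_FIBRE_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds (propagation only;
    queuing, processing and transmission delays are ignored)."""
    return 2 * distance_km / C_FIBRE_KM_PER_MS

# Illustrative distances (assumptions, not measurements):
print(round_trip_ms(2000))  # cloud region ~2,000 km away -> 20.0 ms
print(round_trip_ms(50))    # metro edge site ~50 km away -> 0.5 ms
```

Twenty milliseconds of unavoidable round-trip delay is already problematic for VR and AR, which is exactly why the compute has to move closer to the user rather than the other way round.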
How cost-effective is it?
As data is now being created at an unparalleled rate, telcos must look at how economical it is to transfer data from the edge to the cloud and look at the cost effectiveness of pre-processing data locally. Appropriate workloads, such as those that aren’t subject to challenging latency needs, should carry on being served by the most optimal cloud solutions possible to save bandwidth costs.
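The edge-versus-backhaul economics described above can be made concrete with a toy model. All figures below (data volume, per-GB transfer price, edge compute cost, reduction factor) are assumptions chosen purely to illustrate the shape of the trade-off:

```python
# Hypothetical comparison: ship all raw data to the cloud, or
# pre-process (filter/aggregate) at the edge and ship the remainder.

def backhaul_cost(raw_gb_per_month: float, transfer_cost_per_gb: float) -> float:
    """Monthly cost of transferring all raw data from edge to cloud."""
    return raw_gb_per_month * transfer_cost_per_gb

def edge_preprocess_cost(raw_gb_per_month: float, transfer_cost_per_gb: float,
                         reduction_factor: float, edge_compute_cost: float) -> float:
    """Monthly cost of reducing data locally, then shipping the residue."""
    return (raw_gb_per_month / reduction_factor) * transfer_cost_per_gb + edge_compute_cost

raw = 50_000    # GB generated per site per month (assumed)
per_gb = 0.05   # transfer cost per GB (assumed)

print(backhaul_cost(raw, per_gb))                   # -> 2500.0
print(edge_preprocess_cost(raw, per_gb, 100, 400))  # -> 425.0
```

Under these assumed numbers, a 100:1 reduction at the edge pays for its own compute several times over; workloads with no such reduction opportunity, or no latency pressure, stay on the cloud side of the ledger.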
Where should it be stored?
As cities become smarter, and autonomous sites and local content caching more common, data legislation has never been so granular. As telcos expand their reach, data governance is a growing consideration which will make edge computing essential for developing telcos.
Marrying the cloud and edge
Considering this, operators are progressively attempting to unlock the potential of edge by “cloudifying” their infrastructure. This is happening in two ways. The first is through telcos acquiring edge compute power from cloud providers. By collaborating with open infrastructure cloud providers directly, operators can develop a flexible architecture which provides many layers of edge suited to a wider portfolio of use cases. The second is by partnering with hyperscale data centres and co-location providers to speed up the delivery of edge-supporting infrastructure.
In any case, edge computing begins with establishing a physical infrastructure. Within this, open source tools like Metal as a Service (MaaS) deliver cloud-style provisioning. This can give telcos much greater agility from their servers to manage multiple types of edge workloads, whether designed for bare metal, virtual machines or full containerisation. The use of cloud management tools by the industry is still in the early stages. Partnering with a cloud solutions provider represents a huge opportunity to manage infrastructure from bare metal up to the cloud, while relying on current distributed network assets to strategically place latency-dependent functions nearer to data sources and users. Through the exploitation of their own networks, existing distributed infrastructure and customised network APIs, telcos are distinctively placed to deliver on-premises computing.
With competition among operators at fever pitch, an overarching approach to edge will support vendors in increasing network agility, driving innovative new use cases at the edge and reducing the costs linked to creating these new services.
Julia Fraser, Vice President Sales, UK & Ireland, CenturyLink, comments:
“A hybrid network is essential to a smooth digital transformation. Central to any digital transformation strategy is the network. This is the nervous system for the entire organisation. Employing an agile, secure and reliable network is key to achieving a connected enterprise model, which seamlessly brings together people, processes and devices – no matter where they are. Getting this fundamental platform right simplifies the road to digital transformation. The ideal path to reach this goal is through hybrid networking – the convergence of on-premises IT, private and public cloud services.
Hybrid networking is a comprehensive process that evaluates business requirements for specific IT workloads, applications and data to determine the optimal networks, whether cloud, enterprise or both. A hybrid network environment affords businesses five critical benefits:
1. Automated access to cloud services – businesses need to be able to deploy cloud resources quickly and efficiently. Previously, when employees wanted to use additional IT resources, they would have to access hardware and software from a centralised IT department. Now employees can head straight to the cloud for what they need speedily, without being constrained by limited IT budgets and resources.
2. Retain low-cost workflows on-premises – often, it makes perfect sense to retain relatively static workflows on low-cost, on-premises data centres using commodity hardware. This hybrid approach allows IT departments to focus on innovation, without having to spend time migrating low-value applications to the cloud for minimal benefit.
3. Integrate IT workloads into a single security perimeter – IT workloads running in separate environments rely on different routing strategies, IP addressing schemes, quality-of-service guarantees and security protocols. However, to achieve optimum performance of workloads, while ensuring security and reliability, a tight integration between the infrastructure provider and the network provider is required.
By incorporating cloud workflows into the same security framework as on-premises workflows, enterprises can better protect themselves against vulnerabilities and breaches. It’s also easier to demonstrate compliance to regulators and auditors when there’s a single and consistent framework with overriding security.
4. Reduce IT spending – historically, physical infrastructure for networking services such as network-based security and DDoS mitigation required significant capital expenditures, along with ongoing operating expenses and costly management overheads.
Today, enterprise networks are becoming much more dynamic. Leading network providers are extending the on-demand nature of cloud purchasing to include network-based services. Businesses can add features on-demand, without needing to purchase and install dedicated appliances.
5. Free up IT innovation – streamlined access to cloud-based resources helps the IT department deliver higher-value solutions to the business, taking advantage of the incredible innovation happening with services available on cloud platforms.”
Defining ‘digital transformation’ is important in answering whether or not hybrid IT is essential to the process, says Marc Bouteyre – Senior Product Line Manager of SD-WAN at Ekinops.
Greater automation, not clear-cut virtualization, far more accurately encompasses what the industry is ultimately trying to achieve. Enabling systems to process and programme more responsively than manpower alone ever could is essential to meeting rising service demands. By that definition, hybrid IT solutions are by far the most compelling, and currently the most feasible, stepping stone towards it.
Navigating the challenges of upgrading cumbersome, expensive-to-replace and automation-resistant legacy equipment has been the goal of, and central to the success of, hybrid IT. Pure SDN may be the textbook answer to the digital transformation problem, but with resources squeezed, hybrid solutions have offered access to automation that can run over existing infrastructure. Service providers are already playing catch-up to deliver new services – the choice of hardware that delivers these matters very little to them.
The networking landscape is increasingly application-centric and end-customer driven. For CSPs, taking the best part of a year to deliver new services simply doesn’t cut it anymore. Automation is fundamental to removing complexity and accelerating the delivery of innovative new services.
The popularity of SD-WAN at the CSP is a great example of hybrid beating ‘pure virtualization’ to the post. Flexibility, stacks of programmability, quicker installation and service delivery times – it taps into what ultimately affects service providers’ bottom lines: time to market and productivity. But the real beauty of SD-WAN is lost if it’s not truly open.
Openness is essential to digital transformation. Service providers are no longer the decision makers – at the whim of enterprise customers, they need to react to what services are needed, demanded and consumed. Championing openness enables providers to achieve the level of programmability and flexibility needed to deliver a whole host of services – virtualized or not - on legacy equipment. What’s more, the investment in openness can offer a safeguard for future innovation.
NETCONF/YANG are good examples of open protocols that can enable programmability over more cost-effective pCPE equipment so CSPs can begin reaping the benefits of automation now, maintaining flexibility for full-blown virtualization down the line.
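What NETCONF programmability looks like on the wire is worth seeing. The sketch below builds the XML skeleton of a NETCONF `<edit-config>` RPC (per RFC 6241); in practice a client library such as ncclient would construct and send this over SSH to a device whose configuration schema is modelled in YANG. The `interface`/`mtu` payload here is a placeholder we invented for illustration, not a real YANG module:

```python
import xml.etree.ElementTree as ET

# The NETCONF base namespace defined by RFC 6241.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

# Build an <edit-config> RPC targeting the running datastore.
rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
target = ET.SubElement(edit, f"{{{NC}}}target")
ET.SubElement(target, f"{{{NC}}}running")
config = ET.SubElement(edit, f"{{{NC}}}config")

# Device-specific configuration, normally governed by a YANG model,
# goes inside <config>; these elements are hypothetical placeholders.
iface = ET.SubElement(config, "interface")
ET.SubElement(iface, "name").text = "eth0"
ET.SubElement(iface, "mtu").text = "1500"

payload = ET.tostring(rpc, encoding="unicode")
print(payload)
```

The point of the open-protocol argument is exactly this: the same structured, machine-generated request works against any NETCONF-capable box, whether it is a cost-effective pCPE today or a fully virtualized function later.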
The last decade has seen the network world governed by the capacity/quality pendulum. First we had the race for bandwidth, then the race for quality control and application steering, and with 5G on the horizon, the pendulum is swinging back toward capacity again. Customers don’t care about underlying infrastructure, they want new services delivered now.
In the future, we’ll undoubtedly see a more balanced, holistic merging of the edge, optical, the cloud and network access. A slowing of the pendulum, if you like. By that I mean more balanced services, seamless access to WiFi hotspots, mobile data and private clouds all at once and on-demand.
Digital Transformation has become a bit of a cliché - every company has been looking at how IT can give it a competitive edge and improve business longevity, says Paul Timms, MD at MCSA.
Nowadays, when it comes to the IT underpinning businesses, we are seeing a wide variety of strategies employed by different organisations. Hybrid IT has evolved from the need for a mixture of different technologies to make best use of digital for individual businesses; one size certainly does not fit all.
The evolution has seen vendors being much clearer about the benefits and case studies around their particular technology. We have seen HPE pitch its hardware much more at Managed Service Providers (MSPs) as the “easy to manage and secure” option, while Microsoft has been pitching how integrated the Azure and Office 365 solution is, and how easy it is for MSPs to set up, in an attempt to compete with AWS.
Then we have private clouds - IT on British soil that you can point at but users consume on an op-ex model. We have seen this model really work for businesses who are not subscribing to the full outsource model just yet. Many companies still like to be in control of major cost items, which IT increasingly is for many businesses. Switching to the conveyor belt of full cloud would stop them from being able to sweat their IT estate in times of a downturn or just because they want to save some cash.
Businesses should turn to an IT service provider who can demonstrate with real-world examples what the pros and cons of using an outsourced IT service would be, so that they can focus on their core business.
There is so much choice for CIO/CTOs to consider now, and whilst Hybrid IT has certainly broadened everyone’s horizon, providing more and more IT system options to suit business needs can also be a hindrance to businesses’ digital transformation. More choice leads to more confusion. This could be the catalyst to a simpler ‘digital transformation for all’ solution, a further evolution towards the possibility of a one-size-fits-all system.
In this shifting IT landscape, IT service providers must ensure they still deliver value to customers by giving advice and opening minds to opportunities that are out there to help them navigate their own digital transformation journey.
Is hybrid infrastructure necessary for digital transformation?
The answer will vary between organisations, according to Chris Adams, President & Chief Executive Officer, Park Place Technologies.
Some SMBs may find a cloud-only approach preferable. Buying everything “as a Service” (aaS) can limit complexity and investment in IT expertise. It’s especially viable for organisations willing to adapt their processes to available off-the-shelf cloud solutions.
Most larger enterprises and many SMBs, however, will find mixing and matching aaS solutions more difficult and costly to sustain. Organisations that have extensive investments in legacy CRM and ERP systems often find them difficult and risky to transition to the cloud, yet they remain essential as a “single source of truth”. This is a good argument for hybrid infrastructure to retain the value of these systems, while also tapping cloud capabilities.
There are other use cases as well. Long-term public cloud storage becomes expensive, and transmitting large data sets can bog down networks. Depending on the application, it can therefore be beneficial to house certain data in-house. There is also a strong argument for maintaining full control of systems supporting a company’s differentiating technologies.
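The long-term storage point lends itself to a simple break-even calculation. The sketch below compares recurring public-cloud storage fees against a one-off in-house array; every figure (capacity, per-GB-month price, array cost, running cost) is an assumption chosen only to show how the comparison works:

```python
# Rough break-even sketch: monthly public-cloud storage fees
# vs. a one-off in-house storage array. All prices are hypothetical.

def break_even_months(capacity_gb: float, cloud_gb_month: float,
                      array_capex: float, array_opex_month: float) -> float:
    """Months after which the in-house array becomes the cheaper option."""
    monthly_saving = capacity_gb * cloud_gb_month - array_opex_month
    return array_capex / monthly_saving

# A 200 TB archive at $0.02/GB-month in the cloud, versus a $60,000
# array costing $1,000/month to run (assumed figures):
print(break_even_months(200_000, 0.02, 60_000, 1_000))  # -> 20.0 months
```

For data that must be retained for years, a payback measured in months is a strong argument for keeping it in-house; data with shorter retention, or unpredictable volume, tips the other way.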
As edge computing gains ground, we may see similar bifurcation. Many market watchers expect telecoms to leverage cell tower properties to offer edge solutions. Although the business model is far from settled, one can imagine purchasing edge capabilities in a manner akin to mobile data plans. This would allow various organisations to leverage edge without building out their own infrastructure.
The Industrial Internet of Things (IIoT), on the other hand, provides various examples where custom-built edge capabilities may be the right answer for bringing compute power and split-second decision-making to the myriad devices monitoring—and increasingly, automating—manufacturing lines, oil pipelines, and more.
When it comes to IT, it’s nearly impossible to prescribe a “one size fits all” solution. Infrastructure and operations professionals will therefore need to continue to do the hard work of evaluating the organisation’s needs and adapting an architecture to suit.
In recent years we’ve seen the edge move from a growing trend or buzzword, to become a well-established driver of innovation. This is true across on- and off-premise distributed IT applications, micro data centres and new use cases of Artificial Intelligence (AI).
By Marc Garner, Vice President, IT Division, Schneider Electric UK & Ireland.
For many of today’s businesses, the adoption of edge is an evolving and complex reality, one that brings with it growing challenges of uptime and management. But due to the impacts of latency, application availability and prolific data generation, companies all the way from the consumer to the enterprise have had no choice other than to embrace the technology and begin to deploy more critical infrastructure in the remotest of locations.
From a management perspective the distributed environment presents additional complexities, including skills shortages and manpower. One cannot, for example, have a maintenance specialist on site at every edge location; therefore a collaborative approach to design, pre-integration and deployment, in addition to the use of Cloud-based management software, has become crucial to ensuring minimal downtime.
Another challenge is that of the customer, where a shift in expectations has seen the industry, and many Vendors alike, move from working in a siloed and competitive way to becoming dependent on one another’s skillsets. Something that, in many respects, one might consider an extremely positive breakthrough.
For many, the collaboration of data centre infrastructure Vendors who supply rack architectures, power, cooling and physical security, with superfast blade servers from IT providers, backup from Storage companies and hyper-converged software developers, has made the edge a reality.
Predictability and reliability of course continue to play a crucial role within today’s IT environments. So with the increase in collaboration, edge solutions have become more standardized, more integrated and pre-tested, thereby exceeding customer expectations and meeting faster deployment times, whilst promising that the infrastructure will work as planned once delivered.
Another of the challenges associated with the edge is its ecosystem. How does one define ‘an edge’, and what indeed constitutes ‘an edge solution’?
According to the Infrastructure Masons, “An Edge location is a computing enclosure/ space/ facility geographically dispersed to be physically closer to the point of origin of data or a user base. For an Edge to exist there must be a hub or a core; therefore, dispersion of computing to the periphery would qualify as ‘Edge computing’ and the physical enclosure/ space/ facility can be defined as the ‘Edge facility’.”
Research firm IDC, however, states that edge computing is a “mesh network of micro data centres that process or store critical data locally, pushing all received data to a central data centre or cloud storage repository, in a footprint of less than 100 square feet.”
Confused about the definition? You could be among the many, but what is perhaps becoming clearer is that the edge can no longer be seen as a single micro data centre, an on-premise critical infrastructure solution or even a small colocation data centre.
This intricate edge ecosystem is made up of three things: firstly, localized or on-premise micro data centres; secondly, mid-sized regional facilities; and thirdly, hyperscale campuses, those used by the Internet Giants to provide large Cloud cover and data backup.
Indeed wherever you sit within the industry, the role of the edge will be different. But perhaps, the edge has simply become the next evolution of physical infrastructure.
What we know is that the amount of data being generated shows no signs of slowing, especially with the discussion around 5G and its reliance on edge computing as an enabler. Gartner, for example, predicts that by 2025, 75% of enterprise-generated data will be created and processed outside of a traditional data centre or Cloud.
So with data a rapidly growing and high value commodity, and connectivity the river of gold on which it flows, how does one harness the edge to embrace evolution within the enterprise environment?
Schneider Electric believes that in order to find success there are three key enablers for the edge: remote management with simplified and secure monitoring, 24/7 visibility and predictive analytics; greater physical security to prevent unauthorised access to IT equipment; and rapid deployment to ensure a standardised, repeatable and quick-to-deploy approach.
Software to manage this complex environment has evolved from the use of traditional data centre infrastructure management (DCIM) solutions to Cloud-based Data Centre Management as a Service (DMaaS) offerings, which provide simplified remote monitoring, with real-time visibility of any IoT-enabled infrastructure solution, 24/7, anywhere, on any device.
Combined with the power of Big Data analytics and Artificial Intelligence (AI), the software delivers predictive insights into IT issues, enabling timely and proactive servicing to be performed. Given the dispersed nature of today’s edge and hybrid IT environments, coupled with a move towards more digitally driven servicing, DMaaS becomes critical when ensuring reliability, uptime and availability at the edge.
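The idea of “predictive insights enabling timely servicing” can be illustrated with a deliberately simple toy: fit a linear trend to a series of telemetry readings (here, invented daily UPS battery-temperature figures) and estimate when they will cross an alarm threshold. Real DMaaS offerings use far richer models; this only shows the principle:

```python
# Toy predictive-maintenance check: least-squares trend on daily
# readings, projected forward to an alarm threshold. Data is invented.

def days_until_threshold(readings, threshold):
    """Return estimated days until the threshold is crossed,
    or None if the readings are not trending upward."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # stable or cooling: no alarm projected
    return (threshold - readings[-1]) / slope

temps = [30.0, 30.5, 31.0, 31.5, 32.0]   # hypothetical daily readings (°C)
print(days_until_threshold(temps, 40.0))  # rising 0.5 °C/day -> 16.0 days
```

The value at the edge is that this kind of check runs continuously and remotely across hundreds of unstaffed sites, turning a future failure into a scheduled service visit.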
As IT requirements continue to evolve, it’s hard to predict how the data centre of tomorrow will look. One thing is for sure, that data will continue to be a high value commodity, therefore both physical and cyber security will play a crucial role.
In order to adapt to this data driven and digitised environment, it will also be crucial to embrace the edge ecosystem as a whole. Collaboration will of course be key, but so will new and emerging technologies such as Cloud-based software, 5G, AI, Machine Learning (ML) and Liquid Cooling.
Ultimately, failure to embrace the edge will mean that businesses will fail to plan for the future, and as a key enabler of digital transformation, harnessing the power of the edge is the only way to remain competitive and ensure longevity.