Dealing with change is one of the hardest things we face in life – whether at work or at home. For most of us, the temptation to carry on doing what we’ve always done usually wins over the potential benefits of making adjustments. The hassle and the expense of replacing what we know and love with something that just might be better seems a step too far. New ideas and technology need to offer major cost and/or lifestyle benefits before we ‘take the plunge’, and we each have a different idea of the risks and rewards involved. Furthermore, there are some aspects of our home and work life where we may well be prepared to investigate the new, but there are others where we’ll take a good deal of persuading.
As a rule, the biggest obstacle to change is our familiarity and comfort with the status quo. We may not be quite so blinkered as to say ‘I’ve always done it this way…’, but we may well be thinking that ‘if it ain’t broke, don’t fix it’. Colleagues, friends and family might cajole us into changing old habits, but the conflict to be resolved is that we know our own lifestyle best, while we might not know the new solutions well. In other words, bringing in a consultant (or life coach!) to make us change may not be very helpful: they may know all about the latest ideas and gadgets, but they do not understand how our lives work or how the office functions.
However, there are times when, deep down, we know that we need to change. The businesses that kept faith with canals as their preferred method of transport lost out to those who embraced the combustion engine, and companies that might still rely on paper and pen for correspondence are unlikely to survive in the age of digital communications.
And I think we’ve reached one of those significant milestones right now. Love it or hate it, the digital age is here to stay (at least until we run out of power or cables or places to store all our data!), so it’s time to recognise that, at the very least, the old ways of doing business need to be enhanced by the new and, in most cases, be replaced by the new.
In order to achieve this successfully, we need to take time to understand what’s out there and how it can be applied to our businesses – there’s very little point in paying a third party vast sums of money to put together a digital transformation project unless you have pockets deep enough for this third party to spend long enough inside your business to understand exactly how it works.
So there are no shortcuts to digital success, just the ‘fortitude’ and resilience required to embark on the digital transformation journey for yourself, embracing the opportunity rather than distancing yourself from it. When obstacles are encountered (and they will be), don’t seek to sidestep the blame; acknowledge them, work out a solution and then continue the journey. Best of all, if you’ve planned properly, you might actually manage to avoid too many howlers along the way.
As many organizations want to support mobile, team-oriented and nonroutine ways of work, an increasing number of them are looking for assistance in adopting digital workplace technology. A Gartner, Inc. survey concluded that only 7 percent to 18 percent of organizations possess the digital dexterity to adopt new ways of work (NWOW) solutions, such as virtual collaboration and mobile work.
An organization with high digital dexterity has employees who have the cognitive ability and social practice to leverage and manipulate media, information and technology in unique and highly innovative ways.
By country, organizations exhibiting the highest digital dexterity were those in the U.S. (18.2 percent of respondents), followed by those in Germany (17.6 percent) and then the U.K. (17.1 percent). "Solutions targeting new ways of work are tapping into a high-growth area, but finding the right organizations ready to exploit these technologies is challenging," said Craig Roth, research vice president at Gartner.
In parallel, the survey found that workers in the U.S., Germany and U.K. have, on average, higher digital dexterity than those in France, Singapore and Japan (see Figure 1).
Workers in the top three countries were much more open to working from anywhere, in a nonoffice fashion. They had a desire to use consumer (or consumerlike) software and websites at work. Some of the difference in workers' digital dexterity is driven by cultural factors, as shown by large differences between countries. For example, population density impacts the ability to work outside the office, and countries with more adherence to organizational hierarchy had decreased affinity for social media tools that drive social engagement.
Figure 1. Openness to Digital Dexterity by Country - Source: Gartner (June 2018)
Older Workers Are Second-Most Likely Adopters of NWOW
As expected, the youngest workers are the most inclined of all age groups to adopt digital-workplace-driven products and services (see Figure 2). They have a positive view of tech in the workplace and a strong affinity for working in nonoffice environments. Nevertheless, they reported the lowest levels of agreement with the statement that work is best accomplished in teams.
Figure 2. Digital Dexterity Likelihood by Employee Age - Source: Gartner (June 2018)
The survey also showed that the oldest workers are the second most likely adopters of NWOW. Those aged 55 to 74 have the highest opinion of teamwork, have progressed to a position where there is little routine work, and have the most favorable view of all age groups of internal social networking technology.
Workers aged 35 to 44 were at the low point of the adoption dip, potentially feeling fatigued with the routines of life as middle age approaches. They were most likely to report that their jobs are routine, have the dimmest view of how technology can help their work, and are the least interested in mobile work.
Larger organizations on average had higher digital dexterity than smaller ones. "Embracing dynamic work styles, devices, work locations and team structures can transform a business and its relationship to its staff. But digital dexterity doesn't come cheap," said Mr. Roth. "It takes investment in workplace design, mobile devices and software, and larger organizations find it easier to make this investment."
As many IT workers develop greater technology skills and apply them to advance their careers, many digital workers in non-IT departments believe their CIO is out of touch with their technology needs. A Gartner, Inc. survey found that less than 50 percent of workers (both IT and non-IT) believe their CIOs are aware of digital technology problems that affect them.
The survey further revealed that European workers are more likely to believe their CIO is aware of the technical challenges that affect them (58 percent) than U.S. workers are (41 percent).
"Non-IT workers aren't likely to use the IT help desk as their first source of assistance, and are less likely to believe in the value of their IT organization," said Whit Andrews, vice president and distinguished analyst at Gartner. "Only one in five non-IT workers would ask their IT department to supply best practices for employing technology."
The survey also revealed that millennials were less likely to approach IT support teams through conventional means. About 53 percent of surveyed millennials outside the IT department said that one of their first three ways to solve a problem with digital technology would be to look for an answer on the internet.
Non-IT workers were, overall, more likely than IT workers to express dissatisfaction with the technologies supplied for their work: only 41 percent of non-IT workers felt very or completely satisfied with their work devices, compared with 59 percent of surveyed IT workers.
"Many IT departments will be more successful if they are able to provide what workers say they need, and provide inspiration so they can increase the workforce's digital dexterity," Mr. Andrews added.
IT Workers Feel More Confident
IT workers feel more confident than non-IT workers at using digital technology. The survey found that 32 percent of IT workers characterized themselves as experts in the digital technologies they use in the workplace. Just 7 percent of non-IT workers felt the same. "While we expect IT people to feel more confident with digital technologies, these findings highlight how hard it is to help non-IT workers feel as digitally dexterous as IT workers do," said Mr. Andrews.
Sixty-seven percent of non-IT workers feel that their organization does not take advantage of their digital skills. "Organizations seeking to mature and expand their digital workplaces will find that expanding digital dexterity will accelerate this across the organization," added Mr. Andrews.
Digital Technology Satisfies 72 Percent of Digital Workers
About three in four digital workers either somewhat agree (48 percent) or strongly agree (24 percent) that the digital technology their organization provides enables them to accomplish their work.
The most common types of workplace application used by survey respondents were real-time messaging (58 percent), sharing tools (55 percent), and workplace social media (52 percent — see Figure 1).
Figure 1. The Shape of Workers’ Days - Source: Gartner (June 2018)
However, significant distinctions exist in the workplace. "Millennial digital workers are more inclined than older age groups are to use workplace applications and devices that are not provided by their organization, whether they are tolerated or not," said Mr. Andrews. "They also have stronger opinions about the collaboration tools they select for themselves. They are more likely to indicate they should be allowed to use whatever social media they prefer for work purposes."
In addition, relative to the total workforce, a larger proportion of millennials consider the applications they use in their personal lives to be more useful than those they are given at work. "Our survey found that 26 per cent of workers between the ages of 18 and 24 use unapproved applications to collaborate with other workers, compared with just 10 per cent of those aged between 55 and 74," Mr. Andrews said.
A significant shift toward digital business models that harness technology trends such as cloud computing, Internet of Things (IoT), analytics and artificial intelligence (AI) is boosting worldwide spending on application infrastructure and middleware (AIM). Gartner, Inc. numbers show that AIM market revenue reached $28.5 billion in 2017, an increase of 12.1 percent from 2016 (see Table 1).
The wider technology trends driving the AIM market are commonly accepted: migration to cloud platforms and services, ever-increasing demand for near-real-time data and analytics, a shift toward an API economy, rapid proliferation of IoT endpoints, and deployment of AI.
"A new approach to application infrastructure is the foundation organizations build their digital initiatives upon, and therefore robust demand in the AIM market is testament to the occurrence of digitalization," said Fabrizio Biscotti, research vice president at Gartner. "The more companies move toward digital business models, the greater the need for modern application infrastructure to connect data, software, users and hardware in ways that deliver new digital services or products."
Table 1. AIM Software Market Share by Revenue, Worldwide, 2016 and 2017 (Millions of Dollars) - Source: Gartner (June 2018)
Gartner forecasts that the AIM market will grow even faster in 2018, after which spending growth will slow each year, reaching around 5 percent in 2022. Moreover, momentum in the AIM market is shifting from market incumbents to challengers.
Licensed, on-premises application integration suite offerings that make up larger segments served by market incumbents such as IBM and Oracle achieved single-digit growth in 2016 and 2017. Gartner expects this growth to continue through 2022. "We can generally describe the products in this slow-growing segment as serving legacy applications," said Mr. Biscotti.
Small challenger segments — built predominantly around cloud and open-source-based application integration (iPaaS) offerings — will continue to enjoy double-digit growth.
"In iPaaS we find the groundwork being laid for a digital future, as the products in this segment generally are lighter, more agile IT infrastructure suited for the rapidly evolving use cases around digital business," said Bindi Bhullar, research director at Gartner. "The result is that well-funded, pure-play iPaaS providers, open-source integration tool providers and low-cost integration tools are challenging the dominant position of traditional vendors."
The iPaaS segment is still a small part of the overall market, topping $1 billion in revenue for the first time in 2017 after growing over 60 percent in 2016 and 72 percent in 2017. This makes iPaaS one of the fastest-growing software segments.
"The iPaaS market is also starting to consolidate, most notably with Salesforce's recent acquisition of MuleSoft," said Mr. Bhullar. "There is still a lot of room for further consolidation, with more than half the AIM market held by vendors outside the top five. This "others" segment is enjoying double-digit growth, which is likely to encourage acquisitions from big players losing market share to challengers."
Mr. Biscotti added that the most successful challengers in the AIM market will be those that position their products as complementary to — rather than replacements for — the existing legacy software infrastructure that is common in most large organizations.
"While new agile challengers may seem better fits for those pursuing digital initiatives, the underlying reality is that legacy middleware and software integration platforms will persist," said Mr. Biscotti. "Pure-play cloud integration is a niche requirement today — most buyers have more extensive requirements as they pursue hybrid integration models. The long-term market composition is likely to consist of a broad spectrum, from generalist comprehensive integration suites to more-specialized fit-to-purpose offerings."
Worldwide shipments of augmented reality (AR) and virtual reality (VR) headsets were down 30.5% year over year, totaling 1.2 million units in the first quarter of 2018 (1Q18), according to the International Data Corporation (IDC) Worldwide Quarterly Augmented and Virtual Reality Headset Tracker. Much of the decline occurred due to the unbundling of screenless VR headsets during the quarter. For much of 2017, vendors bundled these headsets free with the purchase of a high-end smartphone, but that practice largely came to an end by the start of 2018. Despite a poor start to 2018, IDC anticipates the overall market will return to growth over the remainder of the year as more vendors target the commercial AR and VR markets and low-cost standalone VR headsets such as the Oculus Go make their way into stores. IDC forecasts the overall AR and VR headset market to grow to 8.9 million units in 2018, up 6% from the prior year. That growth will continue throughout the forecast period, reaching 65.9 million units by 2022.
"On the VR front, devices such as the Oculus Go seem promising not because Facebook has solved all the issues surrounding VR, but rather because they are helping to set customer expectations for VR headsets in the future," said Jitesh Ubrani senior research analyst for IDC Mobile Device Trackers. “Looking ahead, consumers can expect easier-to-use devices at lower price points. Combine that with a growing lineup of content from game makers, Hollywood studios, and even vocational training institutions, and we see a brighter future for the adoption of virtual reality."
When it comes to augmented reality headsets, many consumers have already had a taste of the technology through screenless viewers such as the Star Wars: Jedi Challenges product from Lenovo. IDC anticipates these types of headsets will lead the market in shipment volumes in the near term. However, non-smartphone-based AR headsets should begin to see greater market availability by 2019 as commercial uptake continues to rise and existing brands launch next-generation products. IDC predicts triple-digit growth in this space between 2019 and 2021.
"Momentum around augmented reality continues to grow as more companies enter the space and begin the work necessary to create the software and services that will drive AR hardware," said Tom Mainelli, program vice president, Devices and Augmented and Virtual Reality at IDC. "Industry watchers are eager to see new headsets ship from the likes of Magic Leap, Microsoft, and others. But for those devices to fulfill their promise we need developers creating the next-generation of applications that will drive new experiences on both the consumer and commercial sides of the market."
Category Highlights
Many consumers' first experience with an augmented reality headset will be in the form of a screenless viewer. While large movie properties such as Star Wars helped move significant volumes of these headsets during the past holiday season, uptake for the remainder of the year is likely to slow as the headsets have limited functionality beyond their core applications. In the latter years of the forecast, IDC expects such products to decline in relevance, although they are likely to remain in the market, often sold as toys. Meanwhile, standalone AR head-mounted displays (HMDs) should grow to reach 194,000 units in 2018 and will experience a compound annual growth rate (CAGR) of 190.9% over the five-year forecast. More advanced headsets such as Microsoft's HoloLens and Magic Leap's One will help drive adoption in the commercial and consumer markets. Finally, tethered headsets will grow with a five-year CAGR of 241.8%. This last category will be the eventual home of lower-cost headsets based on Apple's ARKit and Google's ARCore that tether to smartphones or tablets.
IDC forecasts virtual reality headsets to grow from 8.1 million units in 2018 to 39.2 million by the end of 2022, representing a five-year CAGR of 48.1%. While many think of VR as a consumer technology, IDC believes the commercial market to be equally important and predicts it will grow from 24% of VR headset shipments in 2018 to 44.6% by 2022. From a platform perspective, the market has been dominated by Oculus, largely due to the initial volumes around Samsung's Gear VR. This will likely continue in the near term as the Go brings VR to more consumers. However, the Oculus platform is likely to face pressure from both HTC's Vive platform and Microsoft's Windows Mixed Reality platform. The latter should see strong opportunities in the commercial market as brands such as HP, Dell, and Lenovo bring their years of experience catering to enterprise buyers to the market.
* Note: Market share figures for 2018 and 2022 are forecast projections.
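For readers less familiar with the CAGR figures quoted throughout these forecasts, the metric is simply the constant annual growth rate that takes a starting value to an ending value over a set number of years. The short sketch below illustrates the arithmetic using the VR unit forecast above; the 2017 base volume is back-calculated purely for illustration and is not an IDC-published figure.

```python
# Compound annual growth rate: the constant yearly growth linking a start
# value to an end value over n years. The 2017 base below is back-calculated
# from the quoted figures for illustration only, not an IDC number.

def cagr(start: float, end: float, years: int) -> float:
    """Return the compound annual growth rate over `years` years."""
    return (end / start) ** (1 / years) - 1

vr_2022_units = 39.2                 # million units, per the IDC forecast above
quoted_cagr = 0.481                  # 48.1% five-year CAGR, per IDC
implied_2017_base = vr_2022_units / (1 + quoted_cagr) ** 5

print(f"Implied 2017 base: {implied_2017_base:.1f} million units")  # ~5.5
print(f"Check: {cagr(implied_2017_base, vr_2022_units, 5):.1%}")    # 48.1%
```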
International Data Corporation (IDC) has published the latest Worldwide Semiannual Internet of Things Spending Guide (version 2H17). The Spending Guide forecasts that Internet of Things (IoT) spending will experience a compound annual growth rate (CAGR) of 13.6% over the 2017-2022 forecast period and reach $1.2 trillion in 2022. The forecast is based on the latest research in the burgeoning IoT technology market, which offers business investment opportunities across a spectrum of industries, illuminated through use case implementations.
As the diverse IoT market reaches broad-based critical mass, innovative offerings in analytics software, cloud technologies, and business and IT services have expanded rapidly. "The IoT market is at a turning point – projects are moving from proof of concept into commercial deployments," said Carrie MacGillivray, group vice president, Internet of Things and Mobility. "Organizations are looking to extend their investment as they scale their projects, driving spending for the hardware, software, services, and connectivity required to enable IoT solutions."
The intersection of multiple technology domains is one key to successfully understanding and developing a supply-side product and market development strategy. The IDC IoT Spending Guide is an industry-defining market intelligence tool that details end-user adoption and spending across multiple segmentations. "The latest IoT Spending Guide release fully aligns to IDC's Industry Taxonomy. We now forecast all 20 standard IDC Industries," said Marcus Torchia, research director, Customer Insights & Analysis. "As a result, we are proactively mapping IoT use cases that have segmentations in shared domains, such as in the Smart Cities and Digital Transformation investment areas. As part of these improvements, the Spending Guide now supports spending forecasts for 100 use cases."
Forecast highlights show that the consumer sector will lead IoT spending growth with a worldwide CAGR of 19%, followed closely by the insurance and healthcare provider industries. From a total spending perspective, discrete manufacturing and transportation will each exceed $150 billion in spending in 2022, making these the two largest industries for IoT spending. From an enterprise use case perspective, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) solutions will experience the fastest spending growth (29% CAGR) over the forecast period, followed by traffic management and connected vehicle security.
International Data Corporation's (IDC) EMEA Server Tracker shows that in the first quarter of 2018, the EMEA server market reported a year-on-year (YoY) increase in vendor revenues of 35.0% to $3.9 billion and a YoY increase of 2.0% in units shipped to 542,000. In euros, 1Q18 EMEA server revenues increased by 17.0% YoY to €3.1 billion. The stronger dollar-denominated growth reflects exchange rate fluctuations, with the euro worth more against the dollar than in 1Q17. The top 5 vendors in the region and their quarterly revenues are displayed in the table below.
Top 5 EMEA Vendor Revenues ($M)
Source: IDC Quarterly Server Tracker, 1Q18
At a product level, standard rack optimized, the largest revenue generator, grew 45.2% YoY, pushed by large deals in the U.K. and the Netherlands. Standard multinode shipments grew at a significant 251.4% YoY rate, driven largely by the U.K., Germany and the Netherlands. Another standout contributor to quarterly growth in EMEA was custom multinode, which grew 107.3% YoY in terms of revenues. Higher average selling prices in high-specification custom servers drove the improved performance in this product segment. Custom rack optimized was the only server segment to decline over the quarter, due to a continuing transition to custom multinode units, which provide superior performance specifications.
"The first quarter of 2018 saw the average selling prices (ASPs) of the top 5 x86 server vendors in western Europe increase by an average of 32% YoY. Central to this increase were fluctuations in exchange rates, but also increased attach rates, inclusion of Intel's new Skylake processors and the continued pressure felt by the demand for memory components," said Eckhardt Fischer, senior research analyst, IDC Western Europe.
In comparison to OEM vendors, original design manufacturers (ODMs) saw significantly lower growth rates in 1Q18. Their 1.4% YoY increase was primarily due to a drop in ODM hyperscale datacenter deals in Ireland, Finland, and Russia. ODM growth is expected to accelerate in late 2018 and 2019 with new cycles of datacenter launches for Apple, AWS, and Google.
Regional Highlights
Segmenting market performance at a Western European level, the U.K. and the Netherlands were standout performers in 1Q18. With 65.3% growth to $733.6 million, the former overtook Germany as the region's largest market, driven by strong growth for all the top five vendors. Discounting major hyperscale datacenter investments, the U.K.'s rapid growth may be attributed to a subsiding of Brexit fears that paused datacenter investment in the U.K. In the Netherlands, a 120.2% growth to $322.0 million was primarily the result of continued hyperscale datacenter investments made into the country. The Swiss server market was also a notable performer in the quarter, increasing 64.4% on the back of substantial IBM Large System growth. Finland was the only Western European country to experience a decline in revenue over the quarter, due to a lack of significant ODM Custom multinode and rack deals.
"Central and Eastern Europe, the Middle East, and Africa (CEMA) server revenue continued on its upward path in the first quarter of year 2018, increasing by 28.5% year-over-year to $688.36 million, despite the decline in terms of units," said Jiri Helebrand, research manager, IDC CEMA. "Ongoing product refresh cycle and maintained positive economic momentum were the main drivers for the strongest revenue growth recorded in the last ten years. Central and Eastern Europe (CEE) subregion grew by 24.3% year-over-year with revenue of $302.28 million led by strong demand for servers in the Czech Republic and Hungary, which observed demand from the public sector.
"The Middle East and Africa (MEA) subregion grew by 31.9% year-over-year to $386.09 million in 1Q18, and similar to CEE saw a decline in terms of units as both large businesses and the public sector consolidate their infrastructure and opt for richer configurations. Saudi Arabia and Turkey were the top performers, with the latter benefiting from an HPC deal in the public sector."
A global survey commissioned by Riverbed Technology has found that 95% of global business decision makers face challenges when it comes to achieving a more successful digital strategy, including budget constraints, a lack of visibility to manage the digital experience, and legacy infrastructure.
The global survey, which includes responses from 1,000 business decision makers at companies with $500 million or more in revenue across nine countries, also found that while digital services and applications are critical to future business success, 80% of respondents reported that critical digital services and applications are failing at least a few times a month.
“This survey underscores the tremendous opportunity that maximizing digital performance can have on the user experience and bottom line, while simultaneously highlighting the real challenges companies face today,” said Subbu Iyer, CMO, Riverbed Technology. “The findings reinforce that forward-thinking companies are well positioned to lead their industries in the race towards digital transformation by prioritizing investments in modernizing their networks and tools to measure and manage the digital experience for their customers and employees. Those who hesitate to embrace digital strategies and processes will quickly fall by the wayside, and those who drive digital performance will see significant business outcomes.”
Awareness is High, Need is Immediate
The need for companies to provide a successful digital experience for customers, partners and employees is well recognized, and it continues to grow in importance. Some 91% of global business decision makers agree that providing a successful digital experience is even more critical to the company’s bottom line than it was just three years ago.
Likewise, 99% of global business decision makers believe their company would benefit from improving the performance of its digital services and applications. They see this happening primarily through investments in modernising their networks and in tools to measure and manage the digital experience for customers and employees.
Hurdles to Implementing a Digital Strategy are Real
However, it is widely recognized that inadequately performing systems are a key limitation to a successful digital strategy today. In fact, of the 95% of global business decision makers who said they face significant challenges in achieving a more successful digital strategy, most cited multiple challenges, including budget constraints, a lack of visibility to manage the digital experience, and legacy infrastructure.
Accelerating technology cycles are impacting the workplace with unprecedented speed. By Matt Cain, vice president and distinguished analyst at Gartner.
Application leaders and business executives haven’t traditionally spent much time contemplating how work will change in years to come. That’s largely because the IT organisation has focused on operational excellence, and because, over the past three decades, the pace of change in the workforce has been relatively slow and predictable.
Circumstances have changed. The IT charter is expanding to include a larger focus on individuals, teams and overall business performance, and accelerating technology cycles are rapidly increasing the pace of change in work patterns.
Digital business models and platforms are fundamentally restructuring how business is conducted. Cloud services are increasing the speed of technological change at a rate unthinkable in the days of on-premises deployment. At the same time, the nature of work is being transformed with new business patterns, such as the gig economy and flatter organisation models, while artificial intelligence (AI) is set to transform how work is carried out.
Application leaders need to anticipate the future of work to understand what IT skills are needed to support change, and ensure that technology aligns with future work patterns.
Below are three overarching future work trends expected in developed nations between 2022 and 2026, along with some of their key impacts.
Worker digital dexterity will become critical
No one knows exactly how these changes will impact business, but one thing is certain — the digital dexterity of the workforce is the most effective mechanism for ensuring that it can keep pace with, and exploit, vast amounts of change. Digital change will manifest itself in a number of ways.
AI will prevail
Converting rich input patterns into data that can be readily processed by conventional software is at the heart of today’s AI hype. AI will have a profound impact on how work is assigned, completed and evaluated. And although AI will drive a number of workplace trends in the coming years, workers are already experiencing the impact of robobosses and smart workplaces.
The gig economy will thrive
Businesses will increasingly learn and borrow from freelance management and gig economy platforms, which dynamically match short-term work requirements directly with workers who possess the relevant knowledge, experience, skills, competencies and availability. This will mean moving away from traditional structures to more fluid arrangements.
In my experience there are a number of common blind spots associated with vendor risk management (VRM), or ‘third party risk management’ as it is sometimes called. In this article I will share what I see as the six top misconceptions surrounding VRM and suggest strategies for businesses to overcome or avoid some of these pitfalls. By Tom Turner, CEO and President, BitSight.
1. Only the highest value business relationships have the most inherent risk
Today we see many high-profile data breaches hitting the headlines. That’s because businesses are more connected than ever before, and organisations are having to deal with increasing numbers of third parties. Often, there will be a direct relationship where data is exchanged. However, we’re seeing more indirect relationships where a third party may not be deemed critical to the organisation's service or product, yet still has the potential to introduce risk. Take the leak of Netflix’s ‘Orange Is The New Black’ from Larson Studios in April last year. Larson is a post-production company that was probably thought of as a distant vendor in the supply chain, yet when it was hacked the breach had a massive impact on the core business.
Likewise, many businesses are using the same third party, which is often unavoidable. For some products and services, there's only one dominant player in the market to choose from if you need to outsource. This situation can result in massive downstream effects if there's a data breach, compromise, or service disruption. For example, the NotPetya malware hit many companies in Ukraine particularly hard, as well as global firms such as the shipping giant Maersk. This happened because a Ukraine-based accounting software platform was compromised, and the malware spread to its customer base.
Breaches and outages aren't just resulting from typical third parties anymore. They're also stemming from more distant vendors. While these organisations may not have access to your network, you may rely on their technology or services which could cause considerable risk downstream.
2. Your most trusted form of assurance is a diligence questionnaire
VRM programmes have traditionally focused on setting contractual obligations for vendors. Risk managers would periodically check on whether vendors were meeting certain obligations and move on to the next item on their “to do” list. For a long time, the only way to manage risk was to use questionnaires, audits, and penetration tests. This has changed, and businesses are now actively ‘hunting’ for risk. They are consuming multiple data feeds about operational, financial, and cyber security risk. In doing so, many organisations have taken a more collaborative approach with vendors, rather than a combative one. The notion that VRM is a game of strong-arming between risk and legal departments is changing. Organisations and their vendors are having more constructive dialogues.
3. VRM is not a Board level issue
According to Gartner, 80% of security and risk management leaders are being asked to present to senior executives on the state of their security and risk programme, and 75% of Fortune 500 companies are now expected to treat VRM as a board-level initiative to mitigate brand and reputation risk. Boards are beginning to request updates more than once a year, and this has led to the emergence of security committees.
The challenge for risk managers is how best to contextualise the company's level of risk. This is where objective, quantitative measurement can really help. For example, being able to say that the aggregate level of cyber risk posed by vendors has dropped 20 percentage points is a lot more insightful than saying, "We've mandated that all of our vendors implement multifactor authentication.” It’s important to learn how to speak the right language to the Board.
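As a rough illustration of that kind of objective, quantitative reporting, the sketch below rolls individual vendor security ratings up into a single criticality-weighted score that can be compared quarter on quarter. The 0-100 rating scale, the weights and the vendor names are invented for the example and do not represent BitSight's actual scoring methodology.

```python
# Illustrative only: aggregate per-vendor security ratings (0-100, higher is
# better) into one criticality-weighted score, so change can be reported to
# the board as "aggregate vendor risk improved by N points this quarter".

def aggregate_rating(vendors: dict) -> float:
    """Weighted average rating; `vendors` maps name -> (rating, weight)."""
    total_weight = sum(weight for _, weight in vendors.values())
    return sum(rating * weight for rating, weight in vendors.values()) / total_weight

# Made-up sample portfolio: vendor -> (security rating, criticality weight)
last_quarter = {"PayrollCo": (62, 3), "CloudCDN": (71, 2), "PrintShop": (55, 1)}
this_quarter = {"PayrollCo": (68, 3), "CloudCDN": (72, 2), "PrintShop": (60, 1)}

change = aggregate_rating(this_quarter) - aggregate_rating(last_quarter)
print(f"Aggregate vendor rating changed by {change:+.1f} points this quarter")
```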
4. Regulations and VRM programmes are two different issues
The impact of regulation very much depends on the industry sector, but if you are subject to any regulation at all, then it needs to be included in your VRM programme. Regulations that encompass all industries, such as the General Data Protection Regulation (GDPR), which comes into force on 25 May this year, will need to be part of the risk management programme of every single organisation. Article 32 states that organisations that collect personal data must have rigorous due diligence processes to ensure that appropriate controls are in place before sharing data with vendors.
5. VRM can be handled manually with existing resources
Relying solely on subjective, point-in-time questionnaires can leave a lot of risk unidentified or unaddressed. Many companies now understand that a continuous, objective view is needed. Also, you can’t simply throw people at this problem: there are too many vendors connected to the enterprise and not enough risk professionals in the world to manage them. Companies need to automate processes wherever possible to manage this risk. There’s going to be a huge breakthrough when businesses across all sectors recognise the importance of automation and reserve human intervention for when urgent action is required.
6. Engaging with vendors and the supply chain to correct risk is difficult and confrontational
Companies have different approaches for engaging with vendors and some have more influence than others. However, we are learning that presenting data and accessing a common platform provides significant benefits.
Giving non-customers free access to a security ratings platform via a trusted partner will allow third party vendors to investigate potential network issues and allow access to remedial resources. This is a good example of how engagement with vendors can be driven by objective data. It also offers vendors a benefit in return for their engagement and reduces some of the confrontation that can accompany risk assessment.
With economies of scale at play, there are potentially long-term benefits too. With many organisations using the same vendors to rectify issues, we can reach a wider audience and the whole digital economy is better off.
From artificial intelligence (AI) and machine learning to IoT and blockchain, the pace of technology change is both exciting and daunting, especially for those tasked with enticing new talent to the industry. Companies have claimed for years that computer science and software engineering degrees do not deliver ‘work ready’ employees; with the acceleration in technology innovation, what are the options for the developers of the future? In fact, as Alexis Shirtliff, Technical Director, DCSL Software, explains, there is no need for developers to focus on any one specific toolset or technology area; instead they need versatility, communication skills and an ability to see the big picture – all underpinned by a strong foundation in IT process and methodology.
AI or IoT or Blockchain
When a high street bank can be brought to its knees by a mis-managed IT development, the fundamental importance of technology to every business becomes painfully clear. Alongside extraordinary innovation, there is now far greater understanding of technology from businesses, as well as ever higher expectations regarding quality and app-influenced user experience.
How does this shift affect the requirements for the developers of the future? Should they be fine tuning AI expertise? Unlocking the mysteries of blockchain or understanding the opportunities of IoT? Or is this focus on bleeding edge technologies missing the point?
These three technology areas demonstrate perfectly the new mindset required of the developer of the future. IoT is becoming business as usual; soon it will be hard to think of a device that isn’t connected in some way. As a result of this technology maturity, devices can be connected to any data source, network or cloud infrastructure using pretty much any language. There is no one specific IoT toolset or skillset; developers simply need to understand the concepts and visualise the opportunities.
AI development is increasingly based on a ‘toolset as a service’ model; a developer can plug into a growing raft of amazing tools from IBM, Microsoft, Google et al. that enable AI-style functionality. Want to embed facial recognition to support a specific development? There is no need for old-fashioned coding; simply choose the right toolset and get started. The skill is, again, in picturing the possible and determining the best toolset for the job.
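To make the ‘toolset as a service’ point concrete, the sketch below shows roughly what embedding hosted face detection looks like from the developer’s side: a single HTTP call to a vision service rather than any recognition code of your own. The endpoint, credential and response fields are illustrative placeholders rather than any specific vendor’s API.

```python
# Illustrative sketch of calling a hosted face-detection service.
# The URL, key and JSON shape below are placeholders, not a real API.
import requests

API_URL = "https://vision.example.com/v1/faces:detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                 # hypothetical credential

def detect_faces(image_path: str) -> list:
    """Upload an image and return whatever faces the service reports."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    # Assume a response such as {"faces": [{"box": [x, y, w, h], "confidence": 0.98}]}
    return response.json().get("faces", [])

if __name__ == "__main__":
    for face in detect_faces("visitor.jpg"):
        print(face)
```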
Blockchain, on the other hand, is less likely to stand the test of time. The role of distributed ledgers may evolve, but there appears to be little value in any future developer heading down that rabbit hole right now. If and when the technology gains mainstream acceptance, tools as a service will without doubt appear that enable developers to leverage blockchain as required.
Depth and Breadth
This new accessibility provides developers with unprecedented opportunities to rapidly embrace innovative technologies. There is far less risk of being sidelined as a result of a dated skill set. Indeed, developers now have an extraordinary range and diversity of toolsets to support amazingly innovative solutions. Rather than specialising in any one technology or language, developers require a different approach. It is a new mindset, rather than new technology expertise per se, that is required: an ability to be versatile and to embrace skills across the entire technology stack, from back-end database integration to front-end user experience.
How does this affect the way developers of the future are attracted by technology at school, prepared at university and then enticed into the industry? While there are concerns that teachers lack the technology skills and confidence required to inspire the next generation, that shouldn’t really be the constraint. How many youngsters are intuitively gaining amazing technical skills through their daily use of Minecraft, for example? The key is to understand how best to build on this interest and confidence in a way that relates to the next stage of education without forcing children down a technology specific cul de sac.
The fact is that the developers of the future will need a raft of soft skills that perhaps were not in the traditional remit – including great communication, as well as an ability to visualise possible outcomes and solutions. The skills to rapidly understand customer expectations and create a compelling response, to collaborate with a development team that could be scattered across the world, and to recognise which tools to apply to a specific development are now incredibly valuable. And that does not always mean the latest or most exciting: while developers will always want a chance to use the newest and shiniest toolkit on the block, being able to recognise when not to reinvent the wheel is also an essential skill.
Underpinning all of this, therefore, must be the fundamental discipline of good, well structured IT development. The ability to follow proven methodologies such as agile is absolutely critical in an era of incredibly high customer expectation and a demand for continuous – even daily – feedback and project update. And it is this foundation in development best practice, the ability to follow a process, that will stand the developers of the future in great stead, irrespective of technology change or innovation.
Conclusion
Technology education has always created conflict – should an individual opt for a vendor-led course to attain a specific skill set, or for a broader, more generic education? What should companies expect from recent graduates, and how much additional investment is required to make individuals work ready? While the rise of AI and blockchain, machine learning and IoT may appear to suggest that young developers need to push the boundaries, the fact is that it has never been easier to access and embrace new and innovative technologies. What is required from the developer of the future is the right mindset: the ability to balance innovation with proven solutions, the skills to communicate with clients and colleagues, and the vision to create technology solutions that truly enable, not disable, a business.
It was made clear in the presentation from Gartner at April’s European Managed Services Summit that customers were finding it hard to differentiate between individual managed services providers and their services. Research director Mark Paine from Gartner told a packed event that while MSP services were growing at over 35%, this growth might not last, and the pace of competition was certainly not going to let up. Even so, at the latest estimate, the global managed services market is expected to grow from €150bn last year to €250bn by 2022.
“The key to a successful and differentiated business is to give customers what they want by helping the customer buy,” he told the Amsterdam audience.
In a business where 65% of the buying process is over before the buyer contacts the seller, because of all the information gathered beforehand, the MSP is even less in charge of what happens than in the old VAR model. Without differentiation, the customer is more likely to buy from their usual sources, or at the least, ignore an MSP who does not offer anything different, he says.
But the rewards are waiting for those MSPs who can prove what problems they solve and what makes them special, particularly when the MSP can show how the deal will work and how customers get value, he says. Research shows that product success and aggressive selling carry no weight with the customer, compared to laying out a vision for the customer’s own growth and success.
So a major part of the message in the Amsterdam event, and in the forthcoming London summit on 19 September, will be on marketing issues, with the aim of getting MSPs up to speed on the current best techniques in building sales pipelines, leveraging available marketing resources, and best practice in using social media.
Among the issues challenging MSP marketing teams is the fact that buying teams are large and extended, containing a variety of influencers, with decision-making spread throughout the organisation. Getting a consistent message through in this environment is a problem which MSPs themselves will be discussing at the London event, which will feature speakers who have successfully promoted their message at the highest levels.
One secret about which more will be revealed is that MSPs need to align their businesses with their customers, so that a win for either is a win for the other. Being able to demonstrate this is a good deal-closer. Being able to lay out a convincing vision to improve the customer’s business and to offer this as a unique and critical perspective will win business, says Gartner.
Now in its eighth year, the UK Managed Services & Hosting Summit event will bring leading hardware and software vendors, hosting providers, telecommunications companies, mobile operators and web services providers involved in managed services and hosting together with Managed Service Providers (MSPs) and resellers, integrators and service providers migrating to, or developing their own managed services portfolio and sales of hosted solutions.
It is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships aimed at supporting sales. Building on the success of previous managed services and hosting events, the summit will feature a high-level conference programme exploring the impact of new business models and the changing role of information technology within modern businesses.
The UK Managed Services and Hosting Summit 2018 on 19 September 2018 will offer a unique snapshot of this fast-moving industry. MSPs, resellers and integrators wishing to attend the convention and vendors, distributors or service providers interested in sponsorship opportunities can find further information at: www.mshsummit.com
More and more businesses are appreciating the benefits that the cloud delivers in terms of scalability, flexibility, efficiency and cost. Big enterprises, for example, are now migrating mission-critical SAP workloads to Microsoft Azure, and with the growth in Azure revenue exceeding 90% for every one of the last four quarters, the faith placed in it by Microsoft CEO and cloud advocate, Satya Nadella, is paying off. By Matt Hilbert, Technical Writer, Redgate Software.
Where does this leave the database? The good news is that change is afoot for the SQL Server database world as well. The recent public preview of Azure SQL Database Managed Instance marks a significant step-change in Microsoft’s managed database service. It’s important because it elevates the simplified database-scoped programming model used in the first two flavours of Azure SQL Database, Single and Elastic Pool, to the instance level.
Managed Instance shifts the balance to the cloud by providing near 100% feature compatibility with on-premises SQL Server, yet also offering the benefits of a cloud service like built-in high availability, automated patching, dynamic scalability, and backup management with point-in-time recovery.
Organisations can therefore migrate existing on-premises SQL Server workloads to the cloud and retain the features they’re accustomed to, while also gaining many of the manageability benefits of a PaaS offering.
The promise is that an on-premises SQL Server database can be migrated by simply backing it up and restoring it to an Azure SQL Database Managed Instance. It will look and behave just as it did before, with security guaranteed because it will be isolated from other customer instances and assigned a private IP address.
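For the technically curious, the backup-and-restore path described above boils down to two T-SQL statements run against the Managed Instance: create a credential that lets the instance read your backup from Azure Blob Storage, then restore from that URL. The sketch below, with placeholder server, storage and credential values, shows one way to drive this from Python; it is a minimal illustration of the native restore-from-URL approach, not a complete migration script.

```python
# Minimal sketch (placeholders throughout) of restoring an on-premises SQL
# Server backup held in Azure Blob Storage onto an Azure SQL Database
# Managed Instance, using T-SQL executed over pyodbc.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-instance.xxxx.database.windows.net;"   # placeholder instance
    "DATABASE=master;UID=your_admin;PWD=your_password"  # placeholder login
)
CONTAINER_URL = "https://yourstorage.blob.core.windows.net/backups"  # placeholder
BACKUP_URL = f"{CONTAINER_URL}/MyDatabase.bak"
SAS_TOKEN = "sv=...&sig=..."  # placeholder shared access signature (no leading '?')

# RESTORE cannot run inside a transaction, so autocommit is required.
conn = pyodbc.connect(CONN_STR, autocommit=True)
cursor = conn.cursor()

# 1. A credential named after the container lets the instance read the backup.
#    (String-building is fine for a sketch; handle secrets properly in real code.)
cursor.execute(
    f"CREATE CREDENTIAL [{CONTAINER_URL}] "
    f"WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '{SAS_TOKEN}'"
)

# 2. Native restore from URL: the instance pulls the .bak straight from Blob Storage.
cursor.execute(f"RESTORE DATABASE [MyDatabase] FROM URL = '{BACKUP_URL}'")
print("Restore finished; MyDatabase should now be available on the instance.")

conn.close()
```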
Those are the headlines, but behind the friendly phrases like ‘lift and shift’ and ‘frictionless migration’ which are now being used, there are three questions that need to be answered before making the decision to migrate to Managed Instance when it moves from public preview to general availability:
1. Is your database compatible with Managed Instance?
It’s the talk about near 100% compatibility that’s the issue here. The words in large print are about supporting compatibility back to SQL Server 2008, and allowing for direct migration of database versions starting with SQL Server 2005.
The small print is slightly different because, while the on-premises and Azure versions of SQL Server are built on the same engine, there remain some variations in the features and capabilities that are supported.
Fortunately, Microsoft’s Data Migration Assistant provides a simple way to identify compatibility issues. While it doesn’t yet support Managed Instance as a migration destination, it can still be used to perform a SQL Server migration assessment. Any breaking changes, differences in behaviour, and deprecated features will be shown, together with details of the affected objects, along with any migration blockers that have been identified.
This is the perfect starting point for a migration plan because, before any real time or effort is spent weighing up the long term pros and cons of moving to Azure, the real impact on the database itself can be assessed.
2. What’s the cost of migrating to Managed Instance?
As anyone who has embraced the cloud knows, there are two immediate areas where savings can be gained by migrating from on-premises.
Firstly, it moves the CapEx cost of hardware that will need to be maintained and upgraded regularly to the OpEx cost of a service model, which helps balance the books and will please Financial Directors.
Secondly, and of particular interest to fast-growing startups or companies with fluctuating server-side demand, the CPU or storage can be increased or decreased in seconds via the Azure portal using an online slider.
That’s the upside. The direct cost savings, however, are not as large as might be expected. Given the availability, scalability, and backup advantages of PaaS, Microsoft has priced Managed Instance accordingly. (The prices quoted in this article are for the East US region, as provided by Microsoft at the time of writing, June 2018.)
Two things to add. There’s a 40% saving under Microsoft’s Azure Hybrid Benefit for SQL Server program which lets companies use their existing SQL Server licenses with Software Assurance to reduce the standard rates. So the 8-core pricing would fall to ~$444.39 per month.
And Microsoft itself adds the following caveat: “Managed Instance is in public preview. Prices reflect preview rates, which are typically 50% of general availability rates”.
So look at the costs carefully before making a firm commitment, and also find out what the additional costs for I/O requests and backups are, once Managed Instance moves to general availability.
3. Do your third party tools support Managed Instance?
Many SQL Server developers and Database Administrators use third party tools to help in areas like writing and sharing code, implementing version control, provisioning database copies for development and testing, and monitoring. Some go further and use the tools to integrate and align database development with application development, reducing errors and speeding up deployments.
Microsoft actively encourages it too, because having a rich ecosystem of third party vendors to enhance, ease and extend the capability of SQL Server increases its appeal.
The need for those tools exists whether databases are on-premises or migrated to Managed Instance. So if you have favourite tools, the last item on the checklist is to ensure they support Azure.
This is particularly true for mixed estates, where you might want to migrate servers with fluctuating demand to the cloud so that additional capacity can be added only when required, while keeping other databases on-premises to maximise the investment in the current infrastructure. Here, you’ll want to work in the same way as before, wherever your data is.
So if your database truly is compatible, if the costings work for you, and if you can take your favoured tools with you, the question isn’t whether you should migrate your SQL Server database to the cloud through Managed Instance, it’s why you haven’t done it already.
The Ocado founders were way ahead of their time when, back in 2000, they started the UK’s only ‘pure play’ online grocery operator. Today Ocado is the world’s largest dedicated online grocery supermarket, exceeding £1 billion in annual sales. Developing the innovative software and systems that power the online retail business is Ocado Technology, a division of software developers, engineers, researchers, and scientists spread across five offices: Hatfield (UK), Wroclaw and Krakow (Poland), Sofia (Bulgaria), and Barcelona (Spain).
Ocado delivers happiness when its drivers trudge through the snow delivering Christmas orders; indeed, this going-the-extra-mile attitude is what keeps its 600,000 customers coming back for more. But when you consider the Ocado motto ‘shopping made easy’, there is nothing easy about delivering on that promise. In a mind-bogglingly complex process, a typical Ocado warehouse packs and ships more than one million items each day, supported by a technology platform built in house. Additionally, the Ocado family now also includes Fetch (a specialist online pet store) and Sizzle (homeware), both firmly based on the technology platform built by Ocado Technology. E-commerce sites for Morrisons and Fabled, a beauty destination site created in partnership with Marie Claire, are more recent success stories for Ocado Technology.
In parallel, Ocado Technology is also developing the Ocado Smart Platform (OSP), an end-to-end e-commerce solution designed to put other brick-and-mortar retailers around the world online. From its user-friendly mobile apps and webshop to highly efficient automated warehouse technology, OSP offers superior customer experience and a highly efficient end-to-end supply chain solution.
Ocado now offers OSP as a managed service capability to partners internationally, enabling them to build sustainable, scalable, and profitable online grocery businesses in their own markets.
Cloud move demands increased visibility
In 2014 Ocado found itself in a transitional period. Demands on its business were increasing and Ocado felt that a move to a cloud infrastructure would give the company more flexibility and scalability. Amazon Web Services (AWS) was chosen as the target environment.
Peter Thomas, head of E-Commerce OSP at Ocado Technology, shares his insight: ‘One of the challenges was that the dashboards we had developed to monitor the various sites and the platform behind it were not compatible with AWS and needed to be replaced. I had used New Relic technology before and wondered if that might be our answer.’
New Relic’s Digital Intelligence Platform was introduced to support the cloud migration, including integrating well over 100 microservice applications into the OSP platform. New Relic acted very much as a ‘security blanket,’ as Thomas termed it: ‘New Relic APM provided the extra visibility into our systems and platforms, which we needed. It gave us confidence that our migration was going as expected, and, if not, we were in a very good position to address any issues early. We learned lots about cloud architecture, and it was great having New Relic there to reassure us nothing was quietly going wrong behind the scenes.’
New Relic became more pervasive within Ocado and is now often used to pinpoint the root cause of any performance issues. A notable example was when one of the OSP backend systems showed a gradual performance degradation that was in danger of impacting customer orders. Customer satisfaction is vitally important for Ocado, and a lot of effort was spent on finding out what was wrong. Confusingly, different systems that were seemingly unconnected were affected. It wasn’t until an engineer used New Relic Insights to represent the problem that it became clear that although the database systems were all different, they were hosted on the same physical machine, which was the root cause of the problem.
‘New Relic made it easy for us to visualise and track this issue in real time’, says Thomas. ‘It helped resolve this particular problem a lot faster than we could otherwise have done.’
Better collaboration for a better customer experience
Ocado Technology works with a globally distributed team of 1,000 developers, all working on different parts of the estate simultaneously. New Relic is used to provide a central overview and a common communication tool. Thomas loves that the features to enable this effective team collaboration come straight out of the box with New Relic: ‘We’ve tried other products which needed a lot of configuration. As soon as you start customising the solution with so many teams, it becomes very difficult to manage and maintain. New Relic did an excellent job at giving us some very sensible defaults which we could easily build on.’
It wasn’t just the internal collaboration that made New Relic a success within Ocado; the New Relic team itself took an active interest in how Ocado used the toolset. A centre of excellence was introduced, consisting of New Relic and Ocado experts. As a team, they discuss the New Relic road map and what other features could prove useful. Having that deep insight into how OSP operates has enabled this team to really add value to Ocado.
New Relic visualisation supports development effort
When Ocado rebuilt the frontend of some of its non-food sites to become more customer-responsive, New Relic Browser proved invaluable to the experience. ‘The general trend we’re seeing is that code is moving from the backend to the frontend’, says Thomas. ‘Most other solutions don’t seem to take this into account, but New Relic Browser enables us to see what’s actually happening within our customers’ browsers. It gives us confidence that we are building solutions which work for our customers in the real world, and that if they don’t, we’ll know about it straightaway. You can’t get this kind of insight just from looking at your server logs.’
The combined data from New Relic APM and Browser has been used to highlight performance issues, and identify where problems originate so they can be addressed.
New Relic Insights, meanwhile, delivered solid ROI in one of Ocado’s warehouses. These highly automated customer fulfilment centres pack and ship more than one million items a day from a stock of over 50,000 different grocery items. One part of the warehouse had a broken sensor that was reporting an exception even when no fault was present. The systems are event-driven, so without counting the frequency of these exceptions Ocado had no way of identifying the issue. Through the power of New Relic Insights, alerts, and visualisation, the issue was quickly identified and passed over to engineering for fixing. The storage capacity lost to the error was worth £35,000 per year in labour costs. It showed Ocado the potential for making incremental gains through the New Relic Digital Intelligence Platform. Overall, improving automation and monitoring capabilities with New Relic has realised savings of £100,000 per year.
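As a rough illustration of the kind of exception-frequency check that catches a stuck sensor, here is a minimal Python sketch; the event records, field names and thresholds are hypothetical and are not Ocado’s or New Relic’s actual implementation.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical event records; in practice these would come from an
# event store or an analytics query, not a hard-coded list.
events = [
    {"sensor_id": "bay-17-temp", "type": "RangeException", "at": datetime(2018, 3, 1, 9, 0)},
    {"sensor_id": "bay-17-temp", "type": "RangeException", "at": datetime(2018, 3, 1, 9, 1)},
    {"sensor_id": "bay-02-temp", "type": "RangeException", "at": datetime(2018, 3, 1, 9, 5)},
]

def noisy_sensors(events, window=timedelta(hours=1), threshold=50):
    """Count exceptions per sensor in a time window and flag any sensor
    whose exception rate exceeds the threshold - a likely faulty sensor."""
    cutoff = max(e["at"] for e in events) - window
    counts = Counter(e["sensor_id"] for e in events if e["at"] >= cutoff)
    return [sensor for sensor, n in counts.items() if n >= threshold]

print(noisy_sensors(events, threshold=2))  # ['bay-17-temp']
```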
Thomas concludes: ‘The world is changing fast and the relentless growth of online shopping is powered by improved technology, faster broadband, and better mobile devices. With the New Relic Digital Intelligence Platform, we feel prepared to take on these challenges and deliver on our motto “shopping made easy” to provide the superior customer experience we are best known for.’
Progressive business leaders understand the need to establish operational strategies that integrate mobile device analytics, so that data-driven decisions can be made to improve performance, security and cost control.
Setting free your mobile data
Essentially, any solution that addresses mobile network analysis seeks to collect and organise real-time data on mobile deployments; this will typically include performance, security, connectivity and behaviour. What makes this useful for any business is that the data collected allows device policies and workflows to be tuned, increasing employee productivity, compliance and ROI.
Another way to address an organisation’s unique needs is to build customised alerts, queries and dashboards. In this way you could, for example, track data usage and performance issues in real time and be notified of changes in data usage patterns that could be symptoms of excessive data consumption or, perhaps, a security breach.
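To make the idea concrete, here is a minimal Python sketch of such an alert: it flags devices whose latest daily usage is far above their own baseline. The device names, figures and thresholds are illustrative assumptions, not output from any particular analytics product.

```python
import statistics

def usage_alerts(daily_usage_mb, multiplier=3.0, min_days=7):
    """Flag devices whose latest daily data usage is far above their own
    historical average - a possible sign of overuse or compromise.

    daily_usage_mb maps a device id to a list of daily usage figures in MB,
    oldest first. Thresholds here are illustrative, not recommendations.
    """
    alerts = []
    for device, history in daily_usage_mb.items():
        if len(history) < min_days:
            continue  # not enough history to establish a baseline
        baseline = statistics.mean(history[:-1])
        latest = history[-1]
        if baseline > 0 and latest > multiplier * baseline:
            alerts.append((device, latest, baseline))
    return alerts

fleet = {
    "sales-tablet-014": [220, 250, 190, 240, 230, 210, 205, 890],
    "field-phone-203":  [120, 110, 130, 140, 115, 125, 135, 128],
}
for device, latest, baseline in usage_alerts(fleet):
    print(f"{device}: {latest} MB today vs ~{baseline:.0f} MB baseline")
```

In practice the baseline and multiplier would be tuned per device class, and alerts routed into whatever dashboarding tool the organisation already uses.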
Organisations can begin to make data-driven decisions around mobile deployments using mobile device analytics, and there are a number of different ways to do this.
Why is a mobile data strategy important?
The competition for bandwidth between applications and mobile endpoints is intensifying rapidly.
Given the high cost per gigabyte of wireless data and the premium prices of mobile devices, organisations need to analyse real-time performance and proactively manage employee workflows and behaviour in order to maximise the return on their mobile investments.
As large organisations continue to commit significant portions of their annual IT budgets to mobile initiatives (hardware, applications, services or network infrastructure), the ability to analyse how these mobile assets perform in real time becomes fundamental. As the mobility and operational intelligence markets converge, mobility needs to be optimised; it is outgrowing the need to be simply managed.
Solutions that provide real-time mobile device analytics allow business leaders crucial visibility into their large technology investments. In turn, this insight can lead to a positive impact to the bottom line - an important consideration for any organisation.
From automobile manufacturers to oil and gas companies, businesses across the globe are placing big bets on “industrial IoT” to derive real business value from outcomes like predicting equipment failures, avoiding accidents, improving diagnostics, and more. Unlike consumer IoT where the volume of data generated by each device is typically low, industrial IoT sources create a significant amount of data. Unfortunately, many of these deployments are hamstrung by technical constraints and other limitations that prevent businesses from maximising their IoT investment returns.
Many of the use cases require the collection of sensor data from edge devices that is sent over a network connection to a centralised application for analysis before an action is carried out, often back at the edge. This classic input, process and output methodology is well understood, but any IoT environment can be a data management challenge because of the huge volumes of data created and the latencies inherent in global distribution. However, the case is not cut and dried and, in some instances, processing data close to the source is a better option.
Bigger IoT data
The challenges of aggregating data from consumer-oriented devices, like wearable technologies and smart thermostats, are well understood. For those types of devices, the volume of data is due to the large number of devices; each individual device doesn’t necessarily create much data. However, there is a new set of challenges for IoT devices that generate megabytes or gigabytes of data per second. For example, real-time analysis of audio, video and other rich sensor data can produce incoming streams that overwhelm traditional data storage architectures. Vehicles, medical devices and oil rigs are perfect examples of sources of data that need a much more powerful architecture than consumer-oriented devices require. And as these IoT data streams reach centralised clouds for processing, it will increasingly be artificial intelligence and machine learning that help to find insights and generate the subsequent actions.
Healthcare example
However, talking in the abstract when it comes to IoT is difficult as each use case will have different drivers and requirements. Instead, let’s look at a few concrete examples as a proxy for the types of challenges involved. The early detection and treatment of chronic diseases, such as heart disease, can save lives and reduce the cost of healthcare. Two of the biggest issues are coordination of care and preventing hospital admissions for people with chronic conditions. Several trials are using cheaper sensors that can monitor patients’ vital signs and send this data, along with electrocardiogram (ECG) readings, over cellular networks as a regular stream to applications in the cloud. These diagnostic and monitoring applications analyse each patient’s vitals and ECG readings while considering historical data from medical records. The flow of data into the system includes real-time streams, historical data, patient data and benchmark data created by aggregating huge volumes of previous scans from other patients.
In this example, like many others within the IoT landscape, clinicians require a workflow that collects data, aggregates it, and learns across a whole population of devices to understand events and situations. In this scenario, the detection of an anomaly such as over-medication or warning signs of an imminent cardiac event may require more intelligence at the edge so that the system can react to those events very quickly. The researchers have built a platform that uses common elements to process both stream and batch data within a common data fabric, which can help handle all the data in the same way, control access to the data, and apply intelligence in a high-performance and scalable fashion.
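As a purely illustrative sketch of the edge-side half of such a workflow, the toy monitor below flags readings that leave a safe range or diverge sharply from the recent trend. The thresholds, window size and class name are assumptions for illustration, not clinical values or the researchers’ actual platform.

```python
from collections import deque

class EdgeVitalsMonitor:
    """Toy edge-side check on a stream of heart-rate readings.

    Thresholds and window sizes are illustrative placeholders; a real
    clinical system would derive them from the patient's history and the
    population benchmarks held in the central platform.
    """

    def __init__(self, low=40, high=140, window=10):
        self.low, self.high = low, high
        self.recent = deque(maxlen=window)

    def ingest(self, bpm):
        self.recent.append(bpm)
        if bpm < self.low or bpm > self.high:
            return "alert: reading outside safe range"
        avg = sum(self.recent) / len(self.recent)
        if len(self.recent) == self.recent.maxlen and abs(bpm - avg) > 30:
            return "alert: abrupt change versus recent trend"
        return "ok"

monitor = EdgeVitalsMonitor()
for reading in [72, 75, 74, 73, 78, 76, 74, 75, 77, 76, 150]:
    status = monitor.ingest(reading)
    if status != "ok":
        print(reading, status)
```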
More at the edge
However, in some use cases, sending all the data back to the cloud for analysis is impractical or inefficient. Increasingly, the industrial IoT requires computing power available close to the data sources. Take, for example, oil production. With crude oil prices at historic lows, oil and gas companies are actively looking for ways to cut production costs and streamline their operations.
These companies are investing heavily in technology, including IoT, to improve their bottom line. For example, sensors attached to various parts of oil rigs are being used to better predict oil production and to position drills for maximum output. Yet seismic data and oil rig sensors generate large quantities of data, with some estimating close to 1.5 TB (from 19 million readings) per day. Oil rigs are often in remote locations with limited bandwidth, preventing global aggregation and analysis of the data. One example is the work carried out by Royal Dutch Shell, which realised a $1 million return on an $87,000 investment in digital technology to monitor oilfields in some of Nigeria’s toughest terrain.
Privacy concerns
The choice to process at the edge or bring into the core should also be tempered by privacy or compliance requirements—such as those driven by data residency regulations that might dictate that some data needs to stay at the IoT edge and not be copied to other locations for further processing. Enforcement of these policies is problematic, since there is often no good way of differentiating between data that needs to stay put and data that can be moved around. In other cases, data can and should be processed at a more central cluster with bigger compute resources, where deeper analytics can be performed on data coming in from many different edge devices. Consider this situation in the context of telemedicine, for example. While the output of medical devices at one edge can be used to achieve some basic diagnosis, more accurate diagnoses can be achieved by analysing output from many such medical devices—potentially spread throughout the world. So, it would be important to aggregate and analyse this output in a central, more powerful cluster and return the results immediately back to the medical centre.
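A minimal sketch of how such a residency rule might be expressed in code, assuming hypothetical record tags and destinations; real enforcement would sit in the data fabric itself rather than in application logic.

```python
# Records tagged as residency-restricted are processed at the edge;
# everything else is forwarded to the central cluster. The tags and
# destination labels are hypothetical.

RESIDENCY_RESTRICTED = {"DE", "FR"}  # example: data that must stay in-country

def route(record):
    """Decide where a record may be processed based on its residency tag."""
    if record.get("country") in RESIDENCY_RESTRICTED or record.get("pii", False):
        return "process-at-edge"
    return "forward-to-central-cluster"

samples = [
    {"device": "ecg-0041", "country": "DE", "pii": True},
    {"device": "rig-sensor-7", "country": "NG", "pii": False},
]
for s in samples:
    print(s["device"], "->", route(s))
```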
Convergence and fabrics
Healthcare researchers and petrochemical firms are just two groups that are examining new ways to build next-generation applications. At the heart of these projects are several common technologies, from cloud-scale data stores and powerful databases to integrated persistent streaming, which together create new possibilities for enterprise developers looking to architect, develop and deploy applications that were impossible until now. The combination of these elements is often called a converged data platform and is starting to be adopted across a wider range of IoT use cases. These platforms provide benefits including the creation of a high-IOPS, low-latency data fabric for high-performance computing applications. Another advantage is in real-time analytics scenarios, where a data fabric can simultaneously ingest, store, analyse, process and decide, without making copies of data. As IoT data moves from the edge to the cloud and back again, organisations will need to forget the monolithic architectures of the past and consider convergence as the starting point to deliver the scale needed for innovative new use cases.
Businesses are demanding more from their IT departments — more innovation, more change, greater agility and speed — and a programme of migration to the cloud is the enabler.
Yet upscaling to this extent requires a change in mindset, and not only for the IT function. Mass migration will have an impact on the whole organisation. So if your IT department doesn’t want to be left behind in the race to digital transformation, then make sure you take these five critical business groups with you on your journey.
1. Finance
Migration to the cloud will have a big impact on finance. This isn’t just in relation to spending on IT, but also the financial model used across the wider business and the CapEx to OpEx implications that come as a result of a ‘consume as you go’ approach.
CFOs will have the option to attribute any consumption in the cloud directly to a line of business, rather than putting ‘IT services’ against an overhead line. Now the infrastructure can be related directly to the business application, giving much more visibility into the costs of running that product or department.
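A simple sketch of this kind of attribution, assuming hypothetical billing line items carrying a business_unit tag (most cloud providers expose cost reports that can be tagged in a broadly similar way):

```python
from collections import defaultdict

# Hypothetical billing line items, e.g. exported from a cloud provider's
# cost report. The tag names and figures are assumptions for illustration.
line_items = [
    {"service": "compute", "cost": 1250.00, "tags": {"business_unit": "online-grocery"}},
    {"service": "storage", "cost": 310.50,  "tags": {"business_unit": "online-grocery"}},
    {"service": "compute", "cost": 890.25,  "tags": {"business_unit": "logistics"}},
    {"service": "compute", "cost": 45.00,   "tags": {}},  # untagged spend
]

def cost_by_business_unit(items):
    """Aggregate spend per business unit so consumption can be attributed
    to a line of business rather than a single 'IT services' overhead."""
    totals = defaultdict(float)
    for item in items:
        unit = item["tags"].get("business_unit", "unattributed")
        totals[unit] += item["cost"]
    return dict(totals)

print(cost_by_business_unit(line_items))
```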
What’s new for Chief Financial Officers? It’s a much less predictable purchase process that must take into account the value of the investment to the business as a whole.
2. Procurement
The cloud presents procurement with the challenge of completely re-thinking an established way of working. Procurement teams are used to dealing in supply chain buys at massive scale, with discounts front of mind, and infrastructure has been no different: typically everything is bought upfront for the next five (or more) years.
But the rules of engagement for procurement change for a cloud migration. The focus is on having the flexibility to buy only what is needed, when it’s needed, under a master agreement. The key is to help procurement understand that ambiguity is acceptable during the initial cost-setting phase: the risk is much smaller than with physical infrastructure because you’re optimising and fine-tuning continuously.
The procurement team is invaluable to you as there will likely be current licensing in place for legacy applications where renegotiation will be required. Work alongside your procurement team and allow them to play to their strengths of negotiation and generating value.
What’s new for Chief Procurement Officers? With the cloud, you’re not bulk buying at a discount; the real savings come from re-architecting how you work with public cloud providers.
3. Risk and compliance
Under a cloud model, business continuity and disaster recovery (DR) change massively. You move into a world where DR is configured as exact copies via scripts and sits idle, costing a minimal amount until your organisation needs that capacity.
It also changes the business dynamic on security and removes a big compliance issue. Hyperscale providers are highly certified and invest heavily in security in a way that few organisations could with their own infrastructure. Risk mitigation is built into the cloud offering, with continuous refreshing of patches and underlying hardware, high-level encryption, and auditing carried out at source. The job for the CIO is to focus on how the building blocks of a mass migration to the cloud are put together so that any configuration does not leave security holes.
What’s new for Compliance? The cloud means you need to move from a ‘test’ to a ‘trust’ mind-set, which can require a huge cultural shift.
4. Corporate governance
The shift from CapEx to OpEx that comes with the cloud is one that needs to be controlled. For CapEx outlays, controls are seldom required after purchase, but for cloud-based services costs can fluctuate and there’s a risk these will nudge up over time if they are not monitored or continuously optimised.
It’s a risk that needs to be managed at a corporate level, as it’s difficult to forecast accurately without the right governance. Outcomes need to be monitored against the original intent and optimised as you go.
Your public cloud consumption is governed by a master agreement, and legal should ensure this is adequate upfront. However, if additional internal governance rules for using the cloud are too strict and there is an approval route for every element of a project, this will inhibit the IT team’s ability to innovate quickly. IT needs to work with corporate governance teams to strike the right balance, delegating authority and monitoring outcomes to balance control with innovation.
What’s new for corporate governance officers? Ensuring compliance with cloud procurement framework requirements (such as G-Cloud for public sector organisations) and building flexibility so you can balance control with innovation.
5. HR
Under a cloud mass migration, the traditional roles within IT will change. Thanks to automation, some roles, such as testing, will reduce and there’s a requirement for far greater commercial awareness. Everything you ‘scale up’ suddenly has a cost attached to it and technicians can lose sight of this.
People will need to reskill to understand the services in the cloud and how to manipulate them. For many, this reskilling creates the opportunity to grow into a new role. Yet it is culturally that mass migration will force the most significant change, especially in highly bureaucratic organisations. Working with HR will be vital to manage such a cultural shift for established teams: communicating what this means for the department, promoting the opportunities it offers and ensuring a supportive and enjoyable working environment.
What’s new for the Chief People Officer? The provision of training will be key, not just in cloud architecture but also in management skills. While it is tempting to play down the changes required via internal communications, get advice from your cloud or integration partners on how upskilling has worked in other companies. If they’re worth their salt, they should be able to provide the training. As with anything important, success does not happen in a vacuum; efficient collaboration is vital, and moving your business en masse to the cloud is no different. My advice to CIOs is to be mindful of the priorities and pressures of your peers in other functions.
The cloud gives the CIO the opportunity to create a new operating model for their part of the business and beyond. It requires an agile way of working from those who may be new to the concept. Not everyone will feel empowered to embrace it so the sooner you can educate them on what’s required from their teams, the sooner you can all begin heading in the same direction, in the same way.
The results of the experiment betrayed a fundamentally broken system. The authors were successful in only 39 of their 100 attempts. In other words, 61 per cent of the published studies they examined could not be replicated. To the research community, these results didn’t come as much of a surprise.
Perhaps the most serious – and certainly the most infamous – case of falsified research was the MMR vaccine controversy: a study in The Lancet linking the combined measles, mumps, and rubella (MMR) vaccine to colitis and autism spectrum disorders. While the paper was eventually retracted with the revelation of multiple undeclared conflicts of interest as well as manipulated evidence, the damage had already been done; its quackery had been widely reported and had already reached the minds of the general public. A small but noteworthy subsection of society continues to cling to it as truth.
Admittedly, this is an extreme example, but it demonstrates a valuable lesson: the scientific ecosystem is extraordinarily slow to redress errors. Flawed literature can and does slip through the editorial and peer review process. By the time bad papers are disproved, they’ve already metastasised into new research, causing irreparable damage. The world’s ten most ‘popular’ retracted papers have been cited on over 7,500 occasions, and that’s a conservative estimate. Research, including bad research, spreads like wildfire.
The phenomenon is partly inherent to science itself. The scientific ecosystem is fragile – built painstakingly upon the results of older science. It’s a wholly a posteriori network. Citations proliferate. That level of interdependence is eminently useful, but also makes it vulnerable. People can’t cross check each and every hypothesis that leads to their own hypothesis because of the time-pressure to publish.
But science’s inter-reliant makeup is also an opportunity. It’s a system ripe for disruption - highly suitable for a technology that has only been around for about a decade: blockchain.
In simple terms, blockchain is a decentralised database which is visible to everyone. It has the potential to reshape various business models in the majority of industry verticals and is most commonly referenced in relation to financial services and the supply chain space. But it’s academic research where blockchain has some of its greatest potential, specifically as a solution to the problem of trust.
Scientific knowledge is arguably the ultimate decentralised system, particularly as we have transitioned from analogue into digital. It isn’t controlled by a central authority, and, by its very nature, it demands public scrutiny and constant challenge.
But things don’t work that way at the moment. Today, getting a study published rests firmly on the peer-review process. A handful of experts will quickly read a study, offer advice, and recommend whether it should be published. And as poor reproducibility statistics have shown, that’s a highly fallible process. Reviewers are under considerable time pressure. Reliability is difficult to gauge. The problem of bias is intractable.
With blockchain, every aspect of that process could be made transparent, opening up the possibility of genuine public scrutiny. Since blockchain is, at its core, an immutable ledger, it would offer users a rich, open paper trail of reviews, arguments, and hypotheses around certain scientific problems. With papers side-by-side on databases, and with the assistance of artificial intelligence, reliability becomes easier to establish. This could all instil much greater trust in the system.
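To show the tamper-evidence property in miniature, here is a toy, single-node sketch of a hash-chained review log in Python. A real blockchain would add distribution and consensus on top of this, and the event fields are invented purely for illustration.

```python
import hashlib
import json
import time

def _hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ReviewLedger:
    """Toy append-only, hash-chained log of peer-review events."""

    def __init__(self):
        self.chain = [{"index": 0, "event": "genesis", "prev": None, "ts": time.time()}]

    def append(self, event):
        block = {
            "index": len(self.chain),
            "event": event,                   # e.g. review text, verdict, DOI
            "prev": _hash(self.chain[-1]),    # links each block to its predecessor
            "ts": time.time(),
        }
        self.chain.append(block)
        return block

    def verify(self):
        """Return True if no historical block has been altered."""
        return all(
            self.chain[i]["prev"] == _hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = ReviewLedger()
ledger.append({"paper": "10.1234/example", "reviewer": "anon-7", "verdict": "major revisions"})
ledger.append({"paper": "10.1234/example", "reviewer": "anon-7", "verdict": "revised: accept"})
print(ledger.verify())                             # True
ledger.chain[1]["event"]["verdict"] = "accept"     # tamper with history...
print(ledger.verify())                             # False - the later block no longer matches
```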
Blockchain also provides a useful way to validate knowledge dynamically, subsequent to publication. Science is mercurial: new information constantly arises casting doubt on older items of research, but existing publishing practices lack the proper tools to accommodate these changes in literature - retractions, replications, new findings, and so on. Blockchain, with its innate traceability, would enable the peer review process to become more of a continuous, open process.
Finally, blockchain offers the potential of building entirely new economic models by issuing tokens, or digital currency, tied to the value of what the community is building: a currency tied to the most precious thing we humans possess, knowledge. With the incentives of tokenised economies frequently found in blockchain models, accurate peer reviewing, accurate research papers, and the many other activities vital to maintaining the community’s knowledge could all be rewarded. The influence of human bias would abate through the sheer volume of users and reviewers, and the cancellation of these biases in aggregation.
An open, scalable, decentralised platform, underpinned by blockchain, thus offers an optimal way to fix the distortions that beget inaccuracies and fuel distrust in scientific knowledge generation and dissemination worldwide. A community-run engine capable of checking the underlying factual base of a given input text provides a unique opportunity to unbias our entire knowledge base through a new prism built with the highest transparency and accountability standards.
By authenticating and certifying published research data using the blockchain, the scientific community could reduce errors and regain the public's confidence - promoting reliable research, and sorting fact from fiction. And this is, fundamentally, what science is all about.
We are now on the brink of another similar transformative movement in transport: a move away from vehicles driven by humans to vehicles that drive themselves. As with the invention and inevitable rise of automobiles, it’s sure to have a revolutionary impact.
Mass production of autonomous vehicles would effect unprecedented social, economic and environmental change: the flow of goods and people would become safer, more organised, and cleaner. The cost of transporting freight would fall dramatically. Independence and freedom of personal travel would become available to all. Sensor technology would make accidents infrequent. Traffic jams would become a thing of the past.
Undeniably, self-driving vehicles offer us a promising future, but there are lots of questions to consider before we reach that point; the road to autonomous vehicles may be a bumpy ride, and not least from a transactional and commercial perspective.
For one thing, if you are hiring a self-driving car, you’re not hiring the driver – you are actually hiring the car. So how do you pay for transport? This is not an easy question to answer.
How do you get a quote for a transaction with a car, or with a drone? How do you buy the services of a car that collects you and drops you off, all by itself? How do vehicles compete for your business? With humans partially removed from the equation, suddenly you’re not paying for labour; you’re paying to hire and make use of an asset.
Things become even more complex if we take autonomous vehicles to their logical conclusion. Imagine a world where transportation systems are almost entirely automated – where autonomous drones utilise the services of other more specialised drones, or where self-driving lorries and trucks communicate with drones to complete deliveries. How do autonomous vehicles function in an entirely self-sufficient ecosystem – that is, one in which autonomous vehicles almost exclusively make use of other autonomous vehicles?
For the most part, the answer lies in successful communication. As General Motors CEO Mary Barra explains, “The key with autonomous is the whole ecosystem. One of the keys to having truly fully autonomous is vehicles talking to each other.”
The problem is, as things currently stand, there is no infrastructure in place for such a system.
Amazon may have Prime Air drones, Domino’s may already have pizza delivery robots and Waymo may be trialling autonomous taxis, but currently, there isn’t any connective tissue linking them. All these companies are building their solutions using the same closed platform model. And while autonomous drone and robotics companies are emerging, their networks are equally proprietary, closed and non-inclusive.
That doesn’t look likely to change any time soon. The incentives simply don’t exist. Large corporations with stakes in autonomous vehicles will naturally focus on dominating their own markets, rather than investing in tech that enables communication with competitors. Needless to say, that’s a recipe for significant inefficiencies down the line.
Here’s how blockchain can be the key.
First and foremost, blockchain could solve the communication problem at its most basic – the fact that autonomous vehicles have no way of recognising other autonomous vehicles. Different entities from different manufacturers need to be able to seek each other out, and a common communication protocol powered by blockchain would allow exactly this – enabling vehicles to discover each other, as well as service providers, and clients around them.
By using smart contracts (essentially, digital contracts), blockchain could also facilitate significantly more complex forms of communication. Multi-party contracts could exist between buyers, sellers and, when needed, insurers and others. Intra-mission communication between vehicles could be recorded on the blockchain, allowing trustless collaboration between vehicles that would otherwise be incompatible.
Take this scenario, for example. A truck from Company A unloads with the assistance of drones from Company B. The two parties sign a contract, and funds are released from Company A to Company B as soon as the contract is satisfied. Company A’s truck then refuels at a gas station owned by Company C. Again, funds are transferred automatically to Company C once the tank is full. The twist, of course, is that all of this happens without any human intervention.
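Sketched below, purely for illustration, is an off-chain simulation of that flow: each agreement releases funds automatically once its condition is met. The company names, amounts and conditions are hypothetical, and a production system would run this logic as smart contracts on an actual blockchain platform rather than in local Python.

```python
class EscrowContract:
    """Toy stand-in for a smart contract: funds move only when a condition holds."""

    def __init__(self, payer, payee, amount, condition):
        self.payer, self.payee, self.amount = payer, payee, amount
        self.condition = condition      # callable returning True when fulfilled
        self.settled = False

    def settle(self, ledger):
        """Release funds automatically once the agreed condition is met."""
        if not self.settled and self.condition():
            ledger[self.payer] -= self.amount
            ledger[self.payee] += self.amount
            self.settled = True
        return self.settled

balances = {"company_a": 1000.0, "company_b": 0.0, "company_c": 0.0}
unloading_done = {"done": False}
tank_full = {"done": False}

contracts = [
    EscrowContract("company_a", "company_b", 120.0, lambda: unloading_done["done"]),
    EscrowContract("company_a", "company_c", 80.0, lambda: tank_full["done"]),
]

unloading_done["done"] = True   # Company B's drones finish unloading
tank_full["done"] = True        # Company C's station reports a full tank

for contract in contracts:
    contract.settle(balances)

print(balances)   # funds moved with no human in the loop
```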
Blockchain offers something that no other technology can: a truly connected world in which any autonomous vehicle could operate in any environment, consuming services around it as the need arises. True, artificial intelligence may have driven the automation revolution so far, but it’s blockchain technology that will be the ignition that makes it all work together.
Digitisation Falters
Pinpointing the reason for organisations’ growing failure to make the expected progress towards successful digitisation is a challenge. Choice fatigue, given the diversity of innovative technologies? Over-ambitious projects? An insistence by some IT vendors that digitisation demands high-cost, high-risk rip-and-replace strategies? In many ways, each of these issues is playing a role; but they are the symptoms, not the cause. The underpinning reason for the stuttering progress towards effective digitisation is that the outcomes being pursued are simply not aligned with the core purposes of the business.
Siloed, vertically focused digitisation developments typically focus on short-term efficiency and process improvements. They are often isolated, which means as and when challenges arise, it is a simple management decision to call time on the development: why persist with a digitisation project that promised a marginal gain in process efficiency at best, if it fails to address core business outcomes such as customer experience?
Accelerating the digitisation of an organisation requires a different approach and brave new thinking. While disruptive projects and strategies can prove threatening to existing business models, when executed correctly they can in fact create opportunities for new business models, exploration and a new approach to the market. By considering and focusing on the core aspects of the business, not only can opportunities to drive down cost be identified, but measurable value can also be delivered in line with clearly defined outcomes.
Reconsidering Digitisation
In many ways the IT industry is complicit in this situation: on one hand offering the temptation of cutting-edge and compelling new technology, from robots to augmented reality, and on the other insisting that digitisation requires multi-million pound investments, complete technology overhaul and massive disruption to day-to-day business. It is therefore obviously challenging for organisations to create viable, deliverable long-term digitisation strategies; and this confusion will continue if organisations focus on the novelty element and fail to move away from single, process led goals.
Achieving the true potential digitisation offers will demand cross-organisational rigour that focuses on the business’ primary objectives. Without this rigour and outcome led focus, organisations will not only persist in pointless digitisation projects that fail to add up to a consistent strategy but, concerningly, will also miss the opportunity to leverage existing infrastructure to drive considerable value.
Consider the impact of an IoT layer deployed across refrigeration assets throughout the supply chain to monitor and manage temperature. A process-based approach would focus on improving efficiency, and the project might use rapid access to refrigeration monitors and controls, in tandem with energy tariffs, to reduce energy consumption and cost. However, if such a project is defined only by this single, energy-reduction goal, then once the initial cost benefits have been achieved there is a risk that the lack of ongoing benefits will lead management to lose interest. Yet digitisation of the cold chain also has a fundamental impact on multiple corporate outcomes, from customer experience to increasing basket size and reducing wastage; it is, or should be, about far more than incremental energy cost reduction.
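As a simplified, hypothetical sketch of the energy-only version of such a project, the snippet below chooses a refrigeration action from the current temperature and tariff. The temperature band and tariff threshold are illustrative placeholders, not food-safety guidance or any vendor’s control logic.

```python
# Pre-cool when electricity is cheap, coast when it is expensive, and never
# drift outside the food-safe band. All values are illustrative.

SAFE_MIN, SAFE_MAX = 1.0, 4.0        # degrees C for chilled goods (example band)
CHEAP_TARIFF = 0.10                  # example £/kWh threshold considered "cheap"

def control_action(current_temp_c, tariff_per_kwh):
    if current_temp_c >= SAFE_MAX:
        return "cool"                # food safety always wins
    if tariff_per_kwh <= CHEAP_TARIFF and current_temp_c > SAFE_MIN:
        return "pre-cool"            # buy cooling while energy is cheap
    return "coast"                   # hold off and save energy

for temp, tariff in [(3.8, 0.22), (4.1, 0.22), (2.5, 0.08)]:
    print(f"{temp}C at {tariff}/kWh -> {control_action(temp, tariff)}")
```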
Supporting Multiple Business Outcomes
Incorrect cooling can have a devastating impact on food quality. From watery yogurt to sliced meat packages containing pools of moisture and browning bagged salad, the result is hardly an engaging brand experience. These off-putting appearances can threaten not only customer perception but also basket size, yet the acceptance of this inefficiency is evident in the excessive over-compensation throughout the supply chain. To ensure that the products presented to customers on the shelves are aesthetically appealing, retailers globally rely on overstocking, with a view to disposing of any poorly presented items. The result is unnecessary overproduction by producers and a considerable contribution to the billions of pounds of food wasted every year throughout the supply chain.
Where does this supply chain strategy leave the brand equity with regards to energy consumption, environmental commitment and minimising waste? Or, for that matter, the key outcomes of improving customer experience, increasing sales and reducing stock? It is by considering the digitisation of the cold chain with an outcomes based approach, a project that embraces not only energy cost reduction but also customer experience, food quality, minimising wastage and supporting the environment, that organisations are able to grasp the full significance, relevance and corporate value.
Furthermore, this is a development that builds on an existing and standard component of the legacy infrastructure. It is a project that can overlay digitisation to drive value from an essentially dull aspect of core retail processes and one that can deliver return on investment, whilst also improving the customer experience.
Reinvigorating Digitisation Strategies
If digitisation is to evolve from point deployments of mixed success towards an enduring, strategic realisation, two essential changes are required. Firstly, organisations need to consider what can be done with the existing infrastructure to drive value. How, for example, can digitisation be overlaid onto existing control systems to optimise the way car park lights are turned on and off, improving environmental brand equity and reducing costs? In the face of bright, shiny disruptive technologies, it is too easy to overlook this essential aspect of digitisation: the chance to breathe new life and value into existing infrastructure.
Secondly, companies need to determine how to align digitisation possibilities not with single process goals but with broad business outcomes – from a better understanding of macro-economic impacts, all the way back through the supply chain to the farmer to battle the global food crisis, to assessing the impact on the customer experience. And that requires collaboration across the organisation. By involving multiple stakeholders and teams, from energy efficiency and customer experience to waste management, a business not only gains a far stronger business case but a far broader commitment to realising the project.
Combining engaged, cross-functional teams with an emphasis on leveraging legacy infrastructure offers multiple business wins. It enables significant and rapid change without disruption; in many cases digitisation can be added to existing systems and rapidly deployed at a fraction of the cost proposed by rip and replace alternatives. Using proven technologies drives down the risk and increases the chances of delivering quick return on investment, releasing money that can be reinvested in further digital strategies. Critically, with an outcome-led approach, digitisation gains the corporate credibility required to further boost investment and create a robust, consistent and sustainable cross-business strategy.
The concept of DevOps dates back nearly 10 years now. During this time, a lot has changed. As DevOps has matured, we have seen many successful implementations, lessons learnt and copious amounts of data gathered. One thing remains unchanged to this day: DevOps is motivated by business results, without which there would be no reason to take risks. Typically, organisations are driven to improvement through one or more of four areas: time to market, quality and improved user experience, efficiency, or compliance. However, to achieve any of these objectives, DevOps requires changes to culture, process and tools.
As consumers have demanded more online services, DevOps has become integral to the digital transformation some companies have embarked on to stay competitive. For example, as Netflix evolved its business model from DVD rentals to producing its own shows and delivering a video-on-demand service, a lack of commercial tools to support its huge cloud infrastructure made it turn to open source solutions for help. It was here that the Simian Army was created, a suite of tools that stress-tested Netflix’s infrastructure so that its IT team could proactively identify and resolve vulnerabilities before users were affected.
A global study from Freeform Dynamics and CA Technologies reveals the benefits and drivers for DevOps implementation, and highlights how culture, process and technology must come together to enable DevOps success. In EMEA, the IT decision makers surveyed saw a 129% improvement in overall software delivery performance when practising cloud and DevOps together, compared with an improvement of just 81% when practising DevOps alone and 67% when leveraging cloud without DevOps.
By combining DevOps with cloud-based tools, organisations have experienced not only 99% better predictability of software performance but also a 108% improvement in customer experience over traditional software development and delivery models. A streamlined online customer experience is highly in demand, and respondents also cited 2.6 times faster software delivery, plus more than three times better cost control for the tools and services that DevOps teams actually use.
It is clear that modern development and delivery must be supported with DevOps. The following five components are essential in enabling companies to leverage new software to meet customer needs in any deployment:
In the current environment, being built to succeed means being built to change. Innovations supporting microservices and container-based architecture are driving overall modernisation with technologies such as machine learning and advanced analytics. Today, traditional software development proves obsolete versus cloud, DevOps, or – ideally – both combined. Together, cloud and DevOps are fuelling the modern software factory revolution.
Traditional maintenance methods are proving to be outdated. As we’ve learnt already, unexpected breakdowns in industrial organisations, such as power plants, computing facilities and manufacturing plants, can be very costly. Newer technologies, like machine learning, can revolutionise how industries keep running and avoid major failures in the future.
Prevention not maintenance is the solution
The role Asset Performance Management (APM) plays in maintaining industry is evolving fast, driven by the catalyst of low-touch machine learning. PwC predicts that, in the next five years, manufacturers’ adoption of machine learning and analytics to improve predictive maintenance will increase by 38%.
We are already seeing widespread integration of machine learning in APM, which marks a transition from estimated engineering and statistical models towards measuring asset behaviour patterns. Manufacturing facilities staff can now readily extract value from existing design and operations data to optimise asset performance.
We have not reached this stage overnight. Engineers have been applying performance models for years, but it is only in the last ten, with the emergence of the industrial internet of things (IIoT) and growing demand for smart devices and consumer-style applications at work, that modern state-of-the-art APM methodologies have really come into their own. Around 2007-2010, industrial technology began to update its offerings, with user interfaces incorporating low-touch applications and displays. At the same time, cross-industry initiatives resulted in open standards for connecting disparate systems and work process inter-operations.
Today, there’s a growing realisation that maintenance alone cannot solve the unexpected asset breakdown problem. Market-leading companies understand that they have gone as far as possible with traditional preventative maintenance techniques. Predictive maintenance represents the next frontier.
Era of Low-Touch Machine Learning
Data-intensive and complex environments in manufacturing industries are prime candidates to deploy the new advances in reliability management. Deployed coherently, with appropriate automation, machine learning enables greater agility and flexibility to incorporate current, historical and projected conditions from process sensors, as well as mechanical and process events. Systems become automatic and agile, flexible models emerge that learn and adapt to real data conditions, and incorporate all the nuances of real asset behaviour.
Data capacities and computational capabilities are so great that internal staff can now perform active, accurate management of individual processes and mechanical assets. This management capability can now be applied to combinations of assets — plant-wide, system-wide or across multiple locations.
This pivot in APM’s capabilities is timely for these industries. Process manufacturers are under economic pressure, and razor-thin operational margins are pushing process industry executives to look to APM for additional return on investment.
With that in mind, here are five machine learning best practices that drive state-of-the-art reliability management, across multiple process industries.
1. Data Collection and Preparation
Over the last two decades, every attempt at massive analysis of plant data collected from diverse sensor sources has run into issues around collection, timeliness, validation, cleansing, normalisation, synchronisation and structure.
Often such data preparation can consume 50-80 percent of the time needed to execute and repeat data mining and analysis. However, that process is essential to ensure appropriate and accurate data that allow end users to trust the analytical results. New advances in APM have automated most of the data preparation process to assure trust and reveal previously undiscovered opportunities with minimal user preparation.
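A small, hypothetical example of the preparation steps described above (deduplication, validation, normalisation to a fixed cadence and limited gap filling) using pandas; the sensor name, limits and readings are invented for illustration.

```python
import pandas as pd
import numpy as np

# Hypothetical raw readings from one sensor, with duplicates, a gap and an
# out-of-range spike - typical of the preparation work described above.
raw = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2018-06-01 00:00", "2018-06-01 00:01", "2018-06-01 00:01",
             "2018-06-01 00:04", "2018-06-01 00:05"]
        ),
        "bearing_temp_c": [61.2, 61.5, 61.5, 950.0, 62.1],   # 950 is a sensor glitch
    }
)

prepared = (
    raw.drop_duplicates(subset="timestamp")   # remove repeated transmissions
       .set_index("timestamp")
       .sort_index()
)
# Validate: mask physically impossible readings, then fill only short gaps.
prepared.loc[prepared["bearing_temp_c"] > 200, "bearing_temp_c"] = np.nan
prepared = (
    prepared.resample("1min").mean()          # normalise to a fixed cadence
            .interpolate(limit=2)             # bridge short gaps only
)
print(prepared)
```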
2. Condition-Based Monitoring
Once data is trustworthy, condition-based monitoring (CBM) can be applied. Plant conditions vary constantly, according to the mechanical performance of assets, variations in feedstock quality, weather conditions, and changes in production timelines and demand. Static models cannot work under such duress. In addition, focusing CBM on mechanical equipment behaviour can reveal only a small fraction of the true issues causing degradation and failure.1
To address the shortcomings of legacy CBM, new advances in APM deliver comprehensive monitoring of the mechanical and upstream and downstream process conditions that can lead to failure.
3. Work Management History
Problem identification, coding and a standard approach of problem resolution provide an important baseline for the exact failure point in the lifecycle of an asset. OEM data that may live in a big data solution can provide insight into process issues and outliers specific to the configuration and engineering within the plant process.
4. Predictive and Prescriptive Analytics
Clean data and CBM enable in-place predictive analytics: a process to interpret past behaviour and predict future outcomes. In contrast, using engineering and statistical models to estimate the future readings of sensors, and interpret variances from actual readings, is an error-prone technique. Top performers use inline, real-time analysis of the patterns of normal and failure behaviours of process equipment and machines.
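The snippet below is a deliberately simple, hypothetical illustration of that pattern-based approach: it learns a ‘normal’ band from a sensor’s own history and flags sustained departures rather than single noisy points. The thresholds and data are synthetic, and a real APM system would use far richer models.

```python
import numpy as np

def degradation_alerts(readings, train_size=200, z_threshold=4.0, run_length=5):
    """Learn the 'normal' band for a sensor from its own history, then flag
    sustained departures rather than single noisy points. Thresholds are
    illustrative, not tuned values."""
    baseline = np.asarray(readings[:train_size])
    mu, sigma = baseline.mean(), baseline.std()
    z = np.abs((np.asarray(readings[train_size:]) - mu) / sigma)
    alerts, run = [], 0
    for i, outside in enumerate(z > z_threshold):
        run = run + 1 if outside else 0
        if run == run_length:               # sustained deviation, not a blip
            alerts.append(train_size + i)
    return alerts

rng = np.random.default_rng(0)
healthy = rng.normal(50.0, 0.5, 300)        # e.g. vibration amplitude
drifting = healthy.copy()
drifting[260:] += np.linspace(0, 8, 40)     # gradual degradation begins
print(degradation_alerts(drifting))         # indices where drift is confirmed
```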
When performed correctly, predictive analytics can accurately portray asset lifecycle and asset reliability and focus on the early root cause of degradation. The insights available provide accurate, critical lead times. This allows time for decisions that can eliminate damage and maintenance or, at least, provide preparation time to reduce the time-to-repair and mitigate the consequences.
Best-in-class APM provides prescriptive advice based on root cause analysis and presents information on the approach that will proactively avoid process conditions that cause damage, and/or advise on the precise maintenance required.
5. Pool and Fleet Analytics
The next level of analytics allows patterns discovered on one asset in a pool or fleet to be shared, enabling the same safety and shutdown protection for all equipment. Once deployed, companies can rapidly scale solutions from a single unit to multiple sites. Rolling information up from disparate local systems into one larger model provides asset performance comparisons across sites and plants, creating common baselines that highlight areas for improvement.
Conclusion
The risk of failure as a result of asset degradation has never been higher. Not only does unplanned downtime come with significant monetary losses, but it can also affect business reputation and customer loyalty. New technologies that address these issues are changing today’s manufacturing sector. Investing in low-touch machine learning capabilities means building on previous maintenance practices to recognise small issues before they become big problems and to improve operational integrity overall.
1 Webinar “Improve Reliability of Process Assets with Prescriptive Analytics: Get Results Today featuring ARC Advisory Group,” 14 June, 2017.
Until recently, the Gi-LAN connecting the EPC (Evolved Packet Core) to the internet was considered to be the most vulnerable part of the service provider network and was protected via Gi-Firewall and anti-DDoS systems. The rest of the EPC links were considered difficult targets for hackers because advanced vendor-specific knowledge was required for a successful attack. Since the typical hacker prefers a soft target, defensive measures weren’t a priority for developers or carriers. Network complexity was a defence in itself.
However, the requisite know-how to attack EPC from other interfaces is now becoming much more common. The mobile endpoints are being infected at an alarming rate, and this means that attacks can come in from the inside of the network. The year 2016 saw a leap in malware attacks, including headline-makers Gooligan, Pegasus, and Viking Horde. Then the first quarter of 2017 saw a leap in mobile ransomware attacks, which grew by 250 percent.
The need for securing the EPC is tied to advances like LTE adoption and the rise of IoT, which are still gaining speed. LTE networks grew to 647 commercial networks in 2017, with another 700 expected to launch this year. With the adoption of LTE, IoT has become a reality—and a significant revenue stream for enterprises, creating a market expected to reach £400 billion by 2022. The time to take a holistic approach to securing the service provider networks has arrived.
There are three primary data paths connecting mobile service providers to the outside world. The first of these is a link to the internet through S/Gi LAN. Next is a link to a partner network that serves roaming users. Last, there is a link for traffic coming from towers. The security challenges and the attack vectors are different on each link. Until recently, the link to the internet was the most vulnerable point of connectivity. DDoS attacks frequently targeted the service provider’s core network on the Gi Link. These attacks were generally volumetric in nature and were relatively easy to block with highly scalable firewalls and DDoS mitigation systems.
The Expanding Attack Surface
The threat landscape is rapidly changing, and attacks can come from other points of connectivity. This has been theoretical until recently; while numerous academic research papers have been published in the past decade suggesting that attacks from partner networks or radio access networks (RANs) were a possibility, those threats are no longer merely an intellectual exercise: they are real. At the same time, the rapid rise of IoT is exposing the threat of malicious actors taking control and weaponising devices against a service provider.
Multiple botnets, such as WireX and its variants, have been found and taken down. So far, these attacks have targeted hosts on the internet, but it’s just a matter of time until they start attacking Evolved Packet Core (EPC) components.
There are multiple weak points in EPC and its key components. Components that used to be hidden behind proprietary and obscure protocols now reside on IP, UDP, or SCTP, which can be taken down using simple DoS attacks.
The attack surface is significantly larger than it used to be, and legacy approaches to security will not work.
A DDoS attack, such as a signalling storm, against an individual entity can be generated by a malicious actor or even a legitimate source. For example, a misbehaving protocol stack in an IoT device can cause an outage by generating a signalling storm.
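A minimal sketch of how a signalling-storm suspect list might be derived from raw attach-request events; the device identifiers, window and threshold are assumptions rather than recommended values, and a real system would learn baselines per device class.

```python
from collections import Counter

def storm_suspects(attach_requests, window_s=60, threshold=30):
    """Flag devices sending an abnormal number of signalling (e.g. attach)
    requests inside a time window."""
    latest = max(ts for _, ts in attach_requests)
    recent = Counter(dev for dev, ts in attach_requests if latest - ts <= window_s)
    return {dev: n for dev, n in recent.items() if n >= threshold}

# Hypothetical (device_id, unix_timestamp) pairs from a signalling probe.
requests = [("iot-cam-88", 1000 + i) for i in range(45)] + \
           [("phone-12", 1000), ("phone-12", 1030)]
print(storm_suspects(requests, threshold=30))    # {'iot-cam-88': 45}
```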
Securing the SP Network
To secure the SP network, businesses must improve their defences against DDoS attacks. The best way to achieve this is by utilising an S/Gi firewall solution and a DDoS mitigation solution. A threat protection system (TPS) should also be deployed across the enterprise’s on-premise and cloud IT security infrastructures. With all of these solutions in place, it becomes easier to mitigate multi-terabit attacks.
Utilising powerful tools that can improve these defences can help detect and mitigate, or stop, a number of advanced attacks aimed specifically at the EPC. These tools should also give security professionals granular deep packet inspection to protect against user impersonation by means of spoofing, network impersonation, and signalling attacks.
To summarise: in addition to mitigating and stopping terabit-scale attacks coming from the internet and utilising stateful firewall services, it is imperative for enterprises to strengthen their security measures with full-spectrum security that protects the whole of the business’s infrastructure.
To support these objectives, many organisations are investing to build a strong DevOps function, which has responsibility for the whole development process. And these teams need to maintain an operation that works at optimal performance: to have the capacity to change rapidly, to experiment, to maintain quality and to continue to innovate.
Dev teams must be committed to ongoing self-evaluation and embrace the need to change when necessary. So, let’s look at how teams can keep it fresh and maintain the highest performance levels.
For us, the key to success is to look at three primary areas: people, processes and technology.
People
The developer’s role is constantly evolving. Where testers, developers and quality assurance (QA) teams once had distinctly different jobs to do, the functions are quickly merging and developer teams are now tasked both with making apps and services and ensuring they are of the highest quality.
In this new world, skills are crucial. And to mature DevOps practices, the responsibilities and skills of developers need to match the product team objectives.
It’s important that organisations are committed to upskilling their developers - and teams must continuously sharpen their own skillsets too, towards the goal of automating key activities during the DevOps pipeline. As always, speed is crucial and teams must focus their efforts on improving the processes that are slowing them down. For most, this means testing and test automation.
To achieve better testing and test automation, the savviest firms recognise the need to tackle skills gaps. For some, this means facilitating the mentoring of testers by highly qualified developers. For others, it means considering a change to software development practices to include ATDD (Acceptance Test Driven Development). ATDD promotes defining tests as code is written: test automation becomes a core practice during feature development rather than after it. Depending on team skills, implementing BDD (Behaviour Driven Development), which often implements test automation with simple, English-like syntax, serves less technical teams extremely well. There are bound to be blind spots between developer, business and test personas, and choosing development practices matched to team skills can contribute to accelerating development velocity.
Leadership is another critical aspect of success in DevOps and continuous testing. Diverse teams and personas call for strong leadership as a unifying force, and a leader’s active role in effecting change is crucial. Of course, part of leadership is to enforce stringent metrics and KPIs that help to keep everyone on track. The continuous measurement of both development productivity (e.g. the number of user stories per sprint) and quality (e.g. the number of defects per release) is imperative. Teams must monitor the number of broken builds, together with the overall number of manual and automated tests performed, in order to keep quality high.
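For illustration, here is a small Python sketch of how such KPIs could be computed once the raw figures are pulled from the CI server and issue tracker; the sprint data and metric names are hypothetical, not a prescribed set.

```python
# Hypothetical per-sprint figures; in practice these would be exported from
# the CI server and issue tracker rather than hard-coded.
sprints = [
    {"name": "sprint-41", "stories_done": 23, "defects_found": 6,
     "broken_builds": 4, "automated_tests": 1210, "manual_tests": 85},
    {"name": "sprint-42", "stories_done": 27, "defects_found": 9,
     "broken_builds": 7, "automated_tests": 1298, "manual_tests": 80},
]

def sprint_kpis(sprint):
    """Derive simple productivity and quality indicators for one sprint."""
    total_tests = sprint["automated_tests"] + sprint["manual_tests"]
    return {
        "velocity": sprint["stories_done"],
        "defects_per_story": round(sprint["defects_found"] / sprint["stories_done"], 2),
        "automation_ratio": round(sprint["automated_tests"] / total_tests, 2),
        "broken_builds": sprint["broken_builds"],
    }

for s in sprints:
    print(s["name"], sprint_kpis(s))
```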
Process
Teams must always work to ‘clean up’ their code and to do it regularly. That includes more than just testing. Code refactoring - the process of restructuring computer code - is important for optimal performance - as is continual scanning for security holes. And it also includes more than just making sure production code is ‘clean’. It’s crucial to ‘treat test code as production code’ and maintain that too. Good code is always tested and version controlled.
Code can be cleaned and quality ensured through several key activities. The first is code reviews and code analysis: making sure code is well maintained and that there are no memory leaks. Developers must be committed to eliminating and fixing code that slows them down, asking themselves ‘is our testing code blocking the continuous integration workflow?’ and ‘what are the common recurring quality issues throughout our CI jobs?’. Finding these issues and addressing them as an ongoing process is the key to success.
Using dashboards, analytics and other visibility enablers can help power real time decision making which is based on concrete data - and which can help teams deliver quicker and more accurately.
Finally, continuous testing by each feature team is important. Often, a team is responsible for specific functional components along with their testing, so testing code merges locally is key to detecting issues earlier. Only then can teams be sure that, once a merge happens into the main branch, the entire product is not broken, and that the overall quality picture is kept consistent and visible at all times.
Technology
So, releasing a product to market with the quality and the velocity needed to make it a success requires teams of people that have varying skill sets and that use the right processes for both their objectives and their abilities. When there is a mismatch between the technology and the processes or people, they simply won’t be able to meet their objectives.
A primary technology in development is the lab itself. A test environment is the foundation of the entire testing workflow, including all continuous integration activities. It perhaps goes without saying that when the lab is not available or unstable, the entire process breaks. The test environment supports all tests - mobile, desktop web and others - and is crucial to ensuring quality.
For many, the requirement for speed and quality means a shift to open-source mobile test automation tools. But, as with many free and open-source software markets, a plethora of tools complicates the selection process. Choosing an ideal framework isn’t easy, and there are material differences between the needs of developers and engineers, which must be catered for.
A developer’s primary objective is fast feedback on their localised code changes. To get this, they need to run unit tests from a local development environment. Frameworks like Espresso, XCUITest and PhantomJS or Headless Chrome are good options for this. A test engineer, on the other hand, concentrates on the integration of various developers’ work into a complete packaged product, and for that, their end-to-end testing objectives require different frameworks, like Appium, Selenium or Protractor. And production engineers introduce yet another primary focus: continuously executing end-to-end tests to identify and resolve service degradations before the user experience is impacted. Frameworks such as Selenium or Protractor are also relevant here, but integration with monitoring and alerting tools becomes essential to fit into their workflow.
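As a hedged illustration of the end-to-end category, here is a minimal Selenium (Python) check of the kind a test engineer might automate; the URL, element id and assertion are placeholders, and the example assumes a local ChromeDriver is available.

```python
# A minimal end-to-end check: load a page and verify a key element renders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # assumes a local ChromeDriver is installed
try:
    driver.get("https://staging.example.com/checkout")          # placeholder URL
    total = driver.find_element(By.ID, "basket-total").text     # placeholder element id
    assert total != "", "basket total should be rendered for the user"
finally:
    driver.quit()
```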
With such different needs, many organisations opt for a hybrid model, where they use several frameworks in tandem.
DevOps tool chains serving multiple roles are often challenged to provide a single integrated view on quality. Tools that deliver quality visibility are increasingly required to handle a big data challenge to help teams understand whether they are on track at any given time. Dealing with various CI jobs, builds, test integration suites, and large-scale executions on multiple platforms is hard. Having the right technology in place is a great step toward efficient DevOps pipeline management.
-------
So, ultimately, we believe that for teams to perform optimally they must continually evaluate the status quo, be committed to ‘spring cleaning’ and stay fresh. This encompasses everything from cleaning code, to re-evaluating team skills, to choosing the right technology to meet current and changing needs. Only by doing this can teams achieve accelerated delivery while ensuring quality - crucial in today’s hyper-competitive landscape.