There’s some brilliant content in the November issue of Digitalisation World, covering everything from data centres, IT security and virtualisation, right through to IoT, DevOps and automation. The individual articles are all worth reading for the invaluable insights they provide on so many aspects of the IT side of business in the digital age, but what struck me most as I was putting together this issue is the sheer breadth of technologies now becoming part of the everyday landscape, and the need to strike a delicate balance between the old and the new to achieve the optimum IT environment required to underpin a modern organisation.
So, anyone tasked with putting together some kind of a digital business strategy needs to know about approximately 15-20 IT-related technologies. That’s to say, this individual has to understand what each of these disciplines can do for the company, and then figure out how they will all work together to provide the best possible overall IT infrastructure solution. And, of course, this landscape is forever on the move – no one can say with any degree of certainty what the IT and/or business world will look like in three or four years’ time.
There’s a world of exciting possibilities out there, but I do not envy the company board members who are charged with the responsibility of putting together a digitalisation plan. That’s not to say that sitting on your hands and doing nothing is an option, but the ‘think twice, act once’ maxim has never been more true. And, however much your IT specialists resist, you need to ensure that, just as the end goal of IT is to have one flexible pool of compute resources, you can create one integrated pool of IT expertise across the organisation, whereby everyone looks to help everyone else and doesn’t seek to protect their own specialist areas.
IT spending in EMEA is projected to total $1 trillion in 2018, an increase of 4.9 percent from estimated spending of $974 billion in 2017, according to the latest forecast by Gartner, Inc.
In 2017, however, all categories of IT spending in EMEA underperformed global averages (see Table 1). Currency effects played a big part in the weakness in 2017, and will also contribute to the strength forecast in 2018.
Table 1. EMEA IT Spending Forecast (Millions of U.S. Dollars)
Category | 2017 Spending | 2017 Growth (%) | 2018 Spending | 2018 Growth (%)
Data Center Systems | 44,497 | 1.1 | 45,890 | 3.1 |
Enterprise Software | 96,091 | 7.6 | 106,212 | 10.5 |
Devices | 167,579 | 2.6 | 174,246 | 4.0 |
IT Services | 269,059 | 2.5 | 286,162 | 6.4 |
Communications Services | 396,419 | -0.6 | 409,158 | 3.2 |
Overall IT | 973,645 | 1.6 | 1,021,668 | 4.9 |
Source: Gartner (November 2017)
"The U.K. has EMEA’s largest IT market and its decline of 3.1 percent in 2017 impacts the forecast heavily," said John-David Lovelock, research vice president at Gartner. "Weak Sterling and political uncertainty since Brexit are reducing U.K. IT spending in 2017, while other major IT markets in EMEA grew steadily."
Another significant currency effect is the rapid appreciation of the Euro against the U.S. Dollar, which provides an incentive for Eurozone countries to defer IT spending to 2018 where possible, in anticipation of even lower prices in U.S. Dollars.
"However, there is more to the recovery in 2018 than just currency effects," added Mr. Lovelock. "Strong demand in the enterprise software and IT services categories across EMEA hint at significant shifts in IT spending patterns."
"The forecast highlights that businesses are broadly reducing spending on owning IT hardware, and increasing spending on consuming IT as-a-service," said Mr. Lovelock. "In the total IT forecast the business trends are masked somewhat by consumer spending, but when we look at enterprise-only spending the new dynamics between the categories are much clearer."
Significant Shifts in Enterprise IT Spending
EMEA enterprise IT spending in 2017 was weaker than the overall IT spending forecast, declining 1.4 percent. The only category predicted to show enterprise spending growth in 2017 is the enterprise software market, at 3.2 percent.
"In 2017, we’re seeing a pause in EMEA enterprise spending due to the switch to as-a-service offerings gaining momentum," said Mr. Lovelock. "Among the spending rebounds in 2018, however, we expect lagging markets. The data center, devices and communication services categories are all on pace to decline 3 percent or more in 2017. Despite improvements in 2018, spending on servers, storage, network equipment, printers, PCs, mobile devices — and even hardware support — won’t recover to 2016 levels."
In 2018, total enterprise spending in EMEA is on pace to grow 2.8 percent. All categories of enterprise IT spending will return to growth in 2018, but only IT services and software will grow strongly, at 4 percent and 7.6 percent, respectively. Enterprise spending on devices and communications services continues to fall behind in 2018, growing at 2 percent or lower and thus failing to recoup the losses of 2017.
Gartner’s recent public cloud forecast further underlines this change in spending as businesses increasingly adopt cloud models for efficiency and agility. In doing so, they also shift their IT spending toward operational expenditure (opex) service-based models.
"The move to cloud services and opex spending on IT should serve to stabilize the growth in overall IT spending in EMEA in 2018 and beyond. We expect spending will spread out more evenly with fewer spikes of capital investment on hardware," said Mr. Lovelock. "In both enterprise and overall IT spending forecasts, worldwide and in EMEA, we forecast IT spending from 2019 through 2021 will remain close to a 3 percent growth rate each year."
International Data Corporation (IDC) has published its worldwide information technology (IT) industry predictions for 2018 and beyond.
IDC has spent the past ten years chronicling the emergence and evolution of the 3rd Platform, built on cloud, mobile, big data/analytics, and social technologies and further enabled by Innovation Accelerators such as the Internet of Things (IoT) and augmented and virtual reality (AR/VR). Over the past several years, IDC has focused on the digital transformation (DX) that enterprises must undergo as they adopt these technologies and the DX economy that is emerging from the innovative use of these technologies.
Today, the 3rd Platform story is well into its second chapter and moving into full stride, unleashing "multiplied innovation" through platforms, open innovation ecosystems, massive data sharing and modernization, hyper-agile application deployment technologies, an expanding digital developer population, the blockchain-fueled rise of digital trust, richer artificial intelligence (AI) solutions and services, deeper human/digital interfaces, and a much more diverse cloud services world.
"We said last year that the rising digital economy means all enterprises must operate like 'digital native' enterprises, rearchitecting their operations around large-scale digital innovation networks and becoming, in effect, a new corporate species," said Frank Gens, Senior Vice President and Chief Analyst at IDC. "While some of our predictions for 2018 and beyond continue to lay out a strategic blueprint for enterprises on their digital transformation journey, others introduce critical new building blocks for becoming digital native enterprises."
A closer look at IDC's top ten worldwide IT industry predictions reveals the following:
1. By 2021, at least 50% of global GDP will be digitized, with growth in every industry driven by digitally-enhanced offerings, operations, and relationships. This is the ticking clock that is (or should be) driving every organization to move quickly along its digital transformation journey. Organizations that are slow to digitize their offerings and operations will find themselves competing for a progressively shrinking share of their market segment's opportunities. And the time frame is short: organizations must make significant progress in transforming to a digital native model over the next three years.
2. By 2020, 60% of all enterprises will be in the process of implementing a new IT foundation as part of a fully articulated organization-wide DX platform strategy. IDC defines this new "DX Platform" as the future enterprise IT architecture that will enable the rapid creation of digital products, services, and experiences, while at the same time aggressively modernizing the internal "intelligent core." The DX Platform will be externally facing, API enabled, and data driven. Organizations that can re-architect for scale on a DX Platform will be the most likely to emerge as digital native enterprises in the near term.
3. By 2021, enterprise spending on cloud services and infrastructure will be more than $530 billion and over 90% of enterprises will use multiple cloud services and platforms. Adopting the cloud is no longer primarily about economics and agility – it is becoming enterprises' most critical and dependable source of sustained technology innovations. Cloud resource management and integration of resources across multiple cloud platforms will become critical capabilities for IT organizations on their DX journey. The cloud environment will also expand to the edge, with more than 50% of companies in consumer-facing industries spending aggressively to upgrade their cloud resources in edge locations.
4. By 2019, 40% of digital transformation initiatives will use AI services; by 2021, 75% of commercial enterprise apps will use AI. An "AI war" is looming as the major public cloud service providers offer an ever-expanding variety of AI-powered services. Digital services and apps without "AI inside" will quickly fall behind competitors' pace of innovation. Developers will be the critical population to watch as IT organizations must acquire AI engineers and data scientists to support the large number of DX initiatives that are AI dependent.
5. By 2021, enterprise apps will shift toward hyper-agile architectures, with 90% of applications on cloud platforms (PaaS) using microservices and cloud functions, and over 95% of new microservices deployed in containers. As enterprises create new services for the digital economy, they will need to deploy a rapidly increasing portion of them in an entirely new way: taking advantage of the scalability, flexibility, and portability of an emerging set of "hyper-agile" application deployment technologies and approaches. These new foundation technologies and approaches will enable dramatic growth (10x) in the number of applications and microservices, driven by a new generation of hyper-verticalized digital solutions.
6. By 2020, human-digital interfaces will diversify, with 25% of field-service technicians and information workers using augmented reality and nearly 50% of new mobile apps using voice as the primary interface. IDC believes augmented reality (AR) will revolutionize the role of field service worker by offering a rich set of options including image overlay, access to technical updates, and visual communication with supervisors and subject matter experts. Similarly, AR will fundamentally change the way information workers collaborate and interact with digital information. And voice is already well on its way to becoming the default interface for a wide range of enterprise smartphone apps.
7. By 2021, at least 25% of the Global 2000 will use blockchain services as a foundation for digital trust at scale. At the core of blockchain is distributed ledger technology (DLT) that offers the potential to support digital trust at scale by providing one version of the truth (secure information), transfer of value (secure ownership records), faster settlements, and smart contracts (automated buying and selling). IDC expects blockchain ledgers and interconnections to evolve at a slow and steady pace over the next 36 months. Early adopters will have the opportunity to establish very strong positions in the ecosystem, while slower adopters will not be entirely boxed out but should be exploring use cases.
8. By 2020, 90% of large enterprises will generate revenue from data-as-a-service.
Enterprises' ability to create, derive, and manage high-value data for their own use – and gain financial leverage by packaging some of that data for the marketplace – will quickly become an important metric in enterprise valuations. Similarly, relevant and high-value data will be a key component determining an enterprise's value and power in the world of digital developers and ecosystems. Establishing a critical mass of external data feeds will also be a critical ingredient for any AI-based digital services and solutions.
9. Improvements in simple (low-code/no-code) development tools will dramatically expand the number of non-tech developers over the next 36 months.
Low-code/no-code software accelerates the development process and gives business stakeholders an enriched toolbox for using technology to solve business problems. This means that an expanding population of business (non-IT)-rooted developers will be able to create increasingly sophisticated digital innovations. Successful enterprises will tap into that potential by maximizing access to these tools and propagating an "everyone's a developer" culture.
10. By 2021, more than half of the Global 2000 will see an average of one third of their digital services interactions come through their open API ecosystems, up from virtually 0% in 2017.
The creation of open APIs – and developer ecosystems around them – will allow enterprises to massively scale distribution of their digital platforms and services through third-party digital innovators, accelerating adoption and revenue. Over the next 36 months, IDC expects DX leaders will start to put much greater strategic focus and investment into their open API-based external developer ecosystems and distribution networks. Companies that fail to take this step will be on a fast track to marginalization in the DX economy.
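To make the open API idea concrete, here is a minimal sketch in Python of a service exposing a small, versioned endpoint that third-party developers could build against. The route, port and payload are invented for illustration and are not drawn from IDC's research.

```python
# Minimal, hypothetical sketch of exposing part of a digital service through an open API.
# Uses only the Python standard library; the route and payload are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class OpenAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A single public, versioned endpoint that partner developers could integrate against.
        if self.path == "/api/v1/products":
            body = json.dumps([{"sku": "1001", "name": "Example product"}]).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "Unknown endpoint")


if __name__ == "__main__":
    # A third-party developer would call http://localhost:8080/api/v1/products
    HTTPServer(("localhost", 8080), OpenAPIHandler).serve_forever()
```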
Fear of cyber attack and new European legislation are prompting customers to increasingly seek support through managed services, driving significant growth in the European channel. This is according to a new report by 2112 Group on behalf of Barracuda MSP, entitled The State of European Managed Services.
Some of the top-line findings:

Security is now the foundation of the European managed services sector, creating significant revenue for the channel. Although fear of cyber attack and concerns around GDPR are prevalent, so is the opportunity to capitalise, as managed services now form the basis of the channel’s underlying business model. More than three-quarters (76%) of respondents said recurring revenue services (managed services) were the top contributor to their growth in 2016. Twenty per cent of European solution providers offer some form of managed security services. More significantly, another quarter are planning to add security services within the next 12 months, and 37 per cent are exploring security as an expansion option. Conversely, fewer than one in five solution providers have no plans to offer security services.

We found most providers of managed services with security in their portfolio offer firewall management, data loss prevention (DLP) and endpoint security services. These align with just the basic needs of Europe’s staunch data protection regulations. Few companies yet offer more advanced technologies such as vulnerability management, encryption, and security information and event management (SIEM) services.

In other words, only a small number of service providers are offering comprehensive suites of security technologies; most appear to provide point solutions that are tangential to their core managed services or product offerings, and remain largely focused on addressing only immediate needs. Solution providers don’t seem eager to create end-to-end security services that address a multitude of technologies and threat mitigation needs, which could prove a missed opportunity.

Nearly all (91%) channel partners are taking advantage of the market conditions, offering some form of managed services and earning at least 10% of their revenue from recurring revenue engagements. With security and cloud services as the underlying catalysts, this study is evidence that the recurring revenue model will eventually permeate all aspects of the European channel. Whilst the market is rife with opportunity, solution providers that don’t soon have solid recurring revenue models in place will find themselves at a competitive disadvantage. The survey notes that average channel company growth is reaching up to 20% annually; the businesses in this survey grew, on average, 17.5% in 2016.
According to the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of infrastructure products (server, storage, and Ethernet switch) for cloud IT, including public and private cloud, grew 25.8% year over year in the second quarter of 2017 (2Q17), reaching $12.3 billion.
Public Cloud infrastructure revenue grew 34.1% year over year and now represents 33.5% of total worldwide IT infrastructure spending at $8.7 billion, up from a 27.0% share one year ago. Private Cloud revenue reached $3.7 billion, an annual increase of 9.9%. Total worldwide cloud IT infrastructure revenue has almost tripled in the last four years, while revenue from traditional (non-cloud) IT infrastructure continues to decline and is down 3.8% from a year ago, although it still represents 52.4% of total worldwide IT infrastructure revenue, at $13.6 billion for the quarter. Public Cloud now represents 70.2% of total cloud IT infrastructure revenue. The market with the highest growth in the public cloud infrastructure space was Enterprise Storage Systems, with revenue up 30.4% compared to the same quarter of the previous year and making up over a third of public cloud revenue. Server and Ethernet Switch public cloud IT infrastructure revenues were up 24.6% and 26.8%, respectively. Private cloud infrastructure spending continues to be driven by the server market, which has accounted for nearly 60% of revenue in that space for the past 18 quarters.
"The strength in public cloud growth continued at an accelerated pace through the first half of 2017," said Kuba Stolarski, research director for Computing Platforms at IDC. "We have already reported that most of this growth is being driven by Amazon. However, it is important to remember that many of the other hyperscalers – Google, Facebook, Microsoft, Apple, Alibaba, Tencent, and Baidu – are preparing for their own expansions and Skylake/Purley refreshes of their infrastructure. At the same time, IDC is still seeing steady growth in the lower tiers of public cloud, and continued growth in private cloud on a worldwide scale. In combination, these infrastructure growth segments should more than offset the declines in traditional deployments for the remainder of 2017 and well into next year."
Except for Latin America, where revenue declined 13.1% from a year ago, all regions of the world experienced double-digit revenue growth in the cloud IT infrastructure space compared to last year. Asia/Pacific (excluding Japan) and Western Europe led growth with rates of 30.5% and 33.4%, respectively. Canada (25.1%), the Middle East & Africa (28.4%) and the United States (24.8%) had annual growth in the mid-twenties, while Central and Eastern Europe (16.9%) and Japan (10.4%) grew more slowly but still at double-digit rates.
Top 5 Companies, Worldwide Cloud IT Infrastructure Vendor Revenue, Q2 2017 (revenue in US$ millions; excludes double counting of storage and servers)
Vendor Group | 2Q17 Revenue (US$M) | 2Q17 Market Share | 2Q16 Revenue (US$M) | 2Q16 Market Share | 2Q17/2Q16 Revenue Growth |
1. Dell Inc* | $1,456 | 11.8% | $1,534 | 15.7% | -5.0% |
1. HPE/New H3C Group* ** | $1,365 | 11.1% | $1,437 | 14.7% | -5.0% |
3. Cisco | $1,014 | 8.2% | $888 | 9.1% | 14.3% |
4. Huawei* | $380 | 3.1% | $292 | 3.0% | 30.2% |
4. NetApp* | $314 | 2.5% | $254 | 2.6% | 23.6% |
4. Inspur* | $275 | 2.2% | $189 | 1.9% | 45.8% |
ODM Direct | $5,439 | 44.1% | $3,321 | 33.9% | 63.8% |
Others | $2,088 | 16.9% | $1,886 | 19.2% | 10.7% |
Total | $12,332 | 100.0% | $9,800 | 100.0% | 25.8% |
Source: IDC Worldwide Quarterly Cloud IT Infrastructure Tracker, Q2 2017 (October 2017)
Notes:
* IDC declares a statistical tie in the worldwide cloud IT infrastructure market when there is a difference of one percent or less in the vendor revenue shares among two or more vendors.
** Due to the existing joint venture between HPE and the New H3C Group, IDC will be reporting external market share on a global level for HPE as "HPE/New H3C Group" starting from Q2 2016 and going forward.
IDC's Worldwide Quarterly Cloud IT Infrastructure Tracker is designed to provide clients with a better understanding of what portion of the server, disk storage systems, and networking hardware markets are being deployed in cloud environments. This tracker will break out vendors' revenue by the hardware technology market into public and private cloud environments for historical data and provide a five-year forecast by the technology market.
According to a new study from International Data Corporation (IDC), IT spend in Western Europe will total $453.8 billion in 2017, a 2.7% increase compared with 2016. IT spending will continue to grow at a five-year CAGR of 2.0% through 2021. Investments in 3rd Platform solutions and Innovation Accelerator technologies — such as augmented reality/virtual reality (AR/VR), artificial intelligence (AI) and cognitive, robotics, 3D printing, and the Internet of Things (IoT) — will drive demand as companies strive to innovate, improve the customer experience, and streamline business processes.
In 2017, consumer, banking, and discrete manufacturing will be the vertical markets with the biggest IT spending, accounting for over a third of overall Western European spend. IDC forecasts that retail, professional services, and telecommunications will be the fastest growing markets in 2017, and they will continue to lead in 2018. In the long term, professional services, retail, and process manufacturing will generate the fastest 2017–2021 CAGR.
"Traditional technologies such as mobility, social media, cloud, and Big Data helped companies to introduce change and move from a traditional approach to a more digitized one. With next-generation technologies, companies will go the extra mile, move one step ahead of the competition, and fully embrace digital transformation. This will be a win-win game, from which both businesses and their customers will benefit as companies introduce a more advanced approach into their businesses, optimizing processes and bringing extreme automation. On the other hand, the large amount of data that customers produce will allow companies to understand what they must focus on. This will result in more personalized products or services and increased customer satisfaction," said Andrea Minonne, research analyst, IDC European Industry Solutions, Customer Insight, and Analysis.
The worldwide public cloud services market revenue is projected to grow 18.5 percent in 2017 to total $260.2 billion, up from $219.6 billion in 2016, according to Gartner, Inc.
"Final data for 2016 shows that software as a service (SaaS) revenue was far greater in 2016 than expected, reaching $48.2 billion," said Sid Nag, research director at Gartner. "SaaS is also growing faster in 2017 than previously forecast, leading to a significant uplift in the entire public cloud revenue forecast."
SaaS revenue is expected to grow 21 percent in 2017 to reach $58.6 billion (see Table 1). The acceleration in SaaS adoption can be explained by providers delivering nearly all application functional extensions and add-ons as a service. This appeals to users because SaaS solutions are engineered to be more purpose-built and deliver better business outcomes than traditional software.
"Strategic adoption of platform as a service (PaaS) offerings is also outperforming previous expectations, as enterprise-scale organizations are increasingly confident that PaaS will be their primary form of application development platform in the future," said Mr. Nag. "This accounts for the remainder of the increase in this iteration of Gartner's public cloud services revenue forecast."
The highest revenue growth will come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 36.6 percent in 2017 to reach $34.7 billion.
Table 1. Worldwide Public Cloud Services Revenue Forecast (Billions of U.S. Dollars)
Segment | 2016 | 2017 | 2018 | 2019 | 2020 |
Cloud Business Process Services (BPaaS) | 39.6 | 42.2 | 45.8 | 49.5 | 53.6 |
Cloud Application Infrastructure Services (PaaS) | 9.0 | 11.4 | 14.2 | 17.3 | 20.8 |
Cloud Application Services (SaaS) | 48.2 | 58.6 | 71.2 | 84.8 | 99.7 |
Cloud Management and Security Services | 7.1 | 8.7 | 10.3 | 12.0 | 13.9 |
Cloud System Infrastructure Services (IaaS) | 25.4 | 34.7 | 45.8 | 58.4 | 72.4 |
Cloud Advertising | 90.3 | 104.5 | 118.5 | 133.6 | 151.1 |
Total Market | 219.6 | 260.2 | 305.8 | 355.6 | 411.4 |
Source: Gartner (October 2017)
Although public cloud revenue is growing more strongly than initially forecast, Gartner still expects growth to even out from 2018 onwards. This stabilization reflects the increasingly mainstream status and maturity that public cloud services will gain within a wider IT spending mix.
"As of 2016, approximately 17 percent of the total market revenue for infrastructure, middleware, application and business process services had shifted to cloud," said Mr. Nag. "Through 2021, this will increase to approximately 28 percent."
In terms of vendor share, Gartner expects 70 percent of public cloud services revenue to be dominated by the top 10 public cloud providers through 2021. "In the IaaS segment, Amazon, Microsoft and Alibaba have already taken strong positions in the market," said Mr. Nag. "In the SaaS and PaaS segments, we are seeing cloud's impact driving major software vendors such as Oracle, SAP and Microsoft from on-premises, license-based software to cloud subscription models."
Software-defined storage (SDS) is one of several new technologies that are rapidly penetrating the IT infrastructure of enterprises and cloud service providers. SDS is gaining traction because it meets the demands of the next-generation datacenter much better than legacy storage infrastructure. As a result, International Data Corporation (IDC) forecasts the worldwide SDS market will see a compound annual growth rate (CAGR) of 13.5% over the 2017-2021 forecast period, with revenues of nearly $16.2 billion in 2021.
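As a quick illustration of how a compound annual growth rate translates into the revenue figure quoted above, the short Python sketch below works backwards from the forecast. The 13.5% CAGR and the roughly $16.2 billion 2021 figure come from the text; reading the 2017-2021 forecast period as five growth steps from a 2016 base is an assumption made purely for the arithmetic.

```python
# CAGR relationship: revenue_end = revenue_start * (1 + cagr) ** years
cagr = 0.135          # 13.5% compound annual growth rate quoted for the SDS market
revenue_2021 = 16.2   # forecast 2021 SDS revenue, in billions of US dollars
years = 5             # assumed: five annual growth steps from a 2016 base to 2021

implied_base = revenue_2021 / (1 + cagr) ** years
print(f"Implied starting revenue: ~${implied_base:.1f} billion")  # roughly $8.6 billion
```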
Enterprise storage spending has already begun to move away from hardware-defined, dual-controller array designs toward SDS and from traditional on-premises IT infrastructure toward cloud environments (both public and private) based on commodity Web-scale infrastructure. SDS solutions run on commodity, off-the-shelf hardware, delivering all the key storage functionality in software. Relative to legacy storage architectures, SDS products deliver improved agility (including faster, easier storage provisioning), autonomous storage management capabilities that lower administrative costs, and the ability to use lower-cost hardware.
"For IT organizations undergoing digital transformation, SDS provides a good match for the capabilities needed – flexible IT agility; easier, more intuitive administration driven by the characteristics of autonomous storage management; and lower capital costs due to the use of commodity and off-the-shelf hardware," said Eric Burgener, research director for Storage at IDC. "As these features appear more on buyers' lists of purchase criteria, enterprise storage revenue will continue to shift toward SDS."
Within the SDS market, the expansion of three key sub-segments – file, object, and hyperconverged infrastructure (HCI) – is being strongly driven forward by next-generation datacenter requirements. Of these sub-segments, HCI is both the fastest growing with a five-year CAGR of 26.6% and the largest overall with revenues approaching $7.15 billion in 2021. Object-based storage will experience a CAGR of 10.3% over the forecast period while file-based storage and block-based storage will trail with CAGRs of 6.3% and 4.7%, respectively.
Because hyperconverged systems typically replace legacy SAN- and NAS-based storage systems, all the major enterprise storage systems providers have committed to the HCI market in a major way over the past 18 months. This has made the HCI sub-segment one of the most active merger and acquisition markets as these providers prepare to capture anticipated SAN and NAS revenue losses to HCI as enterprises shift toward SDS solutions.
For 82 percent of Europe, Middle East and Africa (EMEA) CIOs, digital business has led to a greater capacity for change and a more open mindset in their IT organization, according to Gartner, Inc.'s annual survey of CIOs. On average, EMEA CIOs have increased the amount of time they spend on business leadership — up from 30 percent three years ago to 41 percent today. This indicates that as digital transformation accelerates, the role of the CIO is changing.
"While IT delivery is still a responsibility of the CIO, achieving revenue growth and developing digital transformation were identified most often as top business priorities for organizations in 2018. If CIOs want to remain relevant, they need to align their activities with the business priorities of their organizations," said Andy Rowsell-Jones, vice president and distinguished analyst at Gartner.
Twenty-six percent of the CIO respondents in EMEA said they expect their jobs to become more business-oriented, and 22 percent expect a greater focus on analytics. They identified business intelligence and analytics (26 percent) and digitalization (17 percent) as technology areas that will help their businesses differentiate themselves and succeed.
The 2018 Gartner CIO Agenda Survey gathered data from a record 3,160 CIO respondents in 98 countries and all major industries, representing approximately $13 trillion in revenue/public sector budget and $277 billion in IT spending. Of these, 1,069 respondents were from the EMEA region.
Digital Business Is Changing the CIO's Job
"'Digital' is here and is mainstream," said Mr. Rowsell-Jones. "CIOs are moving from experimentation to scaling their digital business initiatives." In this context, the challenge for CIOs is to grow these initiatives to deliver economies of scale and scope. The survey revealed that 29 percent of the CIO respondents in EMEA are designing digital initiatives, 29 percent are delivering them, 15 percent are scaling them and 4 percent are at the "harvesting" stage.
Some CIOs in EMEA are struggling to scale their digital business initiatives. The survey revealed that the biggest barrier is culture. Forty-eight percent of EMEA CIOs identified "culture" as the biggest hurdle to scaling up from the initial phases of digital business transformation.
"Culture is not specifically labeled," said Mr. Rowsell-Jones. "You can't change what you don't make explicit. Start by clearly articulating why change is required from a business point of view, then delve into what specifically will change."
The CIO Role Is Widening
Adoption of digital technology is increasingly forcing the role of the CIO to widen. Forty-six percent of the EMEA CIO respondents are in charge of digital transformation within their organization, and 41 percent are responsible for innovation. Furthermore, many EMEA CIOs said their organization has already deployed, or is experimenting with, digital security (76 percent), the Internet of Things (39 percent) and artificial intelligence (28 percent).
Putting in Place the Right Digital Team Structures
CIOs have a number of ways in which to develop digital business in their organization. The survey found that in EMEA, 47 percent of the CIO respondents have a dedicated digital business team. It also revealed that few of these teams (16 percent) are made up of IT associates only. For 47 percent of CIOs, the digital business team runs as a separate team reporting to business unit leaders or directly to the CEO, and for 23 percent it reports directly to the CIO.
"Your role as a CIO is transforming in light of the accelerating adoption of digital business and the fast pace of technological innovation," concluded Mr. Rowsell-Jones. "It no longer suffices just to be responsible for IT delivery, and it is of paramount importance to address broader business objectives as well. The time has come to master your new role as a business executive."
Gartner, Inc. has highlighted the top strategic technology trends that will impact most organizations in 2018.
Gartner defines a strategic technology trend as one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or a rapidly growing trend with a high degree of volatility that will reach a tipping point over the next five years.
"Gartner's top 10 strategic technology trends for 2018 tie into the Intelligent Digital Mesh. The intelligent digital mesh is a foundation for future digital business and ecosystems," said David Cearley, vice president and Gartner Fellow. "IT leaders must factor these technology trends into their innovation strategies or risk losing ground to those that do."
The first three strategic technology trends explore how artificial intelligence (AI) and machine learning are seeping into virtually everything and represent a major battleground for technology providers over the next five years. The next four trends focus on blending the digital and physical worlds to create an immersive, digitally enhanced environment. The last three refer to exploiting connections between an expanding set of people and businesses, as well as devices, content and services to deliver digital business outcomes.
The top 10 strategic technology trends for 2018 are:
AI Foundation
Creating systems that learn, adapt and potentially act autonomously will be a major battleground for technology vendors through at least 2020. The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025.
"AI techniques are evolving rapidly and organizations will need to invest significantly in skills, processes and tools to successfully exploit these techniques and build AI-enhanced systems," said Mr. Cearley. "Investment areas can include data preparation, integration, algorithm and training methodology selection, and model creation. Multiple constituencies including data scientists, developers and business process owners will need to work together."
Intelligent Apps and Analytics
Over the next few years, virtually every app, application and service will incorporate some level of AI. Some of these apps will be obvious intelligent apps that could not exist without AI and machine learning. Others will be unobtrusive users of AI that provide intelligence behind the scenes. Intelligent apps create a new intelligent intermediary layer between people and systems and have the potential to transform the nature of work and the structure of the workplace.
"Explore intelligent apps as a way of augmenting human activity and not simply as a way of replacing people," said Mr. Cearley. "Augmented analytics is a particularly strategic growing area which uses machine learning to automate data preparation, insight discovery and insight sharing for a broad range of business users, operational workers and citizen data scientists."
AI has become the next major battleground in a wide range of software and service markets, including aspects of enterprise resource planning (ERP). Packaged software and service providers should outline how they'll be using AI to add business value in new versions in the form of advanced analytics, intelligent processes and advanced user experiences.
Intelligent Things
Intelligent things are physical things that go beyond the execution of rigid programming models to exploit AI to deliver advanced behaviors and interact more naturally with their surroundings and with people. AI is driving advances for new intelligent things (such as autonomous vehicles, robots and drones) and delivering enhanced capability to many existing things (such as Internet of Things [IoT] connected consumer and industrial systems).
"Currently, the use of autonomous vehicles in controlled settings (for example, in farming and mining) is a rapidly growing area of intelligent things. We are likely to see examples of autonomous vehicles on limited, well-defined and controlled roadways by 2022, but general use of autonomous cars will likely require a person in the driver's seat in case the technology should unexpectedly fail," said Mr. Cearley. "For at least the next five years, we expect that semiautonomous scenarios requiring a driver will dominate. During this time, manufacturers will test the technology more rigorously, and the nontechnology issues such as regulations, legal issues and cultural acceptance will be addressed."
Digital Twins
A digital twin refers to the digital representation of a real-world entity or system. Digital twins in the context of IoT projects are particularly promising over the next three to five years and are leading the interest in digital twins today. Well-designed digital twins of assets have the potential to significantly improve enterprise decision making. These digital twins are linked to their real-world counterparts and are used to understand the state of the thing or system, respond to changes, improve operations and add value. Organizations will implement digital twins simply at first, then evolve them over time, improving their ability to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.
"Over time, digital representations of virtually every aspect of our world will be connected dynamically with their real-world counterpart and with one another and infused with AI-based capabilities to enable advanced simulation, operation and analysis," said Mr. Cearley. "City planners, digital marketers, healthcare professionals and industrial planners will all benefit from this long-term shift to the integrated digital twin world."
Cloud to the Edge
Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to the sources of this information. Connectivity and latency challenges, bandwidth constraints and greater functionality embedded at the edge all favor distributed models. Enterprises should begin using edge design patterns in their infrastructure architectures — particularly for those with significant IoT elements.
While many view cloud and edge as competing approaches, cloud is a style of computing where elastically scalable technology capabilities are delivered as a service and does not inherently mandate a centralized model.
"When used as complementary concepts, cloud can be the style of computing used to create a service-oriented model and a centralized control and coordination structure with edge being used as a delivery style allowing for disconnected or distributed process execution of aspects of the cloud service," said Mr. Cearley.
Conversational Platforms
Conversational platforms will drive the next big paradigm shift in how humans interact with the digital world. The burden of translating intent shifts from user to computer. The platform takes a question or command from the user and then responds by executing some function, presenting some content or asking for additional input. Over the next few years, conversational interfaces will become a primary design goal for user interaction and will be delivered in dedicated hardware, core OS features, platforms and applications.
"Conversational platforms have reached a tipping point in terms of understanding language and basic user intent, but they still fall short," said Mr. Cearley. "The challenge that conversational platforms face is that users must communicate in a very structured way, and this is often a frustrating experience. A primary differentiator among conversational platforms will be the robustness of their conversational models and the application programming interface (API) and event models used to access, invoke and orchestrate third-party services to deliver complex outcomes."
Immersive Experience
While conversational interfaces are changing how people control the digital world, virtual, augmented and mixed reality are changing the way that people perceive and interact with the digital world. The virtual reality (VR) and augmented reality (AR) market is currently adolescent and fragmented. Interest is high, resulting in many novelty VR applications that deliver little real business value outside of advanced entertainment, such as video games and 360-degree spherical videos. To drive real, tangible business benefit, enterprises must examine specific real-life scenarios where VR and AR can be applied to make employees more productive and to enhance the design, training and visualization processes.
Mixed reality, a type of immersion that merges and extends the technical functionality of both AR and VR, is emerging as the immersive experience of choice providing a compelling technology that optimizes its interface to better match how people view and interact with their world. Mixed reality exists along a spectrum and includes head-mounted displays (HMDs) for augmented or virtual reality as well as smartphone and tablet-based AR and use of environmental sensors. Mixed reality represents the span of how people perceive and interact with the digital world.
Blockchain
Blockchain is evolving from a digital currency infrastructure into a platform for digital transformation. Blockchain technologies offer a radical departure from the current centralized transaction and record-keeping mechanisms and can serve as a foundation of disruptive digital business for both established enterprises and startups. Although the hype surrounding blockchains originally focused on the financial services industry, blockchains have many potential applications, including government, healthcare, manufacturing, media distribution, identity verification, title registry and supply chain. Although it holds long-term promise and will undoubtedly create disruption, blockchain promise outstrips blockchain reality, and many of the associated technologies will remain immature for the next two to three years.
Event-Driven
Central to digital business is the idea that the business is always sensing and ready to exploit new digital business moments. Business events could be anything that is noted digitally, reflecting the discovery of notable states or state changes, for example the completion of a purchase order or an aircraft landing. With the use of event brokers, IoT, cloud computing, blockchain, in-memory data management and AI, business events can be detected faster and analyzed in greater detail. But technology alone, without cultural and leadership change, does not deliver the full value of the event-driven model. Digital business drives the need for IT leaders, planners and architects to embrace event thinking.
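A minimal sketch of event thinking, using an in-memory publish/subscribe broker invented for illustration (a production system would use a dedicated event broker), with the purchase-order example from the paragraph above:

```python
from collections import defaultdict
from typing import Callable, Dict, List

# Tiny in-memory broker: producers publish business events, subscribers react to them.
_subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)


def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    _subscribers[event_type].append(handler)


def publish(event_type: str, payload: dict) -> None:
    for handler in _subscribers[event_type]:
        handler(payload)


# Business event from the text: completion of a purchase order.
subscribe("purchase_order.completed", lambda e: print("Notify warehouse:", e["order_id"]))
subscribe("purchase_order.completed", lambda e: print("Update analytics:", e["order_id"]))

publish("purchase_order.completed", {"order_id": "PO-2018-001", "value_eur": 12500})
```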
Continuous Adaptive Risk and Trust
To securely enable digital business initiatives in a world of advanced, targeted attacks, security and risk management leaders must adopt a continuous adaptive risk and trust assessment (CARTA) approach to allow real-time, risk- and trust-based decision making with adaptive responses. Security infrastructure must be adaptive everywhere, to embrace the opportunity — and manage the risks — that come with delivering security that moves at the speed of digital business.
As part of a CARTA approach, organizations must overcome the barriers between security teams and application teams, much as DevOps tools and processes overcome the divide between development and operations. Information security architects must integrate security testing at multiple points into DevOps workflows in a collaborative way that is largely transparent to developers and preserves the teamwork, agility and speed of DevOps and agile development environments, delivering "DevSecOps." CARTA can also be applied at runtime with approaches such as deception technologies. Advances in technologies such as virtualization and software-defined networking have made it easier to deploy, manage and monitor "adaptive honeypots" — the basic component of network-based deception.
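As a rough sketch of security testing woven into a DevOps workflow, the hypothetical pipeline below interleaves security gates with the usual build, test and deploy stages; the stage names are invented and do not correspond to any particular CI/CD product.

```python
# Hypothetical delivery pipeline with security checks running automatically between
# the usual stages, so they stay largely transparent to developers.
def build(artifact: str) -> bool:
    print(f"build {artifact}")
    return True


def unit_tests(artifact: str) -> bool:
    print(f"run unit tests for {artifact}")
    return True


def dependency_scan(artifact: str) -> bool:
    # Security gate 1: check third-party components for known vulnerabilities.
    print(f"scan dependencies of {artifact}")
    return True


def dynamic_security_tests(artifact: str) -> bool:
    # Security gate 2: probe the staged service before it is promoted to production.
    print(f"run dynamic security tests against staged {artifact}")
    return True


def deploy(artifact: str) -> bool:
    print(f"deploy {artifact}")
    return True


PIPELINE = [build, unit_tests, dependency_scan, dynamic_security_tests, deploy]


def run_pipeline(artifact: str) -> bool:
    # A failing security gate stops the release exactly like a failing unit test.
    return all(stage(artifact) for stage in PIPELINE)


run_pipeline("webshop-service:1.4.2")
```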
Gartner, Inc. has revealed its top predictions for 2018 and beyond. Gartner's top predictions will enable organizations to move beyond thinking about mere notions of technology adoption to focus on the issues that surround what it really means to be human in the digital world.
"Technology-based innovation is arriving faster than most organizations can keep up with. Before one innovation is implemented, two others arrive," said Daryl Plummer, vice president and Gartner Fellow, Distinguished. "CIOs in end-user organizations will need to develop a pace that can be sustained no matter what the future holds. Our predictions provide insight into that future, but enterprises will still be required to develop a discipline around how pace can be achieved. Those who seek value from technology-based options must move faster as their digital business efforts move into high gear. Speed of change will require variability of skills and capabilities to address rising challenges."
By 2021, early adopter brands that redesign their websites to support visual- and voice-search will increase digital commerce revenue by 30 percent.
Voice- and visual-search based queries improve marketers' understanding of consumers' interests and intent. Coupled with the additional contextual cues available from smartphones, early adopter brands and commerce sites will capitalize on consumers' shift to these search modalities. They will gain competitive advantage as measured in conversion rates, revenue growth, new customer acquisition, market share and customer satisfaction. Consumer demand for voice devices — embodied by products such as Amazon Echo and Google Home — is expected to generate $3.5 billion by 2021. Brands that are able to develop ways to leverage systems that can take a handoff, so to speak, from the devices will see rapid growth in digital commerce revenue.
By 2020, five of the top seven digital giants will willfully "self-disrupt" to create their next leadership opportunity.
In doing new things, digital giants — such as Alibaba, Amazon, Apple, Baidu, Facebook, Google, Microsoft and Tencent — are likely to run into situations where their influence has grown so large that it is difficult to create new value scenarios. This ultimately leads to self-disruption. In a self-disrupting strategy, disruption arises from a deliberate intent to get there first, even if it means disrupting yourself. While this can be risky, the risk of inaction can be even higher.
"Research in Motion (RIM), for example, could have disrupted itself by delivering BlackBerry Messenger and the BlackBerry network to iPhones and Android phones. While they would have given up exclusive use of these capabilities — thus disrupting themselves — they would have created market space within competitive ecosystems to grow their influence rather than watching it decline," said Mr. Plummer. "The digital giants have a vested interest in innovation continuing to accelerate. Therefore, leaders in the digital space must continuously seek to create new opportunities, including self-disruption."
By the end of 2020, the banking industry will derive $1 billion in business value from the use of blockchain-based cryptocurrencies.
The current combined value of cryptocurrencies in circulation worldwide is $155 billion, and this value has been increasing as tokens continue to proliferate and market interest grows. Cryptocurrencies are more mature than the technical and business infrastructure that supports them. This is, in part, due to the lack of credibility tokenized developments have received from mainstream businesses. However, once banks start to see cryptocurrencies and digital assets in the same context as more traditional financial instruments, more distributed business value will begin to accrue. This requires every industry to rethink aspects of current fiat-based business models such as pricing of goods and services, accounting and tax methods, payment systems, and risk management capabilities to accommodate these new forms of value in their business strategies.
By 2022, most people in mature economies will consume more false information than true information.
"Fake news" has become a major worldwide political and media theme for 2017. While fake news is currently in the public consciousness, it is important to realize the extent of digitally created content that is not a factual or authentic representation of information goes well beyond the news aspect. For enterprises, this acceleration of content in a social media-dominated discourse presents a real problem. Enterprises need to not only monitor closely what is being said about their brands directly, but also in what contexts, to ensure they are not associated with content that is detrimental to their brand value.
By 2020, AI-driven creation of "counterfeit reality," or fake content, will outpace AI's ability to detect it, fomenting digital distrust.
"Counterfeit reality" is the digital creation of images, video, documents, or sounds that are convincingly realistic representations of things that never occured or existed exactly as represented. In the past 30 years, the ability to create and disseminate content that has been subtly or overtly altered has greatly increased as huge numbers of people gained access to the internet with few controls on content distribution. The next wave of that distribution will be machine-generated content.
"The detection of counterfeit reality will best be accomplished by artificial intelligence (AI) which is able to identify and track markers in counterfeit content faster than human reviewers," said Mr. Plummer. "Unfortunately, as the creation of counterfeit reality using AI techniques has accelerated in recent years, using AI to detect counterfeit reality currently lags behind the use of AI to create it.
By 2021, more than 50 percent of enterprises will be spending more per annum on bot and chatbot creation than on traditional mobile app development.
User attention is shifting away from individual apps on mobile devices and splintering across emerging post-app technologies such as bots and chatbots. Today, chatbots are the face of AI and will impact all areas where there is communication between humans. Bots have the ability to transform the way apps themselves are built and the potential to change the way that users interact with technology. The appropriate use of bots is also likely to increase employee or customer engagement, as they can quickly automate tasks to free up the workforce for more nonstandard work, including question-and-answer interactions, when deployed as chatbots or virtual assistants.
By 2021, 40 percent of IT staff will be versatilists, holding multiple roles, most of which will be business-related rather than technology-related.
IT specialists represent about 42 percent of the entire IT workforce in 2017, but by 2019, Gartner predicts that IT technical specialist hires will fall by more than 5 percent as digital business initiatives require increasing numbers of IT versatilists. This shift will begin in infrastructure and operations (I&O), as the need for I&O that can support on-demand infrastructure will emerge. With a solid I&O foundation in place, an increase in nontechnical IT managers and leaders with the versatilist profile will follow. After the leadership wave, marketing-oriented digital business efforts such as business intelligence (BI) will be next, followed by software development, digital product management, project/program/portfolio management, and customer experience management and architecture.
In 2020, AI will become a positive net job motivator, creating 2.3 million jobs while eliminating only 1.8 million jobs.
AI will eliminate more jobs than it creates through 2019; however, Gartner believes that the number of jobs created due to AI in 2020 will be sufficient to overcome the deficit. Net job creation or elimination will vary greatly by industry: some industries will experience overall job loss, some will experience net job loss for only a few years, and some, such as healthcare and education, will never experience net job loss. AI will improve the productivity of many jobs and, used creatively, it has the potential to enrich people's careers, reimagine old tasks and create new industries.
By 2020, IoT technology will be in 95 percent of electronics for new product designs.
The combination of smartphone management, cloud control and inexpensive enabling modules delivers sophisticated monitoring, management and control with minimal additional cost in the target device. Once this technology emerges, buyers will rapidly gravitate to Internet of Things (IoT)-capable products, and interest in and demand for IoT-enabled products will rapidly snowball. Every supplier must, at the very least, make plans to implement IoT technology into its products, for both consumer and business buyers.
Through 2022, half of all security budgets for IoT will go to fault remediation, recalls and safety failures rather than protection.
Risks related to the introduction of IoT in projects or initiatives are substantially affected by the unintended consequences of introducing a "pervasive digital presence" across all industries and market sectors as IoT growth expands. The requirement to update devices periodically, as is done with mobile phones and other remote systems, is multiplied by numerous factors, and the inability to perform those updates can result in massive product recalls. In industrial environments, scale and diversity may not be as significant, but the need to preserve the safety of individuals and the environment, together with the rich regulatory regime that controls safety systems, means that the rapid expansion of IoT use in those systems will bring regulatory implications for securing them.
Denmark’s leading supermarket chain, Coop, constantly strives to offer top customer service, in both its pricing and its transactional experiences. Given the low-margin, competitive market it operates in, the retailer needed to consider costs while looking at the performance of its infrastructure. Taking the budget set aside for the maintenance of its old server platform, Coop was able to implement VMware vSAN without needing to ask for further investment from the board. The virtual storage solution underpins everything the retailer does – and has eliminated system downtime for a more seamless customer experience, all while streamlining costs.
Coop Denmark is the largest company in the Danish grocery sector with a market share of 42 percent. The retailer operates the chains Kvickly, Super, Dagli'Brugsen and coop.dk as well as subsidiaries Irma and Fakta. It has more than 1,200 stores across Denmark.
The Challenge
As one of the leading supermarkets in Denmark, Coop faces a big challenge to be the number one choice for consumers in an over-saturated market. Its razor-thin margins mean it has to find ways to ensure it is as operationally efficient as possible.
“We put a lot of effort into making our IT systems cost-effective. But we also focus on giving room for the individual supermarket to optimise its relationships with local customers," says Søren Vendler, IT enterprise architect at Coop.
“So we have a double trend of both centralising and standardising IT and yet, also decentralising and empowering the individual,” he adds.
The efforts to standardise led to Coop moving all of its 1,500 servers onto one single server platform – but this created some issues.
“At one point, every Thursday at 12pm for two minutes all of the systems would shut down – nothing would happen. Then, everything would run again, and we couldn’t figure out why,” says Vendler.
“We searched everywhere. Several weeks later we found the anti-virus system of all 1,500 servers would download a new pattern definition file at the same time, effectively shutting down the whole infrastructure,” he says.
Coop realised it couldn’t operate all of its servers on one storage system, so it had to move some of its servers onto separate environments. For example, its customer loyalty programme became so heavily loaded that customers were sometimes delayed excessively at the till, waiting for their personal membership transactions to complete.
“We had to move the loyalty programme out of our central storage system and put it on a dedicated storage system, causing an extra cost of around €1 million,” says Vendler.
In order to streamline costs and rectify the issues with its server platform, Coop decided to look at the market for new solutions, but it struggled to find an appliance that matched its specific needs.
Having worked with other VMware solutions for a number of years, the retailer decided to explore its storage product, vSAN, to see if it would fit the bill.
“We trusted VMware to provide the detailed design of a solution so we could be absolutely certain there were no inherent defects in it,” Vendler explains.
“VMware has certified a very broad range of individual hardware components, meaning we were able to put together a solution that fit exactly to Coop’s needs,” he adds.
A big reason VMware’s vSAN server platform was chosen was because the Coop team was able to finance its design and implementation simply by using the maintenance costs it had put aside for running its old physical infrastructure. “We didn’t have to go to the board for the investment – instead we were able to make use of the existing budget we’d put aside for the maintenance and upkeep of the physical system but use it to invest in a system that would ultimately help us reduce costs instead.”
Vendler notes that IT infrastructure projects of this size are normally very complex and time consuming but that this wasn’t the case with vSAN. “The vSAN project only took about three months and it went completely according to plan – that normally never happens,” he says.
The VMware vSAN server platform’s performance means the Coop infrastructure no longer suffers the disruptions it used to.
“Our new vSAN server platform is far more effective; it performs so much better now. It has helped to guarantee business continuity and we no longer need to have dedicated storage solutions for specific purposes,” he says.
vSAN will also help Coop continue to standardise and keep all of its servers running on a shared platform. In turn, this will mean it can cut costs.
But the use of vSAN extends beyond providing solutions for the problems Coop had in the past; it is now at the heart of everything the retailer does.
“VMware vSAN underpins every business process in Coop – all the way from the tills in the stores, through to the provisioning of goods from the warehouses into the stores and then back to all the headquarter processes,” says Vendler. “If vSAN doesn’t run, Coop doesn't run.”
Looking Ahead
Coop Denmark isn’t looking to stand still in the years to come – it will continue to innovate and it will rely on vSAN to ensure it can do this without any issues.
“The retail sector is due for a lot of change in the near future. But one thing that we’re certain of is vSAN supporting us for many years to come,” Vendler says.
Moving forward, the new vSAN platform will enable some of its other business-critical systems to perform better, which is vital as it aims to better personalise its customers’ shopping experiences in the years to come.
“It’s a critical platform for us. As we move further into making customer loyalty programmes and having individual solutions tailored to individual customers, the strain on our central systems will become more and more intense,” he says.
“vSAN is a solution to our challenges because it is able to provide both very high performance, in terms of IOs per second, and a very low cost of ownership,” he adds.
To find out more about how VMware vSAN can help power your hyper-converged infrastructure, click here.
Although Microsoft is responsible for the security of Azure, you are responsible for that of all the apps and data you use within it. So what’s the best way to manage this? By Gigamon.
If you attended September’s Microsoft Ignite event in Orlando, you’ll know that Azure was one of the stars of the show. Microsoft Azure is a cloud computing service you can use to build, test, deploy and manage applications and services through a global network of Microsoft-managed data centres. It provides software-, platform- and infrastructure-as-a-service (SaaS, PaaS and IaaS), and supports many different programming languages, tools and frameworks, including both Microsoft-specific and third-party software and systems.
Azure has seen spectacular growth in recent times. But if you’re considering or already utilising its capabilities, you need to be aware of the significant responsibilities that your use of it, or of similar services such as Amazon Web Services (AWS), places on your organisation. And, if you’re an IT, cloud or security architect, you need to know what these responsibilities mean for you and your role.
As enterprises move to the public cloud to take advantage of scale, elasticity and availability, cloud architects and enterprise decision makers need to recognise the security expectations on their organisations. This is because IaaS cloud providers operate under a ‘shared responsibility’ model – the cloud provider is responsible for security of the cloud (i.e. of the cloud infrastructure), whereas the IaaS customer is responsible for security in the cloud (i.e. of their data, applications etc.).
As cloud and IT operational leaders evaluate IaaS to run mission-critical workloads, data security, industry and organisational compliance, detection and response to anomalies become essential requirements. And security needs to extend beyond just identity and access management to encompass a full network security stack.
While the IaaS cloud provider is responsible for the lower levels of this stack, including physical security, host infrastructure, and network controls for multi-tenancy; the cloud customer is responsible for implementing data governance, application and data security, application level controls, and client and endpoint protection. Based on this model, the security of the enterprise’s data and applications, along with organisational/regulatory compliance, rests on IT, cloud and security architects, who must ensure that applications and workloads are being deployed securely by everyone within the organisation.
Enterprises that migrate to the cloud typically rely on techniques like workload security, perimeter security, prevention-only solutions, and identity and access management to mitigate security risks. But today’s threat landscape means that prevention-only security techniques are insufficient. They need to be complemented with additional detection and response techniques to detect early signs of security anomalies and deviations from expected behaviour.
In particular, IT, cloud and security architects responsible for charting a cloud strategy for their enterprise must address the following questions before they can successfully deploy mission-critical applications in an IaaS public cloud such as Microsoft Azure:
Failure to comprehensively address these considerations can prevent or slow down the migration of mission-critical applications to the cloud, and leave an organisation vulnerable to potential security breaches, with severe consequences to reputation and brand. A well-defined cloud security architecture that accelerates application migration to the cloud is therefore essential.
For this to happen, organisations need to have accurate visibility into virtual machine network traffic in order to implement a multi-tiered security model. Without such visibility, the move of mission-critical applications to the cloud would be stunted. And a recent Vanson Bourne survey found that 90% of enterprises surveyed plan to approach cloud security in the same way they approach their on-premises security operations – with visibility into network traffic as a prerequisite to implementing an effective network security strategy in the cloud.
One way to achieve this is through the approach adopted by Gigamon in providing network traffic visibility for public cloud IaaS. When the company introduced the industry’s first such solution for Amazon Web Services last year, many enterprises running workloads in the public cloud were quick to see the benefits of obtaining identical visibility across their on-premises and cloud environments.
Currently available in beta, and with general availability expected in Q1 2018, the Gigamon Visibility Platform for Microsoft Azure will enable enterprises to extend their security posture in an identical manner to Azure, assuring compliance, and accelerating the time to detect threats to mission-critical applications. The solution, which encompasses traffic acquisition, traffic aggregation, intelligence and distribution, and orchestration and management, will enable organisations to:
Whether they’re already using or considering a future migration to Azure, the solution will offer intelligent network traffic visibility for mission critical workloads, and enable enterprises to obtain complete network traffic visibility into virtual machines - an essential requirement for building multi-tiered security stacks. The platform will also integrate with Azure APIs and deploy a visibility tier in all VNets, to collect aggregated traffic and apply advanced intelligence prior to sending selected traffic to security tools. With it, organisations will therefore be able to obtain consistent visibility into their infrastructure across both Azure and their on-premises environment, and extend their security posture to Azure.
This platform-centric approach to visibility offers a range of possibilities to build a highly agile, scalable, robust and cost-effective security strategy, through symmetric capabilities available across multiple IaaS providers. So, whether an organisation wishes to use AWS for some workloads and Azure for others; run certain workloads in an OpenStack private cloud; or continue to run some workloads on a VMware ESX, VMware NSX or Cisco ACI infrastructure on-premises; they can still benefit from consistent visibility across all these diverse environments.
With a common visibility platform they can also centralise their tools in one location, and apply the necessary traffic intelligence in the visibility platform to dynamically steer traffic of interest regardless of where the tools reside – in AWS, in Azure or on-premises. Traffic could also cross regions or virtual networks and virtual private clouds, to enable security operations teams to implement centralised cloud and enterprise on-premises network traffic monitoring.
To learn more visit www.gigamon.com/azure.
Daren Oliver, managing director of Fitzrovia IT, believes that even the smallest of businesses can begin to harness the power of big data intelligence. Here, he discusses the latest trends in business analytics.
As an SME owner, you've no doubt heard the buzz around big data. But if the concept leaves you scratching your head, you're not alone. Unhelpfully, there's no universally accepted definition for the term, and it has evolved to encompass a number of approaches which vary from industry to industry, sector to sector.
As a very general guide, in the business world big data refers to the collection and analysis of customer data, whether that's digital information from social media or apps, or traditional information such as point-of-sale transactions. By joining up the dots, this 'big picture' data can help predict future consumer behaviour, allowing companies to better serve their customers and, ultimately, to boost their bottom line.
For example, in the car insurance industry there's a move away from the traditional measures of risk such as the age or model of a vehicle. By analysing big data, insurers have found that motorists who have taken out life insurance policies are the kind of people who are more likely to drive safely. Accurate prediction of risk means lower premiums for many, and higher costs for those who don't take out such policies. (Of course, it won't be lost on insurers that this method of analysis encourages drivers to take out life insurance – remember that comment about the bottom line?)
While large corporations can access the budget and the expertise to fund big data analytics, many smaller businesses fear that they are locked out of the revolution. Instead they are left to rely on their traditional business sense, scattered spreadsheets and old-fashioned intuition, and quite understandably they worry about being left behind.
A survey by MHR Analytics reported earlier this year that 76 per cent of UK companies were planning a big data or data analytics project within the next twelve months, but 70 per cent of C-suite executives said they were struggling to up-skill employees in order to make the projects effective. Worryingly, just 39 per cent of those executives were able to strongly agree with the statement 'our customer data is accurate and up-to-date.'
Yet there was one area where business execs were definitely able to agree. Nine in ten said that data analytics and business intelligence would be crucial to the future of their business.
The future is coming, and it's hurtling towards us at an astonishing pace. Innovation in technology is never-ending, and while for many this is a fascinating field, for non-techy business leaders the speed of change can be bewildering. It's no wonder that many SMEs doubt their company's ability to harness big data. Yet the longer they stall, the greater the risk of being beaten by savvier competitors.
If you’re one of those stalling businesses, where do you start? The good news is that with new software tools and the support of a good IT consultant, you could soon find yourself walking tall in those big data boots.
An ideal solution is to utilise well-known applications for managing your data. For example, Microsoft Excel can be connected to data stored in Hadoop, providing you with an easy-to-use platform for accessing, viewing and summarising information. Likewise, Excel’s Power Query can connect directly to HDInsight, Microsoft’s Hadoop service on the Azure cloud.
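As a rough, hypothetical sketch of what that end-to-end route can look like (shown here in Python rather than via the Power Query wizard, purely for illustration; the hostname, database and table names are placeholders), data held in a Hive table on a Hadoop cluster can be summarised and dropped into an Excel workbook in a handful of lines:

# Hypothetical sketch: summarise a Hive table from a Hadoop cluster and
# export it to Excel. Assumes a reachable HiveServer2 endpoint and the
# pyhive, pandas and openpyxl packages; all names below are placeholders.
import pandas as pd
from pyhive import hive

conn = hive.Connection(host="hadoop-edge.example.com", port=10000,
                       username="analyst", database="sales")

# Aggregate point-of-sale transactions by store before exporting.
df = pd.read_sql(
    "SELECT store_id, SUM(amount) AS total_sales "
    "FROM pos_transactions GROUP BY store_id",
    conn,
)

df.to_excel("store_sales_summary.xlsx", index=False)  # open the result in Excel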
Data from our planet continues to grow exponentially, whether that's from sensors, mobile tracking devices, posts to social media sites, emails, blogs or surveillance videos. Have a think about the data you've created today in your personal and business life, multiply that by many millions, and you get a sense of the wealth of information that's stored in cyber-space. Mining this information to boost your company's success might not be as daunting as you imagine. Welcome to the future!
Daren Oliver is managing director of Fitzrovia IT, a London-based consultancy that provides cutting-edge IT solutions from across the globe. For more information, visit www.fitzroviait.com.
DW talks to Kurt Kuckein, Director of Marketing at DDN, about data lifecycle management, with particular reference to the benefits of Object Storage.
1. Please can you provide some background on DDN – when/why formed, key personnel and key milestones to date?
DDN Storage was founded by Paul Bloch and Alex Bouzari; both highly successful IT leaders with 25+ years in founding and managing profitable, high growth technology companies. For almost 20 years now, DDN has committed to delivering the highest levels of customer satisfaction through extensive knowledge and deep experience with hardware, file systems, and applications to accelerate and scale business. Data-intense, global enterprises are leveraging the power of DDN technology and the deep technical expertise of our team to capture, store, process, analyse, collaborate and distribute data at the largest scale and in the most efficient, reliable and cost-effective manner.
In 2017, DDN delivered its first 100+ petabyte storage system. Large scale deployments such as these are becoming more commonplace, not just in the traditional High Performance Computing (HPC) market, but in web and cloud, AI and machine learning types of environments, and mobile applications.
DDN now has over four exabytes in production globally. Measurements used to be based on the capacity attribute of systems, but that is now shifting to performance. DDN now has customers with massive performance requirements in environments spread over thousands of locations and tens of millions of users. That’s a sliver of performance per user or per location, but multiplied together, these new distributed systems are much larger than large HPC environments.
Other notable achievements in 2017 were the opening of a brand new business unit focused entirely on our Infinite Memory Engine (IME) technology, a scale-out, flash-native, software-defined storage cache, headed up by Jessica Popp as General Manager, with the venerated Eric Barton as Chief Technology Officer. We’ve also seen our Non-Volatile Memory (NVM) revenue grow to a quarter of our business – up from 5% just a few years ago.
2. Please can you outline the DDN product/technology portfolio – you cover block, Flash, file and object storage?
That’s right; DDN delivers a comprehensive and seamless portfolio of storage technologies that provide an extremely flexible set of data lifecycle management tools that can be applied anywhere and at any scale. Our various pillars of technology can be connected together to solve end-to-end data lifecycle management challenges, enabling organisations to achieve peak efficiency and extract maximum value throughout the entire lifecycle of data.
DDN’s Infinite Memory Engine (IME) is designed from the ground up to be a scale-out, flash-native, software-defined storage cache that streamlines the data path for application IO. Several key factors, both technological and commercial, are creating demand for a new approach to high-performance I/O. New non-volatile memory (NVM) device technologies are proliferating and media capacities are increasing rapidly.
A new generation of high-business-value markets is taking advantage of analytics and machine learning, further stressing performance boundaries. Parallel file systems only crudely manage flash, and HDD performance degrades as concurrency increases, making them the bottleneck as performance requirements grow. IME manages data differently, transforming tough workloads into NVMe-optimised IO and accelerating a variety of workloads to offer wire-speed performance, RDMA support and linear scaling.
Alongside this we have our Scaler file storage systems, which offer best-in-class analytics, parallel file system and NAS for the most data intensive and performance demanding requirements. And we have the world’s most scalable object storage-based technology, WOS, that enables secure, global multi-site collaboration, worldwide distribution of content, active archive and deep archive, real time replication, intelligent tiering, and bridges into public clouds.
3. Of these technologies, Object Storage is a major focus for DDN?
Yes, Object Storage is and will continue to be an area of focus for DDN. Through WOS, we have been able to address customers that require massive volumes of storage for web/cloud-type applications and archive, with all the characteristics that make Object Storage advantageous for those requirements. We are beginning to see requirements for even greater performance in the Object Storage market as applications taking advantage of REST-based interfaces continue to emerge, and this will be part of DDN’s direction going forward.
4. Can you outline the key attributes/benefits of DDN’s WOS Object Storage, starting with scalability?
WOS was purpose built to scale much further than traditional data stores like scale-out NAS. DDN’s WOS presents a single scalable storage pool that seamlessly scales to trillions of stored objects and Exabytes of capacity.
However, achieving high scalability is much more than simply measuring object counts and data volume. Considerations such as object size, capacity limits, tiering and caching, metadata management and, as the object store grows, object access times all need to be addressed.
The last point is particularly important for building out object stores that will deliver access to many object store/retrieve requests in parallel, such as systems serving as the backend of a Content Delivery Network (CDN).
Of course, we shouldn’t forget that object stores may need to start small, without an initial footprint in the hundreds of terabytes or petabyte range. DDN provides a small entry-level configuration, which helps reduce the barriers to entry for object storage adoption, together with the ability to scale linearly from small to large with minimal operational impact.
5. And flexibility?
Object storage has been at the forefront of the move towards software-defined storage or SDS. The nature of large scale-out deployments has meant object stores work well with the cost model of commodity hardware and vendor-supplied software. As a result, we see many object storage implementations based on software only. As such, WOS is available for software-only deployments on pre-approved third-party systems.
The use of commodity hardware, of course, doesn’t suit all requirements. Many potential customers may be unwilling or unable to manage the process of sourcing and building a bespoke object storage solution, preferring instead to take a combined hardware and software solution from us.
Delivered either as software or as a density-optimised appliance, WOS provides full flexibility to build the right storage infrastructure for any mix of applications. Customers can tune their infrastructure to meet the requirements of their data and application needs.
DDN also offers choice in terms of data protection. DDN is unique in the market in that it offers many different policies so that data protection can be tuned to application requirements. With whole object replication over multiple sites, local erasure coding and multi-site erasure coding, WOS is able to tune policies to the exact profile a customer demands.
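To illustrate why that choice of protection policy matters for capacity planning, here is a generic back-of-the-envelope comparison of three-way replication against an assumed 8+2 erasure-coding scheme (generic arithmetic, not DDN-published figures):

# Generic illustration (not DDN-specific figures) of the raw capacity needed
# to protect the same data with replication versus erasure coding.
object_tb = 100                          # logical data to protect, in TB

three_copies = object_tb * 3             # 3-way replication: 300 TB raw (200% overhead)
ec_data, ec_parity = 8, 2                # an assumed 8+2 erasure-coding scheme
erasure_coded = object_tb * (ec_data + ec_parity) / ec_data   # 125 TB raw (25% overhead)

print(f"Replication raw capacity:   {three_copies:.0f} TB")
print(f"Erasure-coded raw capacity: {erasure_coded:.0f} TB")

Both schemes in this sketch survive two simultaneous failures, which is why erasure coding is usually the lower-cost option once objects are large enough to shard efficiently, while replication still wins on simplicity and read performance for small, hot data.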
6. And simplicity?
Object Storage was designed as a more scalable alternative to file storage solutions for simplified storage needs. File storage was designed for files that need to be modified or changed frequently. As such, file storage is complex to scale because of file system hierarchies and locking mechanisms, which were created to enable file modifications. This overhead drives up the management cost exponentially.
The simplicity of the WOS architecture allows organisations to start as small as a single WOS appliance in four standard rack units and scale in single-node increments. WOS can deliver up to a quarter of a million drives in a single, shared namespace and provide a single view of files and objects, allowing it to serve seamlessly as high-performance storage for active archive and collaboration environments.
7. Moving on to the open architecture?
Initial object stores were based on the HTTP(S) protocol, using REST-based API calls to store and retrieve data. The use of HTTP is flexible in that data can be accessed from anywhere on the network (either local or wide-area); however, applications have to be written to use object stores, rather than simply accessing data stored in scale-out file systems. Extending protocol support means existing applications can be easily ported or amended to use object stores for their data.
WOS simplifies deployment with support for a broad set of plug-and-play data access protocols, including S3, Swift, NFS, SMB, Spectrum Scale and Lustre, using embedded or highly scalable gateways. A REST API is also included for custom app integration.
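As an illustration of how applications typically talk to an S3-compatible object store (a minimal sketch of the generic put/get pattern; the endpoint URL, bucket name and credentials are placeholders, not DDN-documented values):

# Minimal sketch of the S3-style object put/get pattern against an
# S3-compatible endpoint. Requires the boto3 package; endpoint, bucket
# and credentials below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",   # S3-compatible gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store an object under a key, then read it back.
key = "projects/film-001/frame-0001.dpx"
s3.put_object(Bucket="media-archive", Key=key, Body=b"raw frame bytes")
obj = s3.get_object(Bucket="media-archive", Key=key)
print(len(obj["Body"].read()), "bytes retrieved")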
8. And the ability to access 3rd party applications?
WOS integrates with 3rd party applications either via the S3 interface or via the native REST API. With S3 rapidly becoming the de facto standard interface for Object Storage, this has made it much easier to rapidly qualify and deploy 3rd party applications. DDN has been working for many years with 3rd party developers, and in the past some chose to integrate directly with the native REST API as well.
9. Not forgetting security?
As with any data store, security is a key feature. In object stores, security features cover a number of aspects.
With the volume of data likely to be retained in an object store, multi-tenancy becomes very important. Business users (either separate departments in an organisation or separate organisations) want to know that their data is isolated from access by others. This means having separate security credentials and offering encryption keys per customer or object within a customer.
WOS is the only object storage platform that offers full flexibility in data protection schemes and enables performance optimisation to comply with data, application or SLA requirements.
WOS also encrypts all communication from the client into WOS and between WOS nodes.
10. And NOFS – what is this, and why is it important?!
WOS delivers up to 20 percent better disk efficiency and density than its closest competitor, and is 1.25x faster, thanks to a No File System (NoFS) architecture. While most competitors have built their solutions on top of a file system, the unique NoFS architecture of DDN’s WOS levels up the management and scalability TCO gains of object storage and offers hard cost savings versus competitive solutions, up to 99 percent efficiency, and significant operational cost savings in space, heating, cooling and administration.
WOS is a true object storage solution, enabled through DDN’s underlying NoFS architecture, which minimises disk operations with as little as a single disk operation for reads and two for writes (in sharp contrast to the 8-10 I/O operations that POSIX file systems require, which result in additional performance and network overhead).
11. And, finally, lower TCO?
WOS was designed as a single storage solution for all unstructured data needs, to easily and reliably store Petabytes of information at the lowest cost. DDN maximises storage efficiency through the NoFS architecture, which keeps the solution easy to manage at scale (one infrastructure). It is not unusual to find customers who manage 10s of PB with just one full-time employee. The TCO can be further optimised by leveraging the WOS Capacity nodes and ObjectAssure, which provides the highest durability, with the lowest overhead of any object storage solution on the market.
12. And how would you characterise DDN’s Object Storage solution when compared to other offerings in the market?
While other Object Storage companies have chased Enterprise storage requirements, which often don’t require quite the scale or aggregate performance, DDN has focused WOS on the needs of the most scalable customers. With individual customers managing more than 500 billion objects on a WOS cluster, DDN has developed the experience and technology to manage the most scalable requirements with ease. That includes making the Object Store very easy to consume, as well as seamless and near effortless to expand as required over time.
13. Let’s move on to the applications for which Object Storage is well suited, starting with enterprise collaboration?
WOS is the only platform that enables integration with (and federation of) parallel file systems, which allows organisations to store assets in a globally distributed storage cloud to enable collaboration between distributed teams and integrate with workflow suites or file sync and share clients. Enterprises, leading research institutions and universities around the world are leveraging WOS to build global collaboration libraries, enabling more efficient workflows and quicker times to results/discovery.
Additionally, there are more and more web-based applications that require the scale and ease of management of Object Storage. Because these applications might start as a small test suite or POC and then need to grow into the multi-PB range, they need a storage system that can scale along with them.
Finally, there are customers that really need a private cloud, for which Object Storage is very well suited. This cloud could house a variety of applications and have a diverse set of data protection or replication requirements for which something like WOS would be a strong match. Customers want to be able to put primary applications, archive and backup all on one type of flexible scalable space with minimal management overhead for allocation and provisioning.
14. And then there’s Private and Hybrid Cloud?
WOS has been deployed in private, hybrid and public cloud use cases. DDN will continue to develop WOS to make public cloud more accessible to customers, as we see that use case expanding over the next 3-5 years.
15. And active archive?
Many providers have been promoting disk storage solutions as an alternative to tape to build “Active” Archives, but few are able to provide the cost-efficiency that is required to build Petabyte-scale repositories. WOS enables organisations to monetise their data and build highly reliable, scale-out archive infrastructures, at the lowest TCO.
WOS provides instant access to all archived assets and integrates with popular archival platforms such as ASG® and iRODS®. DDN is a member of the Active Archive Alliance, continuing its thought leadership and building integration points between object storage and modern archive applications.
16. As well as global content distribution?
The unique latency-aware technology in WOS, combined with the flexibility to optimise for small and large file performance, make WOS the perfect CDN storage origin. DDN has engaged with several partners to build CDN architectures that scale to as many as 60 origin storage sites.
You can enable Video on Demand, Cloud DVR and other video streaming services for residential or corporate end users. WOS provides high-throughput, low-latency video delivery for geographically distributed viewers. WOS Video Streaming can be deployed as a custom solution (integrated with APIs or file system gateways) or as a pre-integrated solution using technology from partners like Arris®, a global innovator in cable, video and broadband technology.
17. And ‘good old’ BC/DR?
WOS was originally designed with business continuity in mind, and can easily fit into any IT organisation’s BC/DR plans. Within its flexible data protection policies, customers can apply the right level of protection to meet their requirements, whether that is simple continuous access to data in the case of a single lost site via Global ObjectAssure erasure coding, or high-performance BC/DR through the use of replication to ensure that applications can always transition seamlessly when data becomes unavailable, with the lowest possible latency.
18. Ending with file sync and share?
Automated Sync & Share applications enable users to securely upload documents to the cloud, synchronise files across mobile devices, and easily share information with others. This is one of the more popular applications that utilise WOS, leveraging the latency-aware and data placement capabilities that are unique to the platform. WOS Sync & Share comes as a pre-integrated partner solution, from companies like CTERA® and OwnCloud®.
19. Can you provide a customer success story?
Deluxe Entertainment Services Group is a leading provider of state-of-the-art services and technologies for the global digital media and entertainment industry. Deluxe provides the technology, talent and high-quality processes to assist a broad range of customers including major motion picture studios, television networks, cable companies, advertising agencies, production companies, independent distributors and content owners.
To better manage feature-film workflow across its global footprint, Deluxe Creative Services sought an improved architecture that would enable selectively replicating data globally for easy and fast access by remote users. Numerous challenges surfaced in meeting this objective, including varied filmmaking process needs and the massive amounts of data generated.
Typically, workflow data is ingested and stored on tier-one disk storage while proxies are created and archived on tape-based storage to be passed out to directors, producers, visual effects teams, marketing and a host of stakeholders for review and editing. As data doesn’t always come in the same way, Deluxe demands great flexibility in how it’s stored, accessed, shared and retained. Additionally, projects moving through post-production create massive storage requirements. Typical feature films produce 300-to-500TB of raw camera masters, with up to 50MB per frame, depending on format. The amount of content doubles with 3D shows and ongoing demand for ultra high-definition resolution means that data sizes continue to grow exponentially.
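To put those figures in rough perspective, a simple back-of-the-envelope calculation (assuming 24 frames per second; the 50MB-per-frame figure is quoted above, and the project size is an assumption) shows how quickly raw footage accumulates:

# Illustrative arithmetic only: 50 MB/frame is quoted in the text; 24 fps and
# the 300 TB project size are assumptions.
mb_per_frame = 50
frames_per_second = 24
gb_per_hour = mb_per_frame * frames_per_second * 3600 / 1024                        # ~4,219 GB
print(f"~{gb_per_hour / 1024:.1f} TB of raw footage per hour shot")                 # ~4.1 TB
print(f"~{300 * 1024 / gb_per_hour:.0f} hours of footage fills a 300 TB project")   # ~73 hours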
In evaluating its high-performance archiving requirements, Deluxe Creative Services saw object storage as a potential fit for handling all creative assets generated during post production. By design, object storage boasts a simpler architecture than its SAN and NAS counterparts, making it well suited for building scale-out platforms to support large volumes of this kind of unstructured data.
Following a successful proof of concept validating WOS, Deluxe began deploying the complementary solutions. In rolling out its integrated content repository system, Deluxe is finding it easier to keep production workflows on track, shorten production time, and avoid pitfalls in the filmmaking process. The integrated platform also delivers short-term storage with backup capacity and long-term archiving capabilities that support the preservation of data with its associated user metadata.
Additionally, WOS provides Deluxe with high-reliability storage as the platform offers a choice of data protection capabilities, including ObjectAssure™, which provides local, replicated and globally distributed erasure coding to safeguard data at multiple locations from site failures.
20. What one piece of advice would you give to someone looking at Object Storage for the first time?
Generally, we would advise customers to look at the exact requirements of the Object Storage initiative. We still encounter plenty of customers that say “I have an Object Storage initiative, but I still don’t know how I’m going to use it….”, or perhaps, “I’m looking to Object Storage to replace a small piece of my Enterprise IT.” For those customers, WOS might not be a good fit.
For customers looking to offload vast amounts of data from their primary storage infrastructure, customers who know they have new applications that may start small today but have a growth path into the tens or hundreds of PBs, or those looking for ways to optimise the storage costs of data-intensive applications, WOS is a much more likely fit. So it is important for customers to consider which applications they will be using and how those applications will scale over time.
Every industry is feeling the drive to digitally transform. Research from analyst firm IDC found that two thirds of CEOs in large organisations will place digital transformation at the heart of their corporate strategies by the end of 2017. However, simply moving digital transformation ‘up the agenda’ is easy; delivering it is another matter entirely.
By Maarten van Montfoort, Vice President Northwest Europe, COMPAREX.
CIOs, often in charge of already over-burdened IT departments, must lead the digital revolution – but this is a considerable challenge. There is little surprise, then, that a significant 84% of digital transformation projects fail. These failures are typically a combination of the following three factors.
1. Digital transformation seen as ‘too complex’
There are no discrete start and end-points to digital transformation. As an ongoing, changeable process, it is daunting to many CIOs. This means that all too often, digital transformation is perceived as too complex, and a project doesn’t even get started in the first place. Take cloud as an example; a business makes the decision it will move some workloads to the cloud. So far, so good. The CIO investigates the cloud services available, and how this will change their infrastructure. Slowly, the picture starts to look more complicated – worries emerge over how to integrate legacy IT applications, or how to control costs, and very often, over data security and sovereignty. Eventually, the migration to cloud is shelved.
While there is no doubt organisations’ IT infrastructures have become more complex in recent years, this complexity is not a valid excuse for delaying the adoption of new digital technologies. With robust planning and clear, expert guidance – no complexity is insurmountable. A mix of the right skills and knowledge will enable CIOs to plot an achievable roadmap whatever the level of maturity in the business. The central tenet should be that everything is within ‘the square of possibility’.
2. ‘Legacy thinking’ is holding transformation back
One of the greatest barriers to digital transformation is ‘old-fashioned’ thinking, which means that grassroots ideas are often killed in the weeds. Often, this results in businesses approaching digital initiatives from the IT department’s perspective, rather than that of end-users. A shift in attitude is needed. The organisation must understand that better digital services are fundamental to engaging with users – whether internal employees or external customers – so projects must keep their needs front of mind. In addition, this shift must embrace innovative thinking, and become less concerned with getting everything ‘right’ first time.
In any sector, the companies riding the technology ‘wave’ are the most successful. These are the organisations able to create and launch new products or services in short timeframes, thanks to their agility. By embracing speed – for example, employing quick testing and quick feedback phases – ideas that don’t work can be quickly discarded, and those with promise can be rapidly rolled out.
A conscious choice to be creative, be prepared to take a risk, and to water the seeds of grassroots ideas, will empower true digital transformation.
3. Ill-defined and misaligned projects
Technology is an intrinsic part of any organisation – think of how critical IT departments are to everyday operations. Every organisational and business process is now driven by, or supported by, technology. What this should mean in practice is that there is no divide between the business and the IT department – but often, this is not the case. There are some fundamental questions that any digital transformation undertaking must begin with: what are we trying to achieve, what is our strategy, and how does technology help us achieve this? Frequently, however, organisations begin projects before they have answers to these questions, resulting in ill-defined projects that fail to deliver tangible benefits.
Digital transformation has huge potential to improve the way that businesses interact with stakeholders, how their operations are run and, ultimately, their bottom line. But to make this digital future a reality, these barriers must be overcome, and CIOs can make this happen. Firstly, by challenging the legacy mindset that continues to hold enterprises back. Secondly, by satisfying boardroom demands through a better link between the goals of the business and the goals of IT. Thirdly, CIOs must be prepared to take risks – focusing on the digital destination, rather than becoming fixated on the journey of transformation.
It goes without saying that connectivity is fundamental to the IoT. But it doesn’t matter how good the network is if the device that sits on it becomes damaged and unable to transmit critical data. As the use of IoT devices becomes more widespread, their ability to repel the elements they are exposed to – specifically water and other liquids – will be crucial to the functioning of these networks.
By Nick Rimmer, VP – Product and Technology Strategies of P2i.
These devices will often be deployed in hidden environments, requiring minimal servicing and maintenance to remain cost effective. Reliability is more important in the IoT than perhaps in other consumer devices, given their requirements for continuous uptime over lengthy deployments in the field.
However, liquid exposure poses a huge risk to the IoT as it has the potential to cause damage and downtime within the network. As the IoT develops to encompass a broader range of devices, from home appliances such as fridges and ovens to more industrial-grade sensors within wider networks, any downtime is going to have a significant impact on consumer use and on ROI for businesses.
Outdoor conditions such as humidity, rain, snow and everything in between stand to disrupt the performance of the IoT. Inside, the threat of splashes and spills caused by everyday living is no less apparent. Against the backdrop of continuous improvements being made across the mobile network, water resistance will be as vital to the success and development of this technology as its ability to connect to a network.
The aesthetics of IoT devices, particularly those deployed in harsh outdoor environments, are less of a factor than they might be in the development of other smart devices, such as smartphones. Where appropriate, many will have built-in ruggedised features you’d be more likely to see on a building site than in the hands of a consumer. But ruggedisation on its own is not necessarily the answer for these devices. As previously mentioned, some devices can spend a lot of time in isolation, so minimal servicing is a requirement if they are to remain cost-effective.
This is where nanocoating technologies can come into their own. A ubiquitous solution that provides protection from everyday liquid exposure, the low-pressure deposition process treats the complete assembled device, coating it inside and out with a nano-scale monomer which is chemically bonded to the product surfaces.
This not only prevents corrosion caused by water ingress, it also discourages ingress by reducing wicking. The effect reduces costs incurred by manufacturers and consumers alike on repairs and replacements, as well as providing overall better reliability.
For dual protection, ruggedised features such as gaskets and seals can be backed up by water-resistant nanocoating to provide an added layer of defence against water ingress. Nanocoating of IoT products also has useful application in the home as a more cost-effective way of protecting devices from the average daily splashes and spills.
P2i has a strong heritage in delivering water-resistant nanocoating technology in the hearing aid, headset and smartphone markets, and the same approach can add resilience and value to devices and electronics connected to the IoT. This liquid-repellent technology also helps deliver more continuous uptime and better connectivity.
The pursuit of better connectivity for the IoT must couple improved network infrastructure with resilient hardware that can withstand the elements whether deployed inside or out. Without factoring in water resistance, we run the risk of high-tech devices not functioning because of the most low-tech of reasons, which will ultimately hold back the age of the IoT.
Automated endpoint and software asset management helps large cinema chain run smoothly.
“When you’re in a retail environment that’s as busy as the film industry, it’s reassuring to know that our Point of Sale (PoS) and other systems are all in good shape,” says Mike Rozwadowski, Architecture Manager at Vue Entertainment.
Vue Entertainment is part of Vue International, one of the world’s leading cinema operators. It has 86 cinemas in the UK and Ireland, and holds leading positions in Germany, Poland, Italy and the Netherlands. As Rozwadowski explains: “We’ve been growing aggressively over the past ten years, and it’s important that we have the right tools to help the guys on the ground – those running the individual cinemas. Then, when they are really busy – showing a new blockbuster, for example – they can feel confident that their systems won’t let them down.”
Rozwadowski recalls that around 18 months ago he started looking for an endpoint and software asset management system. At the time the IT infrastructure for the entire business, including the 86 cinemas, was managed using Excel spreadsheets.
“If we wanted to check if our PoS hardware was suitable for future applications, we relied on the spreadsheet for this information, which took significant time to maintain. It was very inefficient and time-consuming,” he says. “We were always so busy with the day-to-day upkeep. By automating the majority of this process, we could concentrate more on overall strategy and forward-thinking projects.”
There was also no quick and easy reporting system, so decision-making and planning involved some guess work.
Rozwadowski had heard of Kaseya – but a timely call from one of its sales team convinced him that Kaseya’s VSA endpoint monitoring and management system would be worth a trial. “We kicked off using just the asset management functionality, but pretty quickly we realised that the product had a lot more to offer than we first thought,” he recalls.
Now the team uses VSA for almost any task that can be automated and controlled via a central dashboard including software deployments, running regular maintenance checks on tills, kiosk management, monitoring disk space and patch management. “Having a product which can detect a problem, log a ticket and then resolve it automatically is one of many powerful features under the hood,” says Rozwadowski.
Kaseya VSA is already being used across head office workstations and all the cinemas – on the tills, ATMs, retail systems, staff clocking in and on the managers’ PCs. Deployment to Vue’s data centres will soon be completed.
Vue outsources its first-line helpdesk for cinema staff to a third-party provider, which is now also in the process of adopting Kaseya VSA. The provider currently uses Excel spreadsheets extensively, so automation will transform its operations too.
“We’ve spent significant time embedding VSA here at head office so that we would be totally comfortable with the product. We dedicated the appropriate time to set up all the right policies and procedures to manage our environment. A key feature is VSA’s powerful remote control – for example, if we need to remote onto a till, a kiosk or a manager’s PC to fix something behind the scenes we use Kaseya Live Connect to do this. With Live Connect, we can easily troubleshoot issues without having to disrupt our end users. The modern UI and dashboard means we have the tools to detect and resolve issues much faster. We are in the process of passing this over to our third party helpdesk,” says Rozwadowski.
Now Kaseya VSA monitors and maintains Vue’s systems in a proactive way, delivering advance warnings of any risk of failure to enable remediation before users are impacted. For example, if the system detects something wrong with the PoS system, it will automatically run through a number of pre-defined scripts and policies to remove any possible errors, and allow IT admins to more quickly and effectively pinpoint the root cause of problems.
Typically, in the past something needed to stop working before it was fixed. Now with the help of VSA, Vue takes a proactive approach to IT maintenance and management. For instance, when disk space gets to a certain percentage, an email goes out to alert the team to the problem. Based on predefined procedures, VSA will resolve the issue by emptying a cache, for example, and then automatically email the team again saying, “don’t worry, it’s been solved.”
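The pattern Rozwadowski describes (detect a threshold breach, alert, run a predefined remediation, then confirm) can be sketched generically as follows. This is an illustrative Python outline, not Kaseya VSA's own agent-procedure language; the threshold, cache path and email addresses are placeholder assumptions.

# Illustrative outline of the detect -> alert -> remediate -> confirm pattern;
# not Kaseya VSA's actual procedure language. Threshold, cache path and
# addresses below are placeholders.
import shutil
import smtplib
from email.message import EmailMessage

THRESHOLD = 0.85                 # warn when the volume is 85% full
CACHE_DIR = "/var/cache/app"     # predefined, safe-to-empty location

def notify(subject, body):
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "monitor@example.com", "itops@example.com", subject
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

usage = shutil.disk_usage("/")
if usage.used / usage.total > THRESHOLD:
    notify("Disk space warning", f"Root volume is {usage.used / usage.total:.0%} full")
    shutil.rmtree(CACHE_DIR, ignore_errors=True)     # predefined remediation step
    notify("Resolved", "Cache emptied - don't worry, it's been solved.")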
“We are able to see the current status of all our devices both hardware and software. Automation has not only saved us time, but helps us utilise our resources more productively. Upgrading one of our key applications within the business used to be done manually with someone connecting to each cinema remotely and running a series of packages. With the automation in VSA, we have now reduced a 2-3 month rollout plan to a matter of weeks. We now have more time to focus on impactful projects and be forward thinking, rather than constantly playing catch up with day-to-day operations,” says Rozwadowski.
He also appreciates how easy it is to run customised reports, if, for example, the company wants to quickly find out which devices still run Windows XP, what their hardware configuration looks like or what is their patch status. “We can go back to the business with accurate information to help form their decisions – how many PoS do we need to upgrade? How many devices are at end of life? Which servers are at the end of their warranty? In minutes, we have the reports we need at the ready to effectively assess our business needs. Previously this would be carried out by pulling data from a variety of sources, which would take time to collate.”
Above all, he says that automation gives him: “Peace of mind… it’s all about clearing out the old files and defragging the hard disks so that everything is up-to-date, tidy and ready for when you do have those large bursts of volume sales.”
He adds that Vue is now trialling Kaseya VSA in the Netherlands with a view to rolling it out later in the year.
Talking to Rozwadowski, it’s clear that Kaseya VSA has made a significant difference to Vue. “It has so much functionality in one solution with an intuitive, centralised console that allows us to control all of our IT. Whether it’s using the remote control function to troubleshoot issues, writing scripts and procedures to automate time consuming tasks, or simply installing Google Chrome onto a laptop, VSA provides a robust set of features that enable us to stay on top of our IT management needs in the fast paced world of retail. All in all, it’s pretty exciting to use,” he concludes.
Kaseya customer: Vue Entertainment – cinemas/retail (www.myvue.com)
Solution: Kaseya VSA
Throughout the global economy and across all industries, companies are re-inventing themselves to become better at sensing the next big thing their customers need, and finding ways to deliver it to get ahead of the competition.
By Mike Adcock, CA Technologies.
The concept of DevOps dates back nearly 10 years now. During this time, a lot has changed. As DevOps has matured, we have seen many successful implementations, lessons learnt and copious amounts of data gathered. One thing remains unchanged to this day: DevOps is motivated by business results, without which there would be no reason to take risks. Typically, organisations are driven to improvement through one or more of the following four areas: time to market, improved quality and user experience, efficiency, or compliance. However, to achieve any of these objectives, DevOps requires changes to culture, process and tools.
As consumers have demanded more online services, DevOps has become instrumental to the digital transformation some companies have embarked on to stay competitive. For example, during the time that Netflix evolved its business model from DVD rentals to producing its own shows and delivering a video-on-demand service, a lack of commercial tools to support its huge cloud infrastructure saw it turn to open-source solutions for help. It was here that the Simian Army was created: a suite of tools that stress-tested Netflix’s infrastructure so that its IT team could proactively identify and resolve vulnerabilities before users were affected.
A global study from Freeform Dynamics and CA Technologies reveals the benefits and drivers for DevOps implementation, and highlights how culture, process and technology must come together to enable DevOps success. In EMEA, IT decision-makers surveyed saw a 129 per cent improvement in overall software delivery performance when practicing cloud and DevOps together. This compares with an improvement of just 81 per cent when practicing DevOps alone and 67 per cent when leveraging cloud without DevOps.
By combining DevOps with cloud-based tools, organisations have not only experienced 99 per cent better predictability of software performance, but also a 108 per cent improvement in customer experience over traditional software development and delivery models. A streamlined online customer experience is in high demand, and respondents cited software delivery speeds 2.6 times faster – plus more than three times better cost control for the tools and services that DevOps teams actually use.
It is clear that modern development and delivery must be supported with DevOps. The following five components are essential in enabling companies to leverage new software to meet customer needs in any deployment:
Agile management: New capabilities bridge the gap between employee autonomy and company strategy with an unprecedented level of process flexibility, supporting organisational methodologies (such as Scrum and Kanban) at the team level. It helps ensure visibility, and alignment to corporate strategy and direction.
API management: Application programming interfaces (APIs) are the unsung heroes of the application economy. Many of the world’s leading applications would not exist without them. APIs are sets of defined rules that govern how one application can talk to another, providing ready-made, universal access to whatever functionality an organisation needs to deliver.
Analytics: Analytics are necessary to provide visibility into time spent at each step in the software development lifecycle (SDLC) to enable faster software delivery. They also provide a holistic view of orchestration across the entire software delivery chain – integrating planning tools, agile management solutions, performance testing tools with release automation, operations and application testing solutions. Analytics solutions also correlate end-user, application and infrastructure monitoring to deliver business and operational insights necessary to improve digital experiences.
Integration of mainframe operations and automation tools: These allow organisations to leverage machine learning for operational intelligence and real-time dynamic thresholds. These solutions proactively detect performance anomalies sooner, and automate corrective action that prevents outages and slowdowns of mission essential systems.
DevSecOps: The entire software lifecycle is incorporated with security through DevSecOps. By detecting and addressing security defects throughout the development process, companies can reduce the risk of the most common source of breaches: attacks on the application layer.
In the current environment, being built to succeed means being built to change. Innovations supporting microservices and container-based architecture are driving overall modernisation, with technologies such as machine learning and advanced analytics. Today, traditional software development proves obsolete versus cloud, DevOps, or – ideally – both combined. Together, cloud and DevOps are fuelling the modern software factory revolution.
DW talks to Stanimir Markov, CEO at Runecast, about the company’s VMware management solution and its technology and market expansion plans.
1. Please can you provide a little bit of background on Runecast – when and why formed and the like?
Runecast was founded in October 2014, and we shipped our first product in September 2015. My co-founders and I had been working in IBM’s VMware Centre of Excellence and realised the need for an automated solution to troubleshoot VMware virtual infrastructure without having to scroll through copious Knowledge Base (KB) articles. So we decided to create a software remedy to this problem: Runecast Analyzer.
Runecast Analyzer is a proactive VMware management solution that uses current VMware KB articles and Runecast expertise to analyse virtual infrastructure and expose potential issues and best practice violations, before they cause major outages.
2. And who are the key personnel involved?
I’m the CEO, Aylin Sali is our CTO, Constantin Ivanov and Ionut Radu lead R&D and Ched Smokovic heads up sales. All of us were colleagues at IBM and are all well versed in VMware virtual infrastructure. I myself hold the highest level of VMware accreditation, VCDX (VMware Certified Design Expert) and am VCDX number 74, plus I’m an IBM Red Book author. My colleagues are all either VMware Advanced Certified Professionals (VCAPs) and/or VMware vExperts.
3. And what have been the key company milestones to date?
After our pre-seed funding in 2014, we created a beta program for Runecast Analyzer in June 2015 and shipped our first 1.0 product in September 2015, while attending VMworld in San Francisco and Barcelona. By the first quarter of 2016, we were operating on 3 continents and by the third quarter of 2016 we had broken even as a company. In the fourth quarter of 2016, we received further investment to continue our rapid expansion. We now operate on 6 continents, with resellers in all major countries and have increased our R&D, sales and marketing teams globally.
4. Can you provide a brief overview of the company’s product – the Runecast Analyzer?
Runecast Analyzer is a proactive VMware vSphere management solution that installs as an OVA format virtual appliance. Runecast Analyzer uses current VMware Knowledge Base articles and internal Runecast expertise to analyse the virtual infrastructure and expose potential issues and best practice violations, before they cause major outages.
5. In more detail, can you describe the Knowledge Base function?
The VMware Knowledge Base contains the most current and complete information about known issues in VMware products. Runecast Analyzer’s KB configuration analysis knows your specific configuration and uses this wealth of available knowledge, plus our extensive expertise, to identify and expose potential issues before they impact the business. Runecast Analyzer shows you the root cause and guides you to the resolution before the problem even happens.
6. And the compliance checks?
Healthchecks are based on standard VMware Best Practices, the vSphere Security Hardening Guide and other security standards (currently DISA STIG and soon PCI-DSS and HIPAA). Runecast Analyzer runs securely on premises, so no data leaves the data centre.
7. And the log analysis capability?
The log analysis correlates log entries from hosts, VMs and vCenters with known, documented issues to easily identify the root cause of operational problems. Instead of providing only search functionality (where you have to know what you are searching for), Runecast Analyzer searches in real time for problematic log patterns that link to KB articles, and provides you with the root cause and resolution of the current problem.
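As an editorial illustration of how this kind of correlation can work in principle – this is not Runecast’s implementation, and the patterns and KB numbers below are invented placeholders – matching incoming log lines against a table of known-issue patterns could be sketched in Python like this:

import re

# Illustrative placeholders only: real rules would come from curated KB research.
KB_PATTERNS = {
    "KB-EXAMPLE-1": re.compile(r"Heap .* grown to \d+ .* limit"),   # e.g. a heap-exhaustion signature
    "KB-EXAMPLE-2": re.compile(r"NMP: .* failed .* SCSI error"),    # e.g. a storage-path failure signature
}

def match_kb_articles(log_lines):
    """Yield (kb_id, line) for every log line that matches a known-issue pattern."""
    for line in log_lines:
        for kb_id, pattern in KB_PATTERNS.items():
            if pattern.search(line):
                yield kb_id, line

if __name__ == "__main__":
    sample = ["2017-11-01T10:12:03Z NMP: device naa.600 failed H:0x0 SCSI error"]
    for kb_id, line in match_kb_articles(sample):
        print(kb_id + ": " + line)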
8. And any other features of Runecast Analyzer?
Verbose dashboards provide an easy and efficient way to navigate through the important parts of the vSphere logs. If the issue you are experiencing is not yet a known issue documented in the VMware KB, the verbose dashboards can help you find the root cause within a few clicks. They filter the incoming logs based on important keywords (such as Error, UnableTo, FailedTo, NMP, etc.) and graphically represent the number of those log entries at any given time. Additionally, you can add search filters to narrow down the search for the root cause.
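Purely as an illustration of the keyword filtering described above – the timestamp format, bucket size and helper names are assumptions, not Runecast’s code – counting keyword hits per time bucket could look like this:

from collections import Counter
from datetime import datetime

KEYWORDS = ("Error", "UnableTo", "FailedTo", "NMP")

def bucket_keyword_counts(log_lines, bucket_minutes=5):
    """Return {(bucket_start, keyword): count} for lines of the form 'ISO-timestamp message'."""
    counts = Counter()
    for line in log_lines:
        timestamp_text, _, message = line.partition(" ")
        ts = datetime.fromisoformat(timestamp_text)
        bucket = ts.replace(minute=ts.minute - ts.minute % bucket_minutes,
                            second=0, microsecond=0)
        for keyword in KEYWORDS:
            if keyword in message:
                counts[(bucket, keyword)] += 1
    return counts

if __name__ == "__main__":
    lines = ["2017-11-01T10:12:03 vmkernel: FailedTo open volume",
             "2017-11-01T10:13:44 hostd: Error while powering on VM"]
    for (bucket, keyword), count in sorted(bucket_keyword_counts(lines).items()):
        print(bucket.isoformat(), keyword, count)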
Runecast Analyzer can also be integrated with the rest of your systems management – through a full REST API, a vCenter Web Client plug-in, a vRO plug-in, SMTP alerts and LDAP integration.
You can use granular filters to exclude certain checks from any part of your vSphere environment and customise the reported analysis.
Through continuous online or offline updates, you can ensure that you are protected against the latest known issues.
9. Can you detail some of the real life problems that end users experience that Runecast Analyzer is designed to address?
The VMware Knowledge Base contains over 30,000 KB articles, a portion of which detail known issues related to VMware products. Many of these issues can be prevented if you detect the combination of conditions described in the respective KB article and mitigate them before the problem strikes.
Here are a few examples of real-life issues that we have experienced in the past and that can be prevented by Runecast:
KB1004424
Issue: Insufficient VMFS heap size on ESXi hosts caused a failure of VMware HA. Every ESX/ESXi version has a default VMFS heap size that can serve a limited amount of accessed VM storage.
Impact: HA did not restart failed VMs. While trying to troubleshoot the problem, the customer restarted all ESXi hosts and extended the full service outage.
KB2133118
Issue: Virtual machine fails unexpectedly after a snapshot consolidation task on ESXi 5.5 U3.
Impact: Multiple VMs failed during a regular backup operation using Veeam Backup & Replication. Major service outage and operational overhead.
KB2144968
Issue: PSOD with the latest ESXi 6.0 U2 when hardware LRO is enabled and the VM is at hardware version 11.
10. And are there one or two customer success stories you can share with us?
We’re very proud to have global brands such as Verizon Wireless and Fujisoft as customers. The German Aerospace, Pacific Seafood and Indonesian Cloud also rely on Runecast Analyzer to keep their virtual infrastructure performing optimally. We’ve published many customer case studies that can be found on our website, but all of them have the same underlying message: We keep their data centres more secure, we reduce risks and save them multiple thousands of dollars per year in time and external resource costs.
11. Can you summarise the business benefits of the Runecast Analyzer, starting with the need for less troubleshooting?
For all our customers, we save that precious commodity of time, as they spend less time troubleshooting with Runecast Analyzer. By discovering and remediating potential issues before they cause outages, we significantly increase their uptime. With Runecast Analyzer, our customers minimise risk through continuous compliance, as their environment follows industry best practices and is configured in the most optimal way. All of this naturally saves money too and avoids potential audit penalties.
12. Moving on to increased uptime and security?
Availability and security are two of the most important IT infrastructure qualities that directly impact the service and business. Any downtime can lead to serious negative business impact. The vSphere layer is critical and usually hosts core IT workloads that need to be up and running at all times. It is humanly impossible for an IT admin to keep track of all known issues documented by VMware, let alone manually check all settings continuously and ensure their environment is not affected. That’s why, prior to Runecast, many of those known issues were close to inevitable and were dealt with only once an outage already occurred and the business was impacted.
Security is another important topic that has been gaining more and more attention lately, and any gaps there can hurt any business. As organisations adopt various security standards, those standards must be followed continuously and enough IT staff must be assigned to ensure compliance. Given the increasing size of IT infrastructures and the growing list of security considerations, manual security health checks become unmanageable and ineffective. Runecast Analyzer automates the security checks and gives peace of mind. In addition to the security standards, serious security bugs emerge from time to time – the Heartbleed bug is just one example – and it takes organisations too long to detect such risks and ensure they are protected.
13. And continuous compliance?
Runecast Analyzer offers the most up-to-date industry best practices and ensures that any new documented issues and lessons learned are checked on customers’ environments. The virtual infrastructure is analysed in minutes, with a report presented on the state of the environment.
The log analysis is in real-time and the configuration analysis can be scheduled automatically. With the continuous analysis, combined with regular update of the latest known issues and best practices definitions, Runecast Analyzer brings continuous compliance for vSphere.
14. Please can you tell us about the work Runecast does with its key partners?
Our key technology partner is, of course, VMware. We are a VMware Ready ISV and work closely with VMware technical and sales teams across the globe. Working with VMware, we are one of the first ones to certify a vCenter HTML5 Web Client plugin and we are finalizing testing and certification of Runecast Analyzer on VMware’s VMC on AWS offering.
15. Similarly, can you tell us about Runecast’s routes to market – presumably through the Channel?
We do work with channel partners, absolutely. But we know that markets have to be created by vendors these days; channel partners have so many products in their portfolio they are only inclined to take on new, innovative solutions once a market has been identified and proven. Therefore, it depends to a certain extent on the region, but once we have partners on board, we then only sell through the channel.
16. And what plans do you have to develop this coverage over time?
Our expansion plans include continuing to increase our own footprint but also expanding our channel; in our experience, a leveraged sales and marketing model is the best way to scale. We also see the opportunity for potential OEM relationships, and for cloud providers utilising VMware technology to embed our solution into their offerings to ensure SLAs are met.
17. What can we expect from Runecast over the next 12-18 months in terms of technology development?
Our near- and mid-term focus remains on VMware; we will be adding VSAN and NSX to our capability by the end of this year to complement our vSphere solution. However, the way in which our algorithms work means that Runecast Analyzer can be adapted and made relevant for many other platforms. By combining machine learning with our own expertise, we will provide auto-remediation to help IT make only informed decisions. Our vision is to become an "augmented IT" company – revolutionising IT operations by leveraging the available IT knowledge and documentation.
18. And, more generally, in terms of company growth (turnover/personnel etc.)?
We are constantly hiring! As a private company we don’t disclose turnover, but I’m proud to say that for a start-up of our size to already be profitable is a great testament to our team and our solutions’ market acceptance. We’re continually increasing our R&D teams and also adding to our field sales personnel. I envision our growth trajectory to continue to be very aggressive as our total addressable market (TAM) consists of over 500,000 VMware customers!
19. Any other comments?
Runecast the company and Runecast Analyzer the product have made great strides in a relatively short space of time. With the help of our investors, partners and employees, we will continue to develop solutions that make IT operations run smoothly.
The biggest disrupter of established broker industries in the last decade is without a doubt blockchain technology. Originally developed as the technology behind cryptocurrencies like Ethereum and Bitcoin, investment in blockchain has increased exponentially over the last few years. To date, this investment has been focused on blockchain start-ups targeted at disrupting the financial services industry.
By Gary McKay, Managing Director and Founder of APPII.
The global finance industry is the largest and most lucrative trusted broker service in the world. Every day, trillions of pounds are transferred by countless individuals and organisations who rely on financial service organisations to conduct their business. Heavily reliant on manual processes but hiding behind a digital disguise, the finance industry is beset by rising expenses and process delays, and is a constant target for cybercrime.
To tackle this myriad of issues, many have turned to blockchain technology. It is this attention that has made many believe that the technology’s only real potential lies within the world of finance.
Yet the finance industry isn’t the only trusted broker industry – one where two parties wanting to exchange something of value rely on a third-party broker to enable the transaction. In broker industries such as real estate, recruitment and art, what could be a straightforward two-party exchange becomes a convoluted three-or-more-party transaction – leading to all the common costs and delays we’re all too well aware of.
As a vast, globally distributed ledger that runs on millions of devices and is able to record anything of value, blockchain has the potential to dramatically improve efficiency and security in all established broker industries.
One of the most significant ways blockchain will impact all trusted broker industries is by moving where the value lies. Currently, value is centralised, held by those companies that service the industries. However, as a decentralised technology, blockchain will shift the value from today’s centralised broker model towards the edges.
The digital revolution has already caused this to happen in a number of industries – the advertising industry for one. The US print advertising market was worth $16bn in the early 2000s, but due to the boom in digital advertising, which was faster to execute and cheaper to produce, the market shrank to $5bn by 2009. Value still existed in advertising; however, the industry was seriously streamlined, which led to $11bn in turnover vanishing. This will be the same story for all global broker markets that can be digitised. It’s true that profit and turnover will be reduced, but there will be more value than ever before for participants – and this will drive business and economic growth.
One broker market that will be significantly improved by integrating blockchain technology is recruitment. Globally, recruitment is worth £320bn and the UK industry alone is worth around £35bn. In 2016, the global verification sector was valued at $8.7bn, and this is predicted to rise to $9.7bn by 2021. Yet recruitment processes are onerous and highly inefficient, with long and costly onboarding and verification processes.
Recruiters who integrate blockchain technology, such as APPII or Technojobs, will be able to offer candidates verified career profiles for the first time in the history of the industry. Every assertion added to the profile, such as previous job roles and experience, will not only be stored securely on the blockchain, but also automatically verified by the companies it relates to. This will significantly streamline the process, reducing the time and money spent checking that candidates are who they say they are and have done what they say they have done.
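As a rough sketch of the underlying idea – not APPII’s actual implementation – each assertion can be hashed together with the identity of the verifying organisation and the previous record, so that any later tampering is detectable. A real system would add cryptographic signatures and a distributed ledger; the Python below, including the example assertion and verifier names, is illustrative only.

import hashlib
import json

def record_assertion(chain, assertion, verifier):
    """Append an assertion (e.g. a previous job role) verified by an organisation."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"assertion": assertion, "verifier": verifier, "prev": previous_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def chain_is_intact(chain):
    """Re-derive every hash; any tampered assertion breaks the chain."""
    previous_hash = "0" * 64
    for block in chain:
        body = {"assertion": block["assertion"], "verifier": block["verifier"],
                "prev": previous_hash}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        previous_hash = block["hash"]
    return True

if __name__ == "__main__":
    cv = record_assertion([], "Software engineer at ExampleCorp, 2014-2017", "ExampleCorp HR")
    print(chain_is_intact(cv))   # True; editing the assertion afterwards would print False

The point of putting such records on a public or consortium blockchain, rather than in one recruiter’s database, is that no single party can quietly rewrite history.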
These ‘Intelligent Profiles’, or active, verified CVs will be easy to update for candidates, meaning it will be easier to search and apply for dream jobs – ultimately reducing the existing friction found in recruitment. Candidates will also enter the hiring cycle with the knowledge that their CVs are viewed with trust from the beginning.
For both employers and recruiters, Intelligent Profiles will make it much easier and quicker to find the best candidates as they will already be verified, increasing productivity for those both in-house and agency-side. Importantly, verifying identity is essential for ensuring personal information refers to the correct person – especially when adding additional data to the Intelligent Profiles. Platforms, like APPII, need to introduce biometric identification to ensure identity fraud cannot be committed. Harnessing blockchain in the recruitment sector will increase value for all and ultimately improve overall success rate for the industry.
Blockchain’s potential has yet to be truly realised, particularly in industries outside the world of finance. However, it has the ability to underpin huge positive disruption within almost every single trusted broker industry. While the media may continue to focus only on financial applications of blockchain, we must start to think about how the technology can be harnessed across other industries as well.
Craig Walker, Director Cloud Services at Alcatel-Lucent Enterprise, explains that, although it may be a long time before we see the full likeness of “HAL” from “2001: A Space Odyssey”, the technology is already here that can improve the ways businesses operate.
With the wave of personal assistants, such as Siri, Cortana and Google Assistant, and new start-ups leveraging AI and analytics to build personal companions, it’s becoming clear we are moving toward a new voice-controlled relationship with technology. As we have already seen in the consumer market, it is all but a given that these voice-activation systems will eventually make it into the enterprise environment, as the potential benefits of these systems could be tremendous in simplifying and automating activities.
So, where are we on the road to voice-first? Voice analytics firm, VoiceLabs, has provided a view on the various layers needed to support a voice-first approach in the consumer world. However, to move from the simple consumer-based use cases to providing a more voice-first environment in the enterprise world, a few things will need to happen.
Security will be critical if our enterprise systems are to start relying on voice commands – should anyone be able to command critical equipment or systems just by speaking? The answer, clearly, is no. Privacy too is a top concern, and while a scenario such as a physician issuing verbal orders seems simple enough, we need to think about it in the context of regulations. Are a patient’s rights – as per HIPAA regulations in the US – violated if these verbal commands expose the patient’s medical information to third parties?
We are already seeing the next step of voice recognition systems where the technology is able to support secure access.
Banks are among those introducing voice authentication to their telephone banking systems. While this may leave some customers a little concerned over the security of their account, my feeling is that it will follow the adoption cycle we saw in e-Commerce where the initial concerns for credit card fraud needed to be overcome before we saw the meteoric rise in online purchasing.
We will continue to see innovation in voice recognition systems, with improvements that will make voice security viable in an enterprise environment and ensure that only authorised users with the right privileges can perform the associated actions.
And whereas your microwave might not be spying on you, some devices will be always-on, always listening and potentially recording. A few well publicised cases of privacy invasion, commercial espionage or legal jeopardy could stall adoption. This suggests that a big On/Off switch or function needs to be included in voice-first products, so that users may get the benefits without risking the downsides of constant monitoring. Secure software access would also need to be in place in the products to prevent and detect hacking efforts.
The first use cases are primarily around voice response systems – whether from a call centre perspective or those implemented in our cars and smartphones. But as many of us know from firsthand experience, this works marginally at best. Recognition and contextualisation need to be refined through technological developments before we can realistically think about enterprise-wide adoption.
Research programmes such as Carnegie-Mellon University’s Sphinx project continue to enhance language recognition capabilities. An Internet Trends report by Mary Meeker indicated that in 2016, Google’s voice recognition system could recognise over five million words with around 90 per cent accuracy – but that’s still not extensive or accurate enough. Is 90 per cent accuracy good enough to interact with a life support system in a hospital or a utility provider’s network?
It’s not just about recognition of the words either, it is about what to do with the words. Here is where cognitive engines and AI come into play. Some of the biggest players in the industry – for example Microsoft, with its open source cognitive recognition engine – can be leveraged to understand the context of the words. “How do I get to Green Park?” may sound simple enough, but it needs to be put into context. Location awareness could indicate that you likely mean Green Park in London, and assumptions can be made about your mode of transportation. If you were sitting at Piccadilly Circus, the answer could be, “Take one stop, westbound, on the Piccadilly line” – but here we assumed it was Green Park in London and not Green Park in Manchester or Birmingham.
The real challenge comes in what is behind the voice recognition systems – both the integration of the IoT devices into the system itself, and ensuring the commands requested make sense. Here, we need to further leverage those cognitive engines as a check-and-validation system. Think of someone accidentally giving a command to “Turn off cooling system to reactor 4” instead of reactor 3 – which has already been shut down – or a doctor using the system to prescribe a harmful dose of medication because they accidentally said 400 grams instead of 400 milligrams. These might be extreme examples, but there will need to be a holistic view of the actions that are being automated to prevent human error, and broader intelligence to understand the actions related to voice-controlled requests. For example, maybe “Turn off cooling system to reactor 4” was correct, but the system would then need to understand the set of operational procedures to implement those actions.
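A toy sketch of such a check-and-validation layer might look like the Python below. The reactor states, drug limits and command fields are invented solely to mirror the examples above; they are not taken from any real system.

# Assumed state tables, for illustration only.
REACTOR_STATE = {3: "shut down", 4: "running"}
MAX_DOSE_MG = {"paracetamol": 1000}

def validate_command(command):
    """Return (accepted, message) after sanity-checking a parsed voice command."""
    if command["action"] == "shutdown_cooling":
        state = REACTOR_STATE.get(command["reactor"])
        if state != "running":
            return False, "Reactor %d is already %s; please confirm." % (command["reactor"], state)
    elif command["action"] == "prescribe":
        limit = MAX_DOSE_MG.get(command["drug"], 0)
        if command["dose_mg"] > limit:
            return False, "%d mg exceeds the %d mg limit; please confirm." % (command["dose_mg"], limit)
    return True, "Command accepted."

if __name__ == "__main__":
    print(validate_command({"action": "shutdown_cooling", "reactor": 3}))
    print(validate_command({"action": "prescribe", "drug": "paracetamol", "dose_mg": 400000}))

The essential design point is that the validation layer consults system state and policy before any action is executed, and asks for confirmation rather than refusing outright when the request is unusual but potentially legitimate.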
An interesting element that could tie in strategically with the development of true voice-controlled enterprise environments comes from the innovations occurring in the traditional voice communication world. We are seeing the explosion of CPaaS (Communication Platform as-a-Service) in the enterprise, leveraging APIs to transform today’s applications into voice-integrated solutions. Some of the major voice communication vendors are now entering this market, providing CPaaS infrastructures with a standardised set of APIs to enable companies to integrate communications into their business processes.
While we traditionally look at integration as incorporating voice and video services into existing applications – think of a banking application that allows you to move from an online session to a voice call with your banking advisor – I believe CPaaS will play a big part in that “voice-first” environment, with its rich API infrastructure used to communicate with applications and things.
Beyond the communications infrastructure requirements, just how CPaaS or other platforms communicate with devices really needs to be standardised before we will see rapid development of voice technology. Each of today’s consumer-based voice-controlled systems has its own interfaces and its own API integrations and, as with the historic “Beta vs. VHS” battle of decades ago, this may lead to product obsolescence. Just as a consumer doesn’t want to invest in the latest “smart coffee maker” only to find that the platform that controls it has just been discontinued, an enterprise wants to ensure that the investments it makes in new technologies won’t be obsolete before it is able to realise a return.
The good news is there are a set of technologies in the works to help minimise potential obsolescence. Frameworks like IoTivity are being developed to build a standardised platform. We are already seeing the value, benefits and rapid expansion of new voice applications for consumers. In the near term, we will see some of the basic use cases move into the enterprise. Longer term, as advances continue to be made in voice recognition, voice security and simplification/standardisation in device connectivity, we will see more and more voice-first activities in both the consumer and enterprise world to help reduce complexity and improve our productivity.
What’s the impact on the IT department when a new employee joins the organisation? Usually not much. Typically IT will assign a device and ensure it’s adequately protected, but research from CORETX suggests that rarely does it go much further than that. In a survey of 100 UK IT managers, only 11% said that they have someone dedicated to device lifecycle management, highlighting how vulnerable organisations are and that many may be taking the wrong approach to protecting devices.
By Peter Low, director of field and lifecycle services at CORETX.
With so few organisations proactively managing a device throughout its entire lifecycle, it’s no surprise that over a third of organisations (35%) don’t know where their company-owned devices are at any given time, or even what information those devices contain. Lacking insight into your devices not only makes your organisation vulnerable to security risks, it can also mean that your devices aren’t achieving full ROI over their lifetimes.
The answer to the device management challenge lies in not seeing the lifecycle of your devices as limited to the duration of your staff’s employment. Rather, the IT team needs to broaden its perspective and understand the different stages of a device’s lifecycle. With the right device strategy in place, organisations can identify requirements, such as device capabilities, from the start. Only then can they identify the need to take action when a device’s performance or security is below par.
Over a third of organisations don’t provision devices until the day a new employee starts, according to our research. It’s likely that those who leave it to the last minute don’t have or indeed follow a device strategy. Without one, you’re not giving full consideration to the employee’s role and requirements, nor are you able to identify the best technology to support them or help the organisation meet its wider objectives.
A ‘one size fits all’ approach to devices might not be enough. Establishing each employee’s technology needs alongside company objectives, now and in the future, can support a strong case for ROI. An effective strategy also means that you can optimise the procurement of devices through forward planning. This includes ensuring effective terms, and determining whether you require commodity or bespoke purchasing.
With the right plan in place for hiring staff and assigning devices, the IT team has enough time and information to provide devices to employees and ensure the correct licensing and security apps are applied. As devices are an organisation’s network endpoint, they can be an entry point for cyber-attackers, highlighting the need for physical security through passwords and two-factor authentication, if possible.
When deploying a device, it’s important to keep accurate records of its location, so start as you mean to go on. We found that 20% of organisations have a paper-based system for recording where devices are stored and 5% have no system at all. It’s essential to have clear and robust records from when you buy a device to the end of its life. Ensure that records cover who has the device, what’s loaded on it, the permissions of the individual it’s been assigned to, as well as the licenses used. This makes it easier to retrieve them when maintenance needs to be carried out.
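As a simple sketch of the kind of structured record this implies – the field names are illustrative, not a particular product’s schema – each device could be tracked along the following lines rather than on paper:

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DeviceRecord:
    asset_tag: str
    model: str
    assigned_to: str                                                # who has the device
    location: str                                                   # where it is stored or deployed
    installed_software: List[str] = field(default_factory=list)     # what is loaded on it
    licences: List[str] = field(default_factory=list)               # licences in use
    permissions: str = "standard"                                   # access level of the assigned user
    purchase_date: date = date(2017, 1, 1)                          # placeholder default
    retired: bool = False

if __name__ == "__main__":
    laptop = DeviceRecord("IT-0421", "ThinkPad T470", "j.smith", "London office",
                          ["Windows 10", "Office 365"], ["O365-E3"], "standard",
                          date(2017, 6, 1))
    print(laptop)

Keeping these records in a queryable system, rather than on paper, is what makes later maintenance, licence reviews and redeployment decisions straightforward.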
Once the device is up and running, the employee now has control of its use. According to our research 40% of companies have found sensitive personal information on company devices, and 37% have found nudity or pornography. User training, which reinforces employee care for their devices, is essential so that employees understand the implications and security risks of keeping personal information on their phones and laptops. They should also know the repercussions of downloading unapproved apps and materials.
As employment progresses, devices begin to store more data, often resulting in a decline in performance. Discovery and management tools can be used to track and monitor devices remotely as they reach this point. Having a process in place for when devices are underperforming can help ensure the productivity of the device or employee isn’t impacted.
It’s a similar case for security. Ensure you’re constantly able to actively monitor devices and how they’re protected, whether that’s through anti-virus protection, malware prevention, authentication and password policy enforcement, or disaster recovery processes. Ensure that your security platform gives you the right balance of protecting the device from external threats while still giving employees the freedom they require.
It isn’t just cyber-attacks that put devices at risk - the mismanagement of licensing can be as big a threat to the organisation. Given that only 13% of IT managers rate device software management (including licensing) as the most challenging aspect of managing devices, many are undervaluing the importance of an effective licensing policy.
Monitoring should include license renewals, transferring licenses to different devices, and reviewing employees’ access rights to different applications. Reviewing licenses across all devices on a regular basis should be part of your maintenance planning to stay compliant with legal requirements and avoid fines.
No matter how long the user has been employed, once a device has reached the end of its life, the IT team must decide whether the device should be redeployed or retired. 38% of IT managers say that one of the biggest challenges of redeployment is not being able to transfer data across devices. Over a third say they don’t know where devices are, and another third don’t know what software is on the device to be able to transfer it. All this information needs to be recorded at the deployment stage and updated when the device reaches its end of life, as it aids the planning and logistics required for redeployment.
If the IT team concludes that a device has reached the end of its life, they must follow a protocol that makes the process secure. The estimated 49.8m tonnes of e-waste that will have been generated by 2018 has led to strict directives across the world on device disposal. But a third of IT managers don’t know their legal obligations with regards to disposal. Organisations should look at their responsibilities in recycling or selling wiped devices when possible to reduce waste and remain compliant.
Given that failure to deploy a device management strategy can mean legal penalties, loss of productivity or theft of data, the risks are too severe not to have one. From when an employee accepts a job offer to when they leave, an organisation’s device strategy must apply; simply waiting until the employee leaves before you establish the state of their device poses a significant security risk. With 32% of organisations saying they don’t have enough resource in the workforce to manage devices, managing a device’s full lifecycle can be a drain on the IT team. Consider using an IT MSP who can create and implement a strategy for you and, at the same time, keep your network’s endpoints performing and secure.
With the number of connected devices expected to hit 50 billion by 2020, chances are if your business isn’t using this next generation of connected intelligent devices (collectively known as the Internet of Things, or IoT) today, it soon will be – and so will your suppliers and customers. This rapid rise of the IoT offers vast opportunities for organisations of all sizes and in every sector to improve internal efficiencies, serve customers better, enter new markets or build new business models.
By Hubert da Costa, VP EMEA, Cradlepoint.
More IoT devices are coming online each day, and everyday necessities such as our fridges, kettles and cars are evolving fast. Even the way we manage our homes and offices and receive healthcare is changing. Meanwhile, in the world of business, companies are using connected devices to create more efficient production systems and supply chains. From tracking shipments across the world to monitoring gas and oil pumps in the field, even the farming industry is digitally tracking livestock to better monitor location, health and behaviours. But while the IoT offers significant opportunities for the enterprise, it also poses a variety of risks that need to be addressed fast.
There are several challenges to securing the IoT, chief among them the fact that IoT scalability and security appear to be at odds. IoT sensor devices are, by their very nature, resource-constrained, containing very little processing power or computing capability. They are designed to perform their exact function and nothing more. This simplicity is necessary to scale IoT systems and keep costs down, but it also makes it impossible to achieve the enterprise-quality security hardening one would normally expect to see in enterprise-class technologies.
Another challenge is ensuring the security of the WANs used with IoT deployments. The Mirai malware attack, discovered in 2016, is an example of the potential scale of a WAN security breach. The Mirai attack affected networked devices — primarily IP cameras running Linux. More than 100 countries were affected, and security expert Brian Krebs places the size of the attack at a record-setting 620 Gbps. Many blue-chip companies were impacted, and variants of the Mirai botnet were used to attack Liberia’s entire telecommunications infrastructure.
From the app to the sensor, data transmission is at the core of IoT. At last data generated remotely by wearables, sensors, smart devices and other monitoring systems can now be gathered, analysed and responded to in real-time.
This is ushering in an era that extends interoperability beyond the walls of the traditional enterprise to link customers, suppliers and partners in ways that make entirely new business models possible.
This monumental shift to a brand new data-centric world means companies will need to consider how they restructure networks to cope with the growing influx of data streams and workloads.
But that’s not all; enterprise networks will also need to be adjusted to cope with the fact that the connected organisation’s most important network activity will increasingly take place at the Network Edge.
When it comes to performing powerful real-time analytics, organisations will need to seek out efficient and effective ways to conduct data analytics and manage machine intelligence closer to the Network Edge – which is where data generated by IoT devices is captured.
The benefits of this approach are twofold. Firstly, processing closer to the Network Edge significantly reduces opportunities for data to be compromised. Secondly, processing big workloads at the Network Edge reduces the volume of data that needs to be backhauled to the enterprise.
Revisiting legacy network architectures not only makes it possible for enterprises to take digital transformation to the next level – for example, leveraging cloud apps that make it easier to interpret and act on big data – it also paves the way for ensuring that traffic can be processed, relayed and acted upon without leaving its original location.
With IoT deployments on the up, enterprises will face some critical decisions around whether it’s better to go it alone and invest considerable time developing IoT projects in-house, or purchase a ready-made IoT solution.
It’s not an easy choice but with the IoT expanding so rapidly businesses can’t afford to rely on ad hoc solutions that were never designed with longevity, scalability or ROI in mind.
Using a ready-built IoT platform will make it easier to get actionable value sooner. But given the complexities that surround areas like data processing, analytics, interoperability and security, most organisations will need to work with an experienced partner that is able to provide solutions engineered with the realities of today’s enterprise networks in mind.
When evaluating potential partners to assist with the implementation of IoT projects, enterprises will need to assess whether a provider’s solution simplifies the deployment and management of IoT technologies. They must ensure that the solution does not pose a business-critical security risk and is not simply a legacy product marketed as enabling IoT projects. Whatever businesses choose, it must enable their infrastructure to cope with the massive data loads associated with the IoT.
Technical innovations and increasing digitisation are a mixed blessing: On the one hand, we benefit from them as they simplify our everyday life and can help us to overcome challenges. On the other hand, they present new difficulties and problems. The concept of the smart city, which has been under development around the world for some years now, is a perfect example of this.
By Mirko Brandner, Technical Manager, Arxan Technologies.
Whether it’s growing traffic volumes, environmental pollution, dissipation of energy, or growing mountains of waste – the smart city of the future has the answer for a number of problems faced by our cities. The answer being, the internet of things, i.e. millions of connected, digitised and sensor equipped devices and infrastructures. From connected automobiles within a car sharing service, smart traffic light circuits and energy-efficient street lighting, to sensor equipped garbage cans or irrigation systems in parks – everything is possible.
But environmental compatibility, comfort, and resource efficiency do not come without their challenges. Not only is it difficult to cater for the immense amount of data and rapid analysis that comes with smart cities, but even more concerning is the susceptibility of smart cities to cyber-attacks. Something all security experts agree on is that the smart city of the future is insecure.
One of the greatest weaknesses of the IoT is the use of insecure devices that lack sufficient security testing, allowing the devices to be hacked and fake data to be fed to them. This happens because, during the development of IoT devices and applications, functionality and customer orientation still have the highest priority for vendors. Even in times of increased connectedness, security and data protection are still neglected – be it for cost reasons, time pressure or limited processing performance.
What this means for smart cities and connected infrastructures was demonstrated by security expert Cesar Cerrudo some time ago. On numerous trips through big American cities such as New York, Los Angeles and San Francisco, he demonstrated how thousands of traffic control sensors were vulnerable to attack. Cerrudo showed how information coming from these sensors could be intercepted from 1,500 feet away – or even by drone – made possible because one company failed to encrypt its traffic data. This enables hackers and cybercriminals to manipulate traffic data, permitting them to cause faulty traffic light circuits, traffic jams, large-scale traffic obstruction or even dramatic accidents.
In the case of cyber-attacks on smart cities, millions of devices are potentially threatened by manipulations or malware infections. Therefore, a well thought out security strategy is indispensable. This starts with identifying and then prioritising the critical infrastructure. Only those who can identify and clear away vulnerabilities, security flaws, malicious environments, outdated operating systems, etc. in time, are able to prevent serious failures and manipulations.
The best possible protection against hacking attacks is a security solution that is embedded within the IoT application itself. Instead of constructing a fence around the device and its software, applications need to be hardened with effective protection solutions such as obfuscation or whitebox cryptography, as well as with advanced RASP (Runtime Application Self-Protection) technologies. Protected in such a manner, applications are able to defend themselves against all kinds of attacks with individually defined actions, e.g. informing the provider of the IoT device that the software has been modified. Thanks to these application hardening technologies, the application's sensitive binary code – its crown jewels, so to speak – is proactively protected.
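As a crude illustration of the self-protection principle – real RASP and obfuscation products work at the binary level and are far more sophisticated, and the expected hash and reaction below are placeholders – an application might verify its own integrity at start-up and react when it has been modified:

import hashlib
import sys

EXPECTED_SHA256 = "put-the-build-time-hash-here"   # sealed into the build; placeholder value

def current_code_hash():
    """Hash the running program's own source/binary file."""
    with open(sys.argv[0], "rb") as source:
        return hashlib.sha256(source.read()).hexdigest()

def self_check():
    if current_code_hash() != EXPECTED_SHA256:
        # Individually defined reaction: e.g. notify the provider and refuse to run.
        print("Integrity check failed: code has been modified. Reporting and exiting.")
        sys.exit(1)

if __name__ == "__main__":
    self_check()
    print("Integrity check passed; starting application.")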
Smart cities offer great opportunities, especially for rapidly growing cities which have to deal with population growth and increasing traffic loads. Nonetheless, when it comes to IoT innovation, security, data protection and privacy have to be top priorities if these innovations are to pay off in the long run. An important factor here is education. The issue of security must be a top priority in all companies and organisations. Suppliers and vendors of IoT devices and technologies need to be better skilled, and should dedicate time to discussing risks and informing their customers about possible threats.
Hybrid IT has gone from burgeoning trend to widespread adoption in only a few years, with businesses eager to opt for an environment that encompasses both on-premises and cloud infrastructure services.
A Q&A with Kong Yang, Head Geek™, SolarWinds.
The modern business is becoming more connected, thanks to the proliferation of increasingly smart technologies being integrated into the enterprise. As a result, hybrid IT has been more readily adopted as a means to cope with the changes this represents, offering a more agile and scalable approach to integrating and delivering an IT environment.
The SolarWinds IT Trends Report 2017: Portrait of a Hybrid Organisation found that 92 percent of respondents said their organisations have migrated critical applications and IT infrastructure to the cloud over the past year. The key benefits cited by these respondents included improved efficiency, cost savings, and scalability, and nearly three in five organisations have received either most or all expected benefits since migration.
For all the promise of hybrid IT, many hurdles need to be overcome before an organisation can bring out its full potential. Sometimes, this is too much for organisations that don't have the resources – whether it's time, money or personnel – to commit to making a hybrid IT environment a success.
The SolarWinds survey found that 22 percent of respondents migrated applications and infrastructure to the cloud, only to ultimately bring them back on-premises. The top reason for this was security and compliance, which is unsurprising, given the escalating level of threats posed by increasingly advanced, socially engineered cyberattacks.
Businesses rely upon data and a data breach can severely damage an organisation's resources and reputation. As a result, many IT professionals are wary of outsourcing services to the cloud, where security and governance is out of their full control.
Hybrid IT also represents a complete change in the way that an organisation's environment is run and, as a result, IT professionals need to adapt and learn how to manage its complexities. This has resulted in an IT skills gap, with six out of ten respondents citing this as the biggest hybrid IT challenge.
The skills gap has emerged due to the increasingly complex role that IT professionals now play in an organisation. In years past, when IT environments existed almost entirely on-premises, IT professionals had a clear sense of where their duties began and ended.
Now, cloud services have made IT professionals' remits more ambiguous, and it's harder to distinguish which aspects of an environment fall under their responsibility to manage, and which belong to the cloud service provider (CSP).
It also means that IT professionals are not only expected to understand and manage the performance and health of on-premises applications, but those based in the cloud, too. As the breadth of their duties grow, IT professionals are finding it harder to keep up and learn the skills demanded by organisations eager to make the most of hybrid IT.
Forty-five percent of the study's respondents do not believe that IT professionals entering the workforce possess the skills necessary to manage hybrid IT environments. This is a genuine concern for organisations that rely upon IT professionals to keep the revenue generating applications and systems of their business afloat, including maintaining application quality of service that end-users expect.
Steps are being taken to address the skills gap, with the survey finding that 57 percent of organisations have already hired or reassigned IT personnel for the specific purpose of managing cloud technologies. However, there are other methods that can be adopted to ensure the IT skills gap doesn't restrict you from seeing the full potential of hybrid IT.
One of these methods is adopting a comprehensive hybrid IT monitoring tool, which can address the lack of visibility IT professionals face across both on-premises and cloud environments. A monitoring tool can offer a data-centric, single-pane-of-glass view across the entire application stack, enabling IT professionals to better understand and manage the environment as a whole. A proper tool can break down the silos between on-premises and the cloud while quickly surfacing the single point of truth. In addition, it can also help build trust with cloud service providers. The visibility afforded by a monitoring tool can help ensure that IT professionals understand performance behaviors while keeping cloud service providers honest with their service level agreements. This is especially key during troubleshooting and remediation of issues.
Next, monitoring as a discipline can be embraced to deliver greater results and bridge the technology skills gap. The rigor that comes with monitoring as a discipline provides IT professionals with a consistent process to understand and manage their hybrid environment. Monitoring with discipline encompasses eight skills – discovery, alerting, remediation, troubleshooting, security, optimization, automation and reporting. These core skills empower IT professionals to tackle any hybrid IT environment with confidence. Technology and processes change, but the end goal remains the same: to keep applications running healthily and efficiently, and the data secure.
In closing, the lack of clarity across a hybrid IT environment is one of its greatest challenges, so a comprehensive monitoring tool and a set of core skills can help ease this burden and help businesses maximize the potential benefits offered by hybrid IT.
Most of the news around cybersecurity tends to be negative, focusing on the latest techniques deployed by attackers and the mounting costs as more and more organisations fall victim. While the volume and sophistication of attacks continue to rise, however, the good news is that organisations are strengthening their defences as well.
By Brian Hussey, VP of Cyber Threat Detection & Response for SpiderLabs at Trustwave.
Our 2017 Trustwave Global Security Report, which examines the results of thousands of our investigations into security incidents, found that incident detection times have improved significantly over the last year. Across the incidents we investigated in 2016, the median time from intrusion to detection of a compromise had fallen to 49 days, down from 80.5 days in 2015. The figures encompass a huge range, from same-day detection to one particular case lasting more than five years. Overall, however, the trend is a positive one, particularly as we see the emergence of more sophisticated malware specifically designed to hide its presence for extended periods of time.
One of the most powerful techniques we see being used to evade detection is deploying malware that resides only in a system’s memory rather than on disk. Many traditional security measures search the system for a particular hash, and will find no trace of malware hidden in this way. A particularly prominent example of the last year was the PoSeidon malware family, which targets point-of-sale (POS) systems. The PoSeidon binary is a simple injector into svchost.exe, but while this still resides on disk, the credit card scraping malware only lives in memory.
Malware using this tactic can only be discovered through memory analysis, using a memory image to determine information about running programmes. While there are automated tools available to assist with analysis, an investigation generally needs a trained and experienced professional, who will then be able to reverse engineer the malware to contain the breach and close the infiltration vector.
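As a simplified illustration of why this matters – this is not Trustwave's tooling, and the pattern below is a deliberately simplified track-2 layout – a memory image can be scanned for the payment card data that a POS scraper holds only in RAM, something a disk-based hash check would never see:

import re
import sys

# Simplified track-2 shape: ;PAN=expiry/service-code/discretionary-data?
TRACK2_PATTERN = re.compile(rb";\d{13,19}=\d{7,}\?")

def scan_memory_image(path, chunk_size=16 * 1024 * 1024, overlap=64):
    """Stream through the image in chunks and report offsets of candidate track-2 hits."""
    hits = []
    with open(path, "rb") as image:
        tail = b""
        offset = 0
        while True:
            chunk = image.read(chunk_size)
            if not chunk:
                break
            data = tail + chunk
            for match in TRACK2_PATTERN.finditer(data):
                hits.append(offset - len(tail) + match.start())
            # Keep a small overlap so matches spanning chunk boundaries aren't lost
            # (a hit inside the overlap may be reported twice; acceptable for a sketch).
            tail = data[-overlap:]
            offset += len(chunk)
    return hits

if __name__ == "__main__":
    if len(sys.argv) > 1:
        for position in scan_memory_image(sys.argv[1]):
            print("Possible track-2 data at offset", hex(position))

In a real investigation this kind of sweep would be one small step alongside process, module and injection analysis performed with dedicated memory-forensics frameworks and an experienced analyst.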
The most popular technique we encountered during our investigations over the last year was the ability to download additional malware from a remote server, which was present in 36 per cent of malware analysed. The downloader function is usually an additional feature alongside the malware’s main attack, and enables the attacker to sustain and escalate their compromise even after the initial malware is discovered. Other popular tactics included using process injection to hide within another legitimate process on the system, or implementing a remote administration function to provide the attacker with a backdoor into the system.
While overall detection times amongst organisations have improved considerably, there was a significant difference for incidents that were self-detected – either through the organisation's own internal teams or through a third-party service provider. These breaches were discovered, on average, 60 per cent faster than those found by an external party such as law enforcement or a regulator.
The median detection time for internal discoveries was also just 16 days. Organisations that could detect breaches themselves were also able to contain the incident more quickly on average – an extremely important factor when every additional day leaves the attacker free to deal more damage.
However, the in-depth investigation needed to fully understand and shut down a more advanced attack requires a specific set of experience, skills and technology not normally accessible to most IT teams.
An in-depth investigation is essential if an organisation is to be sure exactly what they were hit with and fully guarantee they are secure. The popular trend of blending malware together in a single attack means that there are frequently multiple different infections that must be tracked down and contained. In more sophisticated attacks, we also commonly see threat actors deploy multiple attacks simultaneously to mislead their victim. Ransomware and DDoS attacks for example are very visible and disruptive, providing the perfect cover for a subtler infiltration elsewhere on the system. We often encounter organisations that were confident they had contained an incident, only to be hit by a secondary breach through a compromise that had remained hidden.
An increasing number of organisations are turning to a managed security service (MSS) provider approach in order to access the resources and experience necessary to effectively contain and investigate an attack. This will provide a network of threat intelligence on the latest developments and attacks, and will also mean there is 24-hour access to a team of experienced security practitioners. Premium MSSPs offer Managed Detection and Response for Endpoints (MDRe) services, which allows for global teams of incident responders to threat hunt, respond to attacks, and remediate in real-time 24/7. This is the best methodology for proactive threat identification and response available in the security market today.
The way cyber tactics have evolved in recent years means it has become impossible to guarantee total security from attack, with the cyber criminals always working to overcome security defences with new tools and tactics. However, those organisations that have armed themselves with the expertise to identify unfolding threats and investigate when an incident does occur will be much better equipped to fend off the attackers and keep damage to a minimum.