Worldwide revenues for information technology (IT) products and services are forecast to reach nearly $2.4 trillion in 2017, an increase of 3.5% over 2016. In a newly published update to the Worldwide Semiannual IT Spending Guide: Industry and Company Size, International Data Corporation (IDC) estimates that global IT spending will grow to nearly $2.65 trillion in 2020. This represents a compound annual growth rate (CAGR) of 3.3% for the 2015-2020 forecast period.
Industry spending on IT products and services will continue to be led by financial services (banking, insurance, and securities and investment services) and manufacturing (discrete and process). Together, these industries will generate around 30% of all IT revenues throughout the forecast period as they invest in technology to advance their digital transformation efforts. The telecommunications and professional services industries and the federal/central government are also forecast to be among the largest purchasers of IT products and services. The industries with the fastest spending growth over the forecast period will be professional services, healthcare, and banking, with banking overtaking discrete manufacturing in 2018 to become the second largest industry in terms of overall spending.
Meanwhile, more than 20% of all technology revenues will come from consumer purchases, but consumer spending will be nearly flat throughout the forecast (0.3% CAGR) as priorities shift from devices to software for things such as security, content management, and file sharing.
"Consumer spending on mobile devices and PCs continues to drag on the overall IT industry, but enterprise and public sector spending has shown signs of improvement. Strong pockets of growth have emerged, such as investments by financial services firms and utilities in data analytics software, or IT services spending by telcos and banks. Government spending has stabilized, and shipments of notebooks including Chromebooks posted strong growth in the education market. Double-digit increases in commercial tablet spending will drive a return to growth for the overall tablet market this year, despite ongoing declines in consumer sales. These industry-driven opportunities for IT vendors will continue to emerge, even as the global economy remains volatile," said Stephen Minton, vice president, Customer Insights and Analysis at IDC.
On a geographic basis, North America (the United States and Canada) will be the largest market for IT products and services, generating more than 40% of all revenues throughout the forecast. Elsewhere, Western Europe will account for slightly more than 20% of worldwide IT revenues followed by Asia/Pacific (excluding Japan) at slightly less than 20%. The fastest growing regions will be Latin America (5.3% CAGR) followed by Asia/Pacific (excluding Japan) and the United States (each with a 4.0% CAGR).
IT spending in the United States is forecast to reach nearly $920 billion this year and top the $1 trillion mark in 2020. While IT services such as applications development and deployment and project-oriented services will be the largest category of spending in 2017 ($275 billion), software purchases will experience strong growth (7.9% CAGR), making software the largest category by 2020. Business services will also experience healthy growth over the forecast period (6.0% CAGR) while hardware purchases will be nearly flat (0.5% CAGR).
"While we are seeing a tempering in growth for U.S. healthcare provider IT spending as we enter the post-EHR era, the diverse and innovative professional services industry is expected to exhibit the fastest growth over the life of the forecast. Combine tech-savvy talent with an information-based business, and one can envision the multitude of possibilities for IT in this segment. IT investments will be used to achieve goals related to the differentiation of products and services, improving client satisfaction, and increasing revenue," said Jessica Goepfert, program director, Customer Insights and Analysis at IDC.
In terms of company size, more than 45% of all IT spending worldwide will come from very large businesses (more than 1,000 employees) while the small office category (businesses with 1-9 employees) will provide roughly one quarter of all IT spending throughout the forecast period. Spending growth will be evenly spread with the medium (100-499 employees), large (500-999 employees) and very large business categories each seeing a CAGR of 4.3%.
"Global SMB software spending will surpass that of hardware in 2018, upending traditional IT spending habits. More mature SMBs already recognize the value of linking software investments to business processes, and by the end of the forecast, we expect most midmarket firms will be on a path to embrace digital transformation," said Christopher Chute, vice president, Customer Insights and Analysis.
"Changing SMB attitudes regarding the importance of technology investment cut across company size and region categories. Small and midsize firms in developing geographies are just as interested in leveraging technology as those in developed regions. This sets the stage for spending growth everywhere, especially in midsize firms," added Ray mond Boggs, program vice president, SMB Research.
Gartner, Inc. forecasts that 8.4 billion connected things will be in use worldwide in 2017, up 31 percent from 2016, and will reach 20.4 billion by 2020. Total spending on endpoints and services will reach almost $2 trillion in 2017.
Regionally, Greater China, North America and Western Europe are driving the use of connected things and the three regions together will represent 67 percent of the overall Internet of Things (IoT) installed base in 2017.
The consumer segment is the largest user of connected things with 5.2 billion units in 2017, which represents 63 percent of the overall number of applications in use (see Table 1). Businesses are on pace to employ 3.1 billion connected things in 2017. "Aside from automotive systems, the applications that will be most in use by consumers will be smart TVs and digital set-top boxes, while smart electric meters and commercial security cameras will be most in use by businesses," said Peter Middleton, research director at Gartner.
Table 1: IoT Units Installed Base by Category (Millions of Units)
Category | 2016 | 2017 | 2018 | 2020 |
Consumer | 3,963.0 | 5,244.3 | 7,036.3 | 12,863.0 |
Business: Cross-Industry | 1,102.1 | 1,501.0 | 2,132.6 | 4,381.4 |
Business: Vertical-Specific | 1,316.6 | 1,635.4 | 2,027.7 | 3,171.0 |
Grand Total | 6,381.8 | 8,380.6 | 11,196.6 | 20,415.4 |
Source: Gartner (January 2017)
In addition to smart meters, applications tailored to specific industry verticals (including manufacturing field devices, process sensors for electrical generating plants and real-time location devices for healthcare) will drive the use of connected things among businesses through 2017, with 1.6 billion units deployed. However, from 2018 onwards, cross-industry devices, such as those targeted at smart buildings (including LED lighting, HVAC and physical security systems) will take the lead as connectivity is driven into higher-volume, lower cost devices. In 2020, cross-industry devices will reach 4.4 billion units, while vertical-specific devices will amount to 3.2 billion units.
While consumers purchase more devices, businesses spend more. In 2017, in terms of hardware spending, the use of connected things among businesses will drive $964 billion (see Table 2). Consumer applications will amount to $725 billion in 2017. By 2020, hardware spending from both segments will reach almost $3 trillion.
Table 2: IoT Endpoint Spending by Category (Millions of Dollars)
Category | 2016 | 2017 | 2018 | 2020 |
Consumer | 532,515 | 725,696 | 985,348 | 1,494,466 |
Business: Cross-Industry | 212,069 | 280,059 | 372,989 | 567,659 |
Business: Vertical-Specific | 634,921 | 683,817 | 736,543 | 863,662 |
Grand Total | 1,379,505 | 1,689,572 | 2,094,881 | 2,925,787 |
Source: Gartner (January 2017)
"IoT services are central to the rise in IoT devices," said Denise Rueb, research director at Gartner. Total IoT services spending (professional, consumer and connectivity services) is on pace to reach $273 billion in 2017.
"Services are dominated by the professional IoT-operational technology category in which providers assist businesses in designing, implementing and operating IoT systems," added Ms. Rueb. "However, connectivity services and consumer services will grow at a faster pace. Consumer IoT services are newer and growing off a small base. Similarly, connectivity services are growing robustly as costs drop, and new applications emerge."
By 2021, equity analysts will so commonly factor organizations' information portfolios into their valuations of businesses that the practice will spark formal internal information valuation and auditing programs, according to Gartner, Inc.
In a report containing a series of predictions about the rising importance of data and analytics, Gartner analysts said that although information arguably meets the formal criteria of a business asset, present-day accounting practices do not allow organizations to capitalize it. That is, the value of an organization's information generally cannot be found anywhere on the balance sheet.
"Even as we are in the midst of the information age, information simply is not valued by those in the valuation business," said Douglas Laney, vice president and distinguished analyst at Gartner. "However, we believe that, over the next several years, those in the business of valuing corporate investments, including equity analysts, will be compelled to consider a company's wealth of information in properly valuing the company itself."
A Gartner study showed how companies demonstrating "information-savvy" behavior — such as hiring a chief data officer (CDO), forming data science teams and engaging in enterprise information governance — command market-to-book ratios well above the market average.
"Anyone properly valuing a business in today's increasingly digital world must make note of its data and analytics capabilities, including the volume, variety and quality of its information assets," Mr. Laney said.
Initially, Gartner believes equity analysts and institutional investors will consider only a company's technical data and analytics capabilities and how its business model provides a platform for capturing and leveraging information, not the actual value of its information assets.
Gartner says boards and CEOs should not delay in hiring or appointing CDOs to begin optimizing the collection, generation, management and monetization of information assets before a critical mass of equity analysts starts asking related questions of them.
Gartner also predicts that by 2019, 250,000 patent applications will be filed that include claims for algorithms, a tenfold increase from five years ago.
Algorithm patents can be granted in the U.S., the EU and many other countries. Not all algorithms can be patented, but many can, even if the rules of application are not always straightforward.
According to a worldwide search on Aulive (named a Gartner Cool Vendor in 2016), nearly 17,000 patents applied for in 2015 mentioned "algorithm" in the title or description, versus 570 in 2000. Including those mentioning "algorithm" anywhere in the document, there were more than 100,000 applications last year versus 28,000 five years ago.
At this pace, and considering the rising interest in protecting algorithmic IP, by 2020 there could be nearly half a million patent applications mentioning "algorithm," and more than 25,000 patent applications for algorithms themselves.
Of the top 40 organizations patenting the most algorithms over the past five years, 33 are Chinese businesses and universities. The only western company in the top 10 is IBM, at No. 10.
"Despite their growing importance, too many great algorithms in enterprise are still left in the shadows. Many business leaders don't care too much so long as they 'work,'" said Mr. Laney. "But algorithms can make a great deal of difference. The list of important algorithms is endless. To name just a few: Google's PageRank algorithm, mp3, blockchain and backpropagation in deep learning."
Gartner recommends that data and analytics leaders work with business leaders and experts to adopt and develop methodologies for valuing algorithms and assessing which ones should be patented.
Top three tips to get value from digital-first strategies.
By Richard Whomes, Senior Director Sales Engineering, Rocket Software.
2016 was heralded by some as the year of digital transformation for businesses. You only need to look as far as your local post office or supermarket, both of which are likely to have replaced regular tills with self-service checkouts, to see the impact digitisation is having on everyday businesses. It’s no surprise that digital transformation has become a buzzword phrase in the last 12 months, with leaders across industries championing its ability to revolutionise company operations. However, the reality of the situation is that modernising business processes to bring them into the digital age is presenting a serious challenge for many; it’s not always easy to see what digital transformation looks like for your own business.
At the end of the day, any digital transformation project should have the customer front and centre. Whether the new processes are focused on providing personalised sales offers to customers by using more meaningful data analytics, or streamlining doctors’ appointments by installing automatic ‘check in’ machines in waiting rooms, the number one objective needs to be to improve the end-user experience.
Recent research has revealed that 81 per cent of CIOs believe legacy systems are having a negative impact on their businesses, but although there is clearly a case for infrastructure investment, the need to disrupt needn’t mean huge expenditure.
As Jason Kay, Chief Commercial Officer, IMS Evolve, explains, if the Internet of Things (IoT) solution you are considering for your business requires any kind of rip and replace, maybe you should think twice before taking the plunge, as your existing infrastructure is actually smarter than you think. Using an IoT layer, businesses can tap into the data locked within legacy machines. In food retail, for example, by integrating that data with supply chain and merchandising systems as well as the fridge control systems in real time, the temperature of each fridge can be automatically managed to suit its specific contents. As a result, not only is energy consumption reduced, but a higher quality product can be achieved, resulting in a better customer experience.
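To make the idea concrete, here is a deliberately simplified sketch, in Python, of the kind of reconciliation an IoT layer might perform between a merchandising system and legacy fridge controllers. The data, names and temperature bands are hypothetical illustrations only, not a description of IMS Evolve's product.

# Hypothetical sketch: match each fridge's setpoint to what it currently holds.
TARGET_TEMPS_C = {"dairy": 3.0, "fresh_meat": 1.0, "soft_drinks": 5.0}  # assumed bands

# Stand-ins for data that would normally come from the merchandising system
# and from the legacy fridge controllers via the IoT layer.
merchandising = {"fridge_12": ["dairy", "soft_drinks"], "fridge_13": ["fresh_meat"]}
controller_setpoints = {"fridge_12": 5.0, "fridge_13": 4.0}

def required_setpoint(contents):
    # The coldest requirement among the products present wins.
    temps = [TARGET_TEMPS_C[item] for item in contents if item in TARGET_TEMPS_C]
    return min(temps) if temps else 4.0  # conservative default

def reconcile(fridge_id):
    # Adjust the legacy controller only when its setpoint drifts from what the
    # contents actually require, saving energy without risking product quality.
    wanted = required_setpoint(merchandising[fridge_id])
    if abs(controller_setpoints[fridge_id] - wanted) > 0.5:
        controller_setpoints[fridge_id] = wanted

for fridge in merchandising:
    reconcile(fridge)
print(controller_setpoints)  # fridge_12 drops to 3.0C, fridge_13 to 1.0C

The point of the sketch is not the control logic itself but the integration pattern: the existing controllers stay in place, and the IoT layer simply feeds them better decisions.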
Likewise, within the food manufacturing industry, IoT technology at the edge can provide the necessary insight to monitor each stage of the process when creating foods in batches. Consistency of both ingredient quantities and environmental factors can be regulated, and the data available from each stage of the process can be brought together to ensure the highest quality, most profitable end product every time.
There’s no denying that the Industrial Internet of Things is gaining momentum, but legacy equipment should not be a setback to its fruition – it should be at the heart of it. Many enterprises have the data needed to modernise sitting unused in their current systems, and now is the time to unlock it. IoT offers a world of opportunity but it must be implemented in a way that makes business sense. By taking full advantage of existing infrastructure with an IoT layer, enterprises can efficiently and effectively transform their organisation in a way that supports their core business purpose.
It’s been an interesting time in the app market. Last year Apple boasted a record year for downloads from its app store, amidst industry cries that apps are dead. Apps such as Nintendo’s Super Mario Run and the cultural phenomenon of Niantic’s Pokémon Go have also reinvigorated hope that apps still appeal to the mass market. So, are the older statistics suggesting that smartphone users download only one app per month now redundant? And has the Lazarus that was the app come back to life?
By Mark Armstrong, VP and MD EMEA at Progress.
It does look like the clouds are clearing over the long-term future for apps. However, rather than this being a consequence of consumers overcoming ‘app fatigue’, it is in fact a result of new and advancing technologies that are changing how we interact with apps.
Enterprises are catching up when it comes to delivering the same technologies that their employees use at home. Consumer apps offer sophisticated functionalities that business apps often can’t match due to finite resources. Yet, as VR and AR technologies become more affordable, businesses have the opportunity to make their apps as compelling or engaging as those that their employees or customers are used to.
However, to ensure adoption, businesses should prioritise their app development. Users expect quality and have a limited tolerance for apps which don’t perform, which is why 80 per cent of apps are deleted after just one use. So, spreading resources thinly to develop apps to solve every need won’t work. Concentrating on solving challenges that align with business priorities will help to narrow down the deliverables and increase the likelihood of developing an engaging app.
Additionally, businesses should look to popular consumer apps such as Pokémon Go and Super Mario Run for inspiration. Super Mario Run brought something people have wanted for years – Mario on their smartphones – while Pokémon Go delivered a new kind of experience in AR (both have also been helped by lashings of nostalgia).
E-learning apps have had the most success to date in replicating such experiences through gamification. For example, Kineo worked with its client McDonald’s UK on a Till Training Game, which delivered an engaging and memorable learning experience to support the launch of a new till system across 1,300 restaurants, as well as trending on social media as learners set up self-styled leaderboards to compete against each other.
Now businesses have the opportunity to layer on top both AR and VR experiences to capture users’ attention and to demonstrate that they are a digitally forward enterprise.
New technologies - which apps can tap into and integrate with - deliver excitement, interactivity and an extra level of utility. And this type of interactivity is changing. The global Intelligent Virtual Assistant (IVA) market size is expected to reach $12.28 billion by 2024 according to a new report by Grand View Research. The virtual assistant is set to become the next new interface for applications. Gartner says: “Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome.”
So, apps built to capitalise on voice recognition technologies and AI are where enterprise apps will see real success, leveraging powerful decision-making intelligence and the natural input method of voice recognition to deliver improved interactivity, experience and utility. The information provided by the digital assistant is personalised, accurate and relevant - in other words every app’s dream scenario.
2017 will see tech giants double down on AI products that use user and context data, such as location and time, and combine it with advanced search and powerful decision making to respond directly to voice requests and questions. Google opened up its assistant to developers in December 2016, allowing them to build "Direct and Conversation Actions" for Google Assistant. This is a huge opportunity for a new wave of applications.
The future of apps is looking more exciting than ever before. New technologies have reinforced their relevance, and the ways in which apps are adapting to how users want to use them, while also driving ease of use, are increasing their popularity once more. Businesses and app developers have the opportunity to take ‘best practice’ lessons from consumer applications and apply these to their long-term app development strategy. In doing so, we will truly start to see a meta-app era in which apps are smarter, genuinely interactive and can anticipate our needs in both the consumer and business domains.
Credentialing equates to lower risk – and lower insurance costs
There are multiple areas of potential risk in a data center environment that can cause incidents that would result in an insurance claim. Risks for a data center include accidents that damage the facility, potential for workplace injuries, and business risks from downtime events that impact the data center’s or its customers’ business continuity. By R. Lee Kirby, President of Uptime Institute and Stephen F. Douglas, Risk Control Director for CNA – Technology.
When insurers and underwriters evaluate a data center organization for coverage, they want to be certain that the risk profile of the facility is as low as possible. Considerations such as fire resistance are a component of assessing data center risk, but focusing only on facility risks leaves out the most important part of the picture.
It would be like insuring an automobile based solely on the quality of the vehicle design and manufacture, but failing to ascertain if a licensed driver is operating it.
In today’s global economy, data centers are critical. Organizations depend on 24 x 7 x 365 IT infrastructure availability to ensure that services to customers/end-users are available whenever they are needed. To provide and maintain this availability is not only a matter of designing and building the right facility infrastructure—it’s about how that facility is managed and operated on a day-to-day basis to safeguard the business-critical infrastructure.
Owners and operators must do what they can to ensure that the risk of incidents and downtime has been minimized, and prove this low risk profile to their insurers. Industry-recognized credentials can help validate that operating risks are being managed effectively.
Relying solely on the physical characteristics of the data center, such as the construction, the type of fire protection system, and proximity to flood- and earthquake-prone areas, leaves out key considerations in evaluating the effectiveness of a service provider’s risk management program, important as those characteristics are. Typically, the redundant infrastructure of engineered data centers does present a low frequency of loss when compared to other types of operations. However, there is a significant increase in reliance on these data centers by end users as more companies outsource to the cloud or house their primary or backup networks offsite. This increasing dependency of end users on a centralized, outsourced infrastructure presents opportunities for technology service providers to set themselves apart from the competition and manage risks by formally addressing operational controls.
In framing the risks that service providers are exposed to—and that insurers will be concerned with—it is important to view the operation in terms of what part of the “data supply chain” the service provider occupies or is responsible for. Infrastructure providers, such as a co-location provider, have a specific but related set of exposures as compared to a software as a service (SaaS) provider at the other end of the supply chain. The various entities in these increasingly complex supply chains must make decisions about the viability of accepting, avoiding, mitigating or transferring these risks. The risks to the data supply chain include not only first party direct losses, but third party liability losses as well. Even the first party losses will differ based on the services provided. The primary risks to the data supply chain can be categorized as:
Regulations – regulations create compliance risks at all levels of the data supply chain. Regulatory impact is greatly dependent on the types of services offered, industries served, and the complex shared responsibilities of infrastructure and service providers and their clients. Just a few examples of regulatory frameworks that may have impact even down to the infrastructure level include U.S. regulations such as HIPAA, GLBA, FISMA; international regulations such as the EU Data Protection Directive and industry standards such as the PCI DSS. In these complex regulatory environments, regulatory enforcement actions are common and the impact of fines and penalties is growing.
From 20 years of collecting incident data, Uptime Institute has determined that human error (i.e., bad operations) is responsible for approximately 70% of all data center incidents. Compared to this, the threat of “fire” (for example) as a root cause is dwarfed: data shows only 0.14% of data center losses are due to fire. In other words, bad operations practices are 500 times more likely to negatively impact a data center than fire. An outage at a mission critical facility can result in hundreds of thousands of dollars or more in losses for everything from equipment damage and worker injuries to lost business and penalties for failure to maintain contractual Service Level Agreements.
For both data center operators and insurers, there are some key questions to ask:
As discussed above, increased outsourcing, flexible cloud architecture, and resilient network infrastructures are creating increasing dependency on third party providers at all levels of the data supply chain. This increased dependency is creating greater liability risk for service providers.
Managing liability risks starts with contracts. A clear scope of work and allocation of risk between the contracting parties is essential. Clauses such as service level agreements, limitation of liability, force majeure, waiver of subrogation and indemnification wording reinforce the intended allocation of risk. Complex multiparty contract disputes are common, particularly when significant losses are incurred. Claims of negligence are non-contractual, so even well-executed contracts may not mitigate significant liability losses.
Data center operations credentials are another means of mitigating liability risks. In addition to reducing the probability of loss, clearly defined, repeatable procedures and processes demonstrate adherence to the duty of care that is foundational to most standards of care. As with any human endeavor, residual risk will remain regardless of mitigation efforts. Insurance provides a means of risk transfer that is particularly effective for high-severity risks.
Uptime Institute has provided data center expertise for more than 20 years to mission-critical and high-reliability data centers. It has identified a comprehensive set of evidence-based methods, processes, and procedures at both the management and operations level that have been proven to dramatically reduce data center risk, as outlined in the Tier Standard: Operational Sustainability.
Organizations that apply and maintain the Standard are taking the most effective actions available to protect their investment in infrastructure and systems and reduce the risk of costly incidents and downtime. The elements outlined in the Standard have been developed based on the industry’s most comprehensive database of information about real-world data center incidents, errors, and failures: Uptime Institute’s Abnormal Incident Reporting System (AIRS). Many of the key elements of the Standard are based on analysis of 20 years of AIRS data collected on thousands of data center incidents, pinpointing causes and contributing factors.
To assess and validate whether a data center organization is meeting this operating Standard, Uptime Institute administers the industry’s leading operations certifications. These independent, third-party credentials signify that a data center is managed and operated in a manner that will reduce risk and support availability. There are two types of operations credentials: Tier Certification of Operational Sustainability (TCOS) and The Management & Operations (M&O) Stamp of Approval.
Both credentials are based on the same rigorous Standards for data center operations management, with detailed behaviors and factors that have been shown to impact availability and performance. The Standards encompass all aspects of data center planning, policies and procedures, staffing and organization, training, maintenance, operating conditions, and disaster preparedness.
The data center environment is never static; continuous review of performance metrics and vigilant attention to changing operating conditions is vital. The data center environment is so dynamic that if policies, procedures, and practices are not revisited on a regular basis, they can quickly become obsolete. Even the best procedures implemented by solid teams are subject to erosion. Staff may become complacent, or bad habits begin to creep in.
There is tremendous value for organizations that hold themselves to a consistent set of standards over time, evaluating, fine tuning, and retraining on a routine basis. This discipline creates resiliency, ensuring that maintenance and operations procedures are appropriate and effective, and that teams are prepared to respond to contingencies, prevent errors, and keep small issues from becoming large problems.
Insurance is priced competitively based on the insurer’s assessment of the exposure presented. Data center operations credentials provide the consistent benchmarking of an unbiased third-party review that can be used by service providers at all levels of the data supply chain to demonstrate the quality of the organization’s risk management efforts. This demonstration of risk quality allows infrastructure and service providers to obtain more competitive terms and pricing across their insurance programs.
When data centers obtain the relevant Uptime Institute credential, it results in a level of expert scrutiny unmatched in the industry, giving insurance companies the risk management proof they need. Insurers can validate risk level to a consistent set of reliable Standards. As a result, facilities with good operations, as validated by TCOS or M&O Stamp of Approval, can benefit from reduced insurance costs.
Kevin Scott-Cowell, UK Managing Director of 8x8, gives his predictions for 2017’s top communications technology trends.
Already just under half (49%) of workers say flexible-working arrangements and work-life balance are the most important benefits companies can offer them. And millennials say the ability to work remotely is one of the top three perks they consider when evaluating a job opportunity. 2017 will see the first of Generation Z – those born in the late nineties – begin to enter the workforce, and research suggests the idea of a traditional 9-5 office day is alien to them. All of this means we expect to see a boom in the adoption of seamless remote-working technology that empowers people to do their best work wherever they need to be. We should also see more contact centres using home-based agents, with the cloud keeping them connected.
In his Autumn Statement, the Chancellor unveiled the new National Productivity Investment Fund (NPIF), a long-term project designed to improve productivity in the workplace and the wider economy, and people's work-life balance in the process. This includes a £1 billion investment in the UK’s digital infrastructure to be used for new fibre broadband and 5G connectivity. This will provide welcome relief for many remote workers and accelerate the adoption of cloud systems that give them access to sophisticated communications tools on the go. For example, the ability to switch calls between your mobile, desk phone and softphone, or to jump from IM to a video meeting instantaneously, becomes even easier with faster, more reliable connectivity.
The ability to track and analyse calls has been around for a while and has been used predominantly in contact centre environments. 2017 will see wider adoption of call analytics platforms across businesses as a way to make better decisions based on Big Data. Call analytics platforms are now easier to deploy across any department. The data can be used to improve employee performance, sales campaigns and customer experience management, and offers greater insight into staffing requirements. For example, a sales team can model the sequence, timing and length of calls that are most likely to result in a successful sale.
Whilst the weaker pound is a challenge for many of us, it is creating opportunities for exporters - UK factories had their best month in more than two years in September. We expect companies selling overseas to capitalise on this by expanding their global presence, deploying cloud based communications systems to aid them.
While staff spread throughout the globe bring communications challenges, cloud systems help ensure they can act as one joined-up team. Businesses will still want to conduct face-to-face meetings, but the inconvenience and cost of travel often make this unfeasible. Expect video conferencing to grow as a substitute in this sector.
1. Cloud based contact centres will become more common than ‘traditional’ ones for the first time
40% of UK-based organisations used cloud-based contact centres in 2016, according to the Cloud Industry Forum (1); we expect this growth to continue into 2017. The shift has been driven by a number of factors, including: the need to manage budgets more effectively by switching from a CapEx to an OpEx model; the ability to cost-effectively add or remove seats based on fluctuations in demand; and companies wanting to transform the experience they offer their customers. This last driver is now seen as the priority for contact centre managers, as the delivery of exceptional customer service from the contact centre is now seen as a key differentiator for brands.
2. Automation will grow - but not at the expense of people
The use of chatbots and AI (Artificial Intelligence) has been growing steadily within contact centres in 2016 and will continue in 2017. We don't view this as a threat to human agents - far from it. We expect to see companies invest in AI as a way to reduce the routine, ‘easy to fix’ issues which otherwise suck time from experienced agents who are far better placed to deal with more complex queries. Humans remain the best problem solvers, and I'm yet to find a robot that is able to effectively empathise with a distressed customer. 2017 will see companies invest in technology that augments the decision making and problem solving power of agents.
3. Multimedia contact centres will go mainstream
Voice remains a critical channel, but consumers now want to interact with companies across email, SMS, social and webchat as well. And they’ll vote with their feet if companies don’t think service levels are up to scratch. Research we conducted in 2016 revealed that more than a quarter (26%) of people aged 25-34 have started searching for competitors online during a poor contact centre interaction. That’s why companies increasingly tell us that they want their agents to serve customers across all channels from the same, easy-to-learn interface. This shift towards multimedia contact centres will become a necessity rather than a ‘nice to have’ for companies in 2017.
1. Cloud Industry Forum – UK Cloud Adoption Snapshot & Trends for 2016
There are clear signs of maturity and still further evolution in the managed services market. Recent research continues to show high double digit growth in managed services for the next five years at least. The core reason for this is that it works, offering high effectiveness and performance, but to take full advantage, there is still a need for channels and MSPs themselves to keep up to speed on the latest developments, perhaps even more on the sales, management and development side than the technical aspects. The Managed Services and Hosting Summit 2017 (http://mshsummit.com/amsterdam) will focus on how the market has evolved and how the Managed Service Provider is helping organisations to succeed in this brave new digital world, but also on what MSPs need to know in 2017.
A 360Market research report from February 2017 – The Cloud-based Managed Services Market – predicts growth at a 19.7% CAGR during the period 2016-2020. It says the market drivers are the same as ever, namely the need to gain a competitive edge, while the challenge for all players is a lack of integration expertise, both in customers’ organisations and in the channels supplying them.
The research highlights a key trend in how demand for mobility services is spreading across industry verticals – something which MSPs themselves have noted, and have geared up to provide across diverse industries for improved data security, productivity, and privacy.
Cloud-based managed services consist of a wide range of offerings that help organisations monitor, regulate, and improve their IT infrastructure, and all of these aspects are important. These services offer the advantages of cost containment and reduced inventory, which are attractive to customers, but the managed services industry still needs to arm itself with a clear sales message and the ability to convey all the implications of managed services through that message. Organisations need these services to develop an economical cost structure, minimise expenditure, and provide a solid foundation to build on in the future.
Until recently, such services were relatively unknown, and some organisations and institutions were both reluctant to make changes and sceptical about the security and privacy of cloud services. However, now that larger companies are comfortable with the concept of cloud services and are looking to move more and more of their IT requirements to this model, the advantages are being seen by smaller businesses too. At the same time, new models of data management and control through the use of IoT are now appearing on customers’ lists of requirements.
This model is not yet all-pervasive: many small and medium businesses lack the technical expertise required to make the conversion to the cloud, and some channels are still primarily following the traditional break-fix model, so the process of change is still accelerating. Reaching the SMB and IoT sectors is expected to provide lucrative opportunities to managed services providers. The managed mobility services segment is anticipated to surge at a considerably high CAGR over the next couple of years. With the growing use of tablets, smartphones, and other mobile devices, the growth opportunities in the managed mobility services market have also surged.
The Managed Services and Hosting Summit will examine the role of managed services in a digital world. The sessions will focus initially on the evolution of the digital marketplace, how Managed Services need to evolve in this digital era, and how governments and the European Community are driving this marketplace. The event then breaks into two streams. The first, ‘Behind the Service’, will be a series of talks focused on the latest technologies and practices around the infrastructure and services vital to delivering the managed service offering. The second stream, ‘Delivering the Service’, will address some of the key issues around delivering a first class user experience.
The keynote talks following this will focus on how Managed Services can help organisations secure their financial future by the exploitation of the era of the ‘Industrial Internet’ as well as secure their own and customers’ digital information against a background of mounting international fraud and cyber-attacks. Speakers and panellists will share their knowledge and insight into the opportunities and threats that organisations now face and what they can expect as digitalisation spreads throughout all facets of business operations. For full details of the agenda see http://mshsummit.com/amsterdam/agenda.php
The network management services segment is likely to hold the largest market share in the IoT managed services market, says the research, and this is another aspect to be considered by supply channels and MSPs. Network management deals with the entire network chain of an organisation, and it is essential to enhance the network for optimum utilisation of the available resources. Network management services assist in analysing the amount of data travelling over a network and automatically route it to avoid congestion that could result in a crash of the network. Opting for managed services can help organisations with reduced downtime, better network connectivity, safety, security, automatic device discovery, scalability, and seamless operation of business processes, but the MSP, integrator and other channels need to understand the management issues and layers of responsibility.
A one-day end-user conference on flash and SSD storage technologies and their benefits for IT infrastructure design and application performance.
1st June 2017 – Munich / 15th June 2017 - London
Since the very early days of flash storage, the industry has gathered pace at an increasingly rapid rate, with over 1,000 product introductions; today, one SSD is sold for every three equivalent HDDs. According to Trendfocus, over 60 million flash drives shipped in the first half of 2016 alone, compared with just over 100 million in the whole of 2015.
FLASH FORWARD brings together leading independent commentators from the UK, Germany and the USA, experienced European end-users and most of the key vendors to examine the current technologies and their uses and most importantly their impact on application time-to-market and business competitiveness.
Divided into four areas of focus, the conference will review the technologies and the applications to which they are bringing new life, and examine who is deploying flash and where the current sweet spots lie in data centre architecture. The conference will also examine the best practices that can be shared amongst users to gain the most advantage and avoid the pitfalls that some may have experienced, and will finally discuss future directions for these storage technologies.
In London the keynote speakers and moderators are confirmed as Chris Mellor of The Register, Ken Male from TechTarget, Randy Kerns of Evaluator Group and the widely read blogger Chris Evans while in Munich we have respected analyst Dr. Carlo Velten, Jens Leischner from the user community, Bertie Hoermannsdorfer of speicherguide.de and André M. Braun representing SNIA Europe delivering the main conference content.
Sponsors include Dell/EMC, Fujitsu, IBM, Pure Systems, Seagate, Tintri, Toshiba, and Virtual Instruments and both events are fully endorsed by SNIA Europe.
Pre-register today at www.flashforward.io
Experience all the key technologies for the digital transformation in one place – at CeBIT in Hannover!
What was pure science fiction just a few years ago has become reality today and limitless business opportunities lie ahead.
CeBIT is the world’s foremost event on the wave of digitalization revolutionizing every aspect of business, government and society. Every year, the show features a lineup of around 3,000 exhibitors and attracts some 200,000 visitors to its home base in Hannover, Germany. The spotlight is on all the latest advances in fields such as artificial intelligence, autonomous systems, virtual and augmented reality, humanoid robots and drones.
Thanks to a rich array of application scenarios, CeBIT makes digitalization tangible in the truest sense of the word. “d!conomy – no limits”, the chosen lead theme for 2017, underscores the show’s emphasis on revealing the wealth of opportunities arising from the digital transformation. As a multifaceted exhibition/conference/networking event, CeBIT is a perennial must for everyone involved in the digital economy. The startup scene is also right at home at CeBIT and its dedicated SCALE 11 showcase, which sports more than 400 aspiring young enterprises.
The next CeBIT will be staged next month, from 20 to 24 March 2017, with Japan as its official Partner Country. On 19 March, Japanese Prime Minister Shinzo Abe and German Chancellor Angela Merkel will officially open CeBIT 2017 in the presence of more than 2,000 VIP guests at the Welcome Night ceremony in Hall 9 of the Hannover Exhibition Center. The Partner Country alone is fielding around 120 companies appearing in every segment of the show.
Artificial intelligence, humanoid robots, virtual reality: new technologies are constantly pushing the boundaries of what is possible. What does this mean for society, the economy and concretely for your company? What new business models bring the most value? Dive into the fascinating digital future – at the world's biggest digitization showcase and experience all the big digital trends and highlights:
The integration of a trade show with a complementary conference program not only creates the ideal setting for generating new business, but also facilitates effective networking and a cross-industry knowledge transfer and dialogue between experts.
We have secured a limited number of complimentary exhibition tickets - Click here to get yours! (link: http://www.cebit.de/promo?qgpew)
The CeBIT Global Conferences
#cgc17 will revolve around the slogan "Explore the Digital World!". The five day conference will delve deep into the realms of artificial intelligence, cyborgs & biohacking, robots, virtual worlds, the Darknet and cybercrime. The conferences in hall 8 bring together IT suppliers and users, Internet firms and investors, as well as creative and future-oriented thinkers.
This year’s conferences boast a truly stellar line-up of speakers. Among the big names are Google engineer and futurist Ray Kurzweil, the famed roboticist Hiroshi Ishiguro, and Stanford professor and social researcher Michal Kosinski. An expert in psychometrics, Kosinski has developed a mathematical method that analyzes Facebook likes and other publicly available data to determine people’s personality traits. The speaker list also includes someone who is arguably the world’s most famous whistleblower – Ed Snowden. Snowden will be joining the proceedings via live stream from Moscow.
Purchase your ticket to the CeBIT Global Conferences: http://www.cebit.de/en/conferences-events/cebit-global-conferences/cgc-tickets/
For more information visit www.cebit.de
By Steve Hone CEO, DCA Trade Association
This month’s theme is Service Availability and Resilience. It’s only natural that every data centre operator wants to ensure their facility is as resilient as possible.
Data centre owners spend hundreds of thousands on technology trying to achieve this goal and then tens of thousands to keep it maintained both from a support and power perspective. The other holy-grail that everyone seems to chase is the lowest PUE figure possible.
These two objectives are actually diametrically opposed to one another: the more resilience you build into your data centre, the more inefficient it becomes, because you need more energy to maintain the infrastructure you have running. This often has the perverse effect of sending your PUE up, not down. It is worth noting at this point that the “E” in PUE stands for “Effectiveness”, NOT “Efficiency”, a mistake that is often made and one that puts a different perspective on it.
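To see why extra resilience tends to push PUE up, a rough calculation helps. PUE is total facility energy divided by IT equipment energy, so any redundant plant that draws power without doing IT work raises the ratio; the figures below are assumptions chosen only to make the arithmetic visible.

# Illustrative figures only: PUE = total facility energy / IT equipment energy.
it_load_kw = 1000.0            # the useful IT load (assumed)
overhead_lean_kw = 400.0       # cooling/power overhead in a lean design (assumed)
overhead_redundant_kw = 700.0  # the same site with duplicated UPS, pumps and CRACs (assumed)

pue_lean = (it_load_kw + overhead_lean_kw) / it_load_kw            # 1.4
pue_redundant = (it_load_kw + overhead_redundant_kw) / it_load_kw  # 1.7
print(pue_lean, pue_redundant)

Same IT work, higher PUE: the extra resilience is doing something useful, just not something the metric rewards.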
On the subject of effectiveness, it is also worth noting that just because the facility you run or use looks resilient on paper (e.g. N+ this and N+ that), please don’t automatically assume these measures will actually be “effective” in ensuring your service remains ‘Available’. Making sure you have fully tested processes and that a business continuity plan is in place is just as important as the hardware itself, and some would argue more important. After all, when things go wrong, and they will, it is normally human error which is ultimately to blame.
One very real example of this happened recently to one of the largest cloud providers to the insurance world, which suffered a major service outage at its third-party colocation data centre despite boasting a 99.99999% uptime record. All eyes quickly turned to the data centre provider as the guilty party. I am sure there will be lessons learnt by the colocation provider, however I can’t help wondering if the ultimate responsibility for this outage actually lies with the managed service cloud provider, for not asking the right questions and not planning more effectively for the worst.
Having a regularly tested business continuity strategy with your suppliers is critical, in fact it is often referred to as an insurance policy to help protect your business and your clients. The irony of this story is that it involved one of the largest providers of cloud based services to some of the top global insurance companies – who all failed to recognise the risk or value in investing in an insurance policy of their own.
It would be unfair to single out this one incident, provider or sector, as I’m sorry to say this story is not uncommon. There are clear lessons to be learnt here for all clients seeking cloud based services and for all cloud providers seeking a hosting provider to deliver and underpin their offering.
Problems do occur irrespective of how resilient the provider says their service is, and when they happen the SLA won’t save you. It is vital you do your due diligence; ignorance is no defence in the eyes of the law and no defence in the eyes of very unforgiving clients.
I would like to thank all those that contributed articles this month. Next edition is an opportunity for your customers to speak for you in the form of client case studies so if you would like to submit then please forward them to:
Kieranh@datacentrealliance.org
Deadline for submissions is the 15th March.
Finally, the DCA has an update seminar on the afternoon before DCW 2017, at ExCeL on 14th March. If you are a DCA member or someone interested in finding out more about the Trade Association and the value it can deliver, you are more than welcome to register and attend. Full details are available on the DCA website:
By Prof Ian F Bitterlin, CEng FIET, Consulting Engineer & Visiting Professor, University of Leeds
We’ve all seen the claims for 99.999% uptime in data centre SLAs (service level agreements) and adverts for UPS, promising the holy-grail of ‘five-nines’ Availability. But what does it mean? By itself, without any explanation or supporting statements, any claim for any percentage is meaningless along similar lines to Sam Goldwyn’s ‘a verbal contract isn’t worth the paper it’s printed on’.
To understand why I claim it to be ‘meaningless’ we simply have to consider how we calculate the percentage Availability in the first place. It could not be easier; you need just two numbers and the ability to add, divide and multiply by 100. The two numbers, usually expressed in hours, are the MTBF (mean time between failures) and the MDT (mean down time). If you divide MTBF by (MTBF+MDT) and multiply the answer by 100 you have the percentage uptime. Simply put, it’s the ratio of the time between failures divided by the total elapsed time. So, 1 hour of MDT every 25,000h (with one year being 8,760h) results in 99.996% Availability.
Unfortunately, so does a 4-hour MDT every 100,000h, or 15 minutes every 6,250h. Now we see the trick because, let’s face it, we are more interested in having a system that doesn’t fail for 11 years but takes 4 hours to fix when it does, than in a system that fails nearly every 9 months but only takes 15 minutes to fix. The load, the ICT system, can often take several hours to reboot, and in the process much transient data can be lost forever. So ‘Availability’ isn’t a good, or an informative, metric. We are, clearly, much more interested in MTBF and, we have to presume, the person doing the Availability calculation has used an MTBF and MDT figure to create the percentage – rather than just guessing it.
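A few lines of code make the point that very different failure behaviours collapse into the same headline figure; this is simply the formula above applied to the three examples in the text.

# Availability (%) = MTBF / (MTBF + MDT) x 100
def availability_pct(mtbf_h, mdt_h):
    return 100.0 * mtbf_h / (mtbf_h + mdt_h)

# (MTBF hours, MDT hours) pairs from the examples above; all print ~99.996%.
for mtbf, mdt in [(25000, 1), (100000, 4), (6250, 0.25)]:
    print(mtbf, "h MTBF,", mdt, "h MDT ->", round(availability_pct(mtbf, mdt), 3), "%")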
Let’s look at the two simple examples I opened with. First, the data centre uptime SLA. Now 99.999% equates to either a break of approximately 5 minutes once per year (a very poor data centre by any standards) or, more attractively, a break of 52.5 minutes once every 10 years. If you consider that the power system can fail for 10ms (10 thousandths of a second) and the load can be lost, then it is vital that the 52.5 minutes is one single failure event and not an accumulation of 315,000 very short events!
However, a promised 99.999% presents problems for the M&E systems. The ‘availability’ of a data centre depends mainly upon human error (which you can’t model), but it also depends on the product of the power, cooling, communication and fire suppression systems, plus inadvertent action of the EPO (Emergency Power Off) button, so each of those systems will have to provide close to 99.99999% uptime – a very ambitious and expensive target.
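For anyone who wants to check the arithmetic, the sketch below multiplies out a series of subsystems (the subsystem count and figures are assumptions for illustration): if the end-to-end target is five nines, each element in the chain needs to be dramatically better than that.

# Overall availability of systems in series is roughly the product of the parts.
subsystems = 5                 # e.g. power, cooling, comms, fire suppression, EPO (assumed)
per_system = 0.9999999         # 99.99999% each, as suggested in the text
print(round(per_system ** subsystems * 100, 5))   # ~99.99995%: target comfortably met

per_system = 0.99999           # if each element only managed 99.999%...
print(round(per_system ** subsystems * 100, 4))   # ~99.995%: five nines missed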
Secondly, let’s consider the frequent claims for modular UPS systems with 99.999% uptime. Firstly, that rarely includes the power distribution between the UPS and the load, but we should ignore that for now. The point is that most UPS are limited by the MTBF of their output circuit breaker, which is in the order of 250,000h, and when you model the whole UPS, all systems, largely regardless of technology or architecture, tend towards 80,000-100,000h MTBF.
The only way to get to 99.999% is to assume, without telling anybody, that the modular UPS is fixed within 15 minutes. This means fixed by the client himself, assuming he can, assuming he has the spares on site and is not waiting 4-8 hours for the service engineer – all highly unlikely. If you use 8 hours for the ‘fix’ then the same MTBF produces an Availability of 99.99%, four ‘nines’ rather than five. Herein lies a perception problem – 99.99% doesn’t look much worse than 99.999%, but the difference could be disastrous for a data centre manager.
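The repair-time sensitivity is easy to verify with the same formula, using the 87,600h (ten-year) MTBF that the SLA wording below implies:

# Same MTBF, two repair assumptions: 15 minutes versus 8 hours.
mtbf = 87600.0                                    # hours, i.e. roughly ten years
for mdt in (0.25, 8.0):                           # hours of mean down time
    print(mdt, "h MDT ->", round(100 * mtbf / (mtbf + mdt), 4), "%")
# Prints roughly 99.9997% (five nines) and 99.9909% (four nines).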
So, what ‘should’ we do? Well, that is also easy. A ‘proper’ SLA would read ‘an Availability of 99.999%, measured over a period of 10 years and defined as one failure event’. You will notice that you can now see the MTBF (87,600h) and you can calculate the assumed MDT as 52.5 minutes. Mind you, it isn’t as attractive for the marketing department – ’99.999%’ is much punchier!
By Wendy Torell, Senior Research Analyst, Schneider Electric’s Data Centre Science Centre
The migration of critical applications from traditional data centres to the cloud has garnered much attention from analysts, industry observers and data centre stakeholders. However, as the great cloud migration transforms the data centre industry, a smaller, less noticed revolution has been taking place around the non-cloud applications that have been left behind. These “edge” applications have remained on-premise and because of the nature of the cloud, the criticality of these applications has increased significantly.
The centralised cloud was conceived for applications where timing wasn’t absolutely crucial. As critical applications shifted to the cloud, it became apparent that latency, bandwidth limitations, security and other regulatory requirements were placing limits on what could be placed in the cloud. It was deemed, on a case-by-case basis, that certain existing applications (e.g. factory floor processing), and indeed some new emerging applications (like self-driving cars, smart traffic lights and other “Internet of Things” high bandwidth apps), were more suited for remaining on the edge.
Considering the nature of these rapid changes, it is easy for some data centre planners to misinterpret the cloud trend and equate the decreased footprint and capacity of the on-premise data centre with a lower criticality. In fact, the opposite is true. Because of the need for a greater level of control, adherence to regulatory requirements, low latency and connectivity, these new edge data centres need to be designed with criticality and high availability in mind.
The issue is that many downsized on-premise data centres are not properly designed to assume their new role as critical data outposts. Most are organised as one or two servers housed within a wiring closet. As such, these sites, as currently configured, are prone to system downtime and physical security risks and therefore, require some rethinking.
Systems redundancy is also an issue. With most of the applications living in the cloud, when that access point is down, employees cannot be productive. The edge systems, when kept up and running during these downtime scenarios, help to bolster business continuity.
In order to enhance critical edge application availability, several best practices are recommended:
Enhanced security – When you enter some of these server rooms and closets, you typically see unsecured entry doors and open racks (no doors). To enhance security, equipment should be moved to a locked room or placed within a locked enclosure. Biometric access control should be considered. For harsh environments, equipment should be secured in an enclosure that protects against dust, water, humidity, and vandalism. Deploy video surveillance and 24 x 7 environmental monitoring.
Dedicated cooling – Traditional small rooms and closets often rely on the building’s comfort cooling system. This may no longer be enough to keep systems up and running. Reassess cooling to determine whether proper cooling and humidification requires a passive airflow, an active airflow, or a dedicated cooling approach.
DCIM management – These rooms are often left alone with no dedicated staff or software to manage the assets and to ensure downtime is avoided. Take inventory of the existing management methods and systems. Consolidate to a centralised monitoring platform for all assets across these remote sites. Deploy remote monitoring when human resources are constrained.
Rack management – Cable management within racks in these remote locations is often an afterthought, causing cable clutter, obstructions to airflow within the racks, and increased human error during adds/moves/changes. Modern racks, equipped with easy cable management options, can lower unanticipated downtime risks.
Redundancy – Power (UPS, distribution) systems are often 1N in traditional environments, which decreases availability and eliminates the ability to keep systems up and running when maintenance is performed. Consider redundant power paths for concurrent maintainability in critical sites (a simple availability comparison follows below). Ensure critical circuits are on an emergency generator. Consider adding a second network provider for critical sites. Organise network cables with network cable management devices (raceways, routing systems, and ties). Label and colour-code network lines to avoid human error.
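By way of illustration only, this sketch shows why moving from a single (1N) power path to a redundant (2N) arrangement matters at a critical edge site. The availability figures are hypothetical and assume independent failures; they are not taken from the white paper referenced below:

```python
def series(*availabilities):
    """Availability when every element in the chain must be working."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def redundant(a, n=2):
    """Availability of n independent, redundant paths when any one of them is sufficient."""
    return 1.0 - (1.0 - a) ** n

one_power_path = 0.999   # hypothetical availability of a single (1N) power path
it_equipment = 0.9999    # hypothetical availability of the IT equipment itself

print(series(it_equipment, one_power_path))             # ~0.9989 -> the 1N power path dominates
print(series(it_equipment, redundant(one_power_path)))  # ~0.9999 -> 2N power removes that limit
```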
A systematic approach to evaluating small remote data centres is necessary to ensure greatest return on edge investments. Schneider Electric White Paper 256, “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge” provides a simple method for organising a scorecard that allows IT managers to evaluate the resiliency of their edge environments.
Wendy Torell is a Senior Research Analyst at Schneider Electric’s Data Center Science Center. In this role, she researches best practices in data center design and operation, publishes white papers & articles, and develops TradeOff Tools to help clients optimize the availability, efficiency, and cost of their data center environments. She also consults with clients on availability science approaches and design practices to help them meet their data center performance objectives. She received her Bachelor of Mechanical Engineering degree from Union College in Schenectady, NY and her MBA from the University of Rhode Island. Wendy is an ASQ Certified Reliability Engineer.
By Michelle Reid, Board Director, Telehouse Europe
The hyper-connected economy – where people, places, organisations and objects are linked together as never before – presents data centre providers with both opportunities and challenges. The market for new facilities is on a sustained upward trajectory, and predicted to be worth $32.3 billion by 2020(1). The popularity of mobile video services, the emergence of new business models based around the Internet of Things, and the widespread use of cloud services are underpinning strong demand for data storage and transmission, leading to the construction of a string of new data centres around the world. These new facilities are tasked with meeting an ever-growing requirement for connectivity, resilience and scalability, across diverse platforms and partners.
The architecture of today’s data centres needs to be specifically designed to meet customer demands for connectivity, guaranteeing an environment that is resilient, secure, and provisioned with low-latency links to a wide range of business partners.
The different architectural models meeting this demand come in two forms. Large cash-rich organisations such as Facebook, Amazon, Microsoft and Apple have invested in gigantic new facilities costing hundreds of millions of dollars.
The market for colocation centres, meanwhile, where equipment, space and bandwidth are available for rent to retail customers, continues to thrive. These two strands of data centre provision combine to make a flourishing sector where investment levels remain robust.
Indeed, the data explosion shows no signs of abating, driven by several key factors. These include: insatiable demand for mobile video services delivered through social media platforms and Over the Top (OTT) players; the emergence of new business models based around the Internet of Things; the shift from locally managed hardware to cloud computing and the adoption of more complex data privacy legislation across the European Union.
While demand for data storage is likely to remain buoyant, rapidly-evolving communication technologies are likely to require new thinking around data centre architectures. Predicted trends include the emergence of so-called ‘edge’ data centres, with the potential to provide enterprises that have a highly distributed customer base faster access to applications and even more processing power.
The growth of mobile content, cloud services, and the emergence of IoT-enabled networks in both consumer and industrial sectors, means the ‘edge’ of data centres needs to be closer to users in order to reduce latency and increase the cost efficiencies of data transfer.
First and foremost, it’s about visual networking – where video streaming, high-speed networks, and interactivity come together to allow consumers to communicate, share or receive information over the Internet, when, where and how the user wants it. Visual networking has become one of the most dominant trends of modern times, transforming video content, in a very short space of time, from long-form movies and broadcast television programming into a database of segments or ‘clips’ and social network annotations.
These days, individuals and businesses are actively pursuing new combinations of video and social networking across a wide range of entertainment and communications. This is resulting in the creation of unprecedented amounts of data that need to flow across networks reliably, predictably and with low-latency.
A Visual Networking Index, produced by networking equipment specialist Cisco, tracks and forecasts the impact of visual networking applications, revealing the data challenge that lies ahead. The latest version of the Visual Networking Index (see Figure 1), produced earlier this year, predicts that annual global IP traffic will pass the zettabyte (ZB) threshold – equivalent to 1,000 exabytes (EB) or 1 billion terabytes (TB) – by the end of 2016, and will reach 2.3 ZB per year by 2020.
Overall, IP traffic will grow at a compound annual growth rate of 22 per cent from 2015 to 2020, while monthly IP traffic will reach 25 GB per capita by 2020, up from 10 GB per capita in 2015. The growth of global cloud traffic has skyrocketed over the course of the past five years, and Cisco predicts that cloud traffic will be responsible for 92% of all data centre traffic by 2020(2).
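A quick back-of-the-envelope check of those growth figures, sketched from the rounded numbers quoted above (so small discrepancies against Cisco’s own tables are to be expected):

```python
# Implied compound annual growth rates from the rounded figures quoted above
per_capita_2015, per_capita_2020 = 10, 25                    # GB of IP traffic per capita per month
print((per_capita_2020 / per_capita_2015) ** (1 / 5) - 1)    # ~0.20, i.e. roughly 20% a year

traffic_2016, traffic_2020 = 1.0, 2.3                        # zettabytes of IP traffic per year
print((traffic_2020 / traffic_2016) ** (1 / 4) - 1)          # ~0.23, broadly consistent with the 22% CAGR from 2015
```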
Cloud service providers need to offer a service that is ‘always on’ and is hosted in a secure environment that enables low latency access to enterprise customers in the most efficient manner possible.
Data centre providers have been tasked with meeting this buoyant demand, particularly through the provision of highly connected data centre facilities which offer an eco-system of business partners and organisations.
The above market shifts are key drivers of the hyper-connected economy and are placing huge pressure on data centre infrastructure, with companies racing to add extra capacity, scalable bandwidth, scalable high density power options and true redundancy. This activity is illustrated by KDDI-owned Telehouse Europe, which in 2016 launched the first phase of Telehouse North Two, its new data centre in London, delivering clients 24,000 sq. m of gross area across an 11-storey building located in Telehouse’s existing Docklands campus. This expansion takes its overall footprint in London to more than 73,000 sq. m. North Two is fully integrated within the Docklands campus, enabling established customers and new clients to make the most of connectivity across the wider site and across its interconnected network of 48 data centres worldwide.
Connectivity and high-density, future-proof power are key to data centre performance
In conclusion, it’s clear that the hyper-connected economy is underpinning a vibrant data centre sector, with plenty of scope for global growth. The rapid pace of technological advancement in areas such as mobile video services, cloud computing and the emergence of the Internet of Things means that the requirement for resilient data storage and transmission is being propelled forward at an unprecedented rate.
The challenge for data centre providers is to ensure that they have the capacity and the flexibility to meet customers’ needs, increasing the requirement for connectivity – both in terms of access to application platforms and partners, and in terms of high-density power options to meet ever-growing, power-hungry virtualisation and cloud-based services. It’s only through continued investment in modern, efficient data centre infrastructure that the hyper-connected economy will reach its full potential.
Further reading
(1) http://www.marketsandmarkets.com/PressReleases/data-center-construction.asp
(2) http://www.datacenterdynamics.com/content-tracks/colo-cloud/research-cloud-to-be-responsible-for-92-percent-of-data-center-traffic-by-2020/97297.article
By Giordano Albertazzi, President EMEA, Vertiv
In 2016, global macro trends significantly impacted the industry, with new cloud innovations and social responsibility taking the spotlight. As cloud computing has integrated even further into IT operations, the focus will move to improving underlying critical infrastructure as businesses look to manage new data volumes. Vertiv believe that 2017 will be the year that IT professionals will invest in future-proofing their data centre facilities to ensure that they remain nimble and flexible in the years to come.
Here are the key infrastructure trends we see shaping the data centre ecosystem in 2017:
However, while energy efficiency remains a core concern, water consumption and refrigerant use are important considerations in select geographies. Data centre operators are tailoring thermal management based on location and resource availability, and there has been a global increase in the use of evaporative and adiabatic cooling technologies which deliver highly efficient, reliable and economical thermal management. Where water availability or costs are an issue, waterless cooling systems such as pumped-refrigerant economisers have gained traction.
While data breaches continue to garner the majority of security-related headlines, security has become a data centre availability issue as well. As more devices get connected to enable simpler management and eventual automation, threat vectors also increase. Data centre professionals are adding security to their growing list of priorities and beginning to seek solutions that help them identify vulnerabilities and improve response to attacks. Management gateways that consolidate data from multiple devices to support DCIM are emerging as a potential solution. With some modifications, they can identify unsecured ports across the critical infrastructure and provide early warning of denial of service attacks.
Technology integration has been increasing in the data centre space for the last several years as operators seek modular, integrated solutions that can be deployed quickly, scaled easily and operated efficiently. Now, this same philosophy is being applied to data centre development. Speed-to-market is one of the key drivers of the companies developing the bulk of data centre capacity today, and they’ve found the traditional silos between the engineering and construction phases cumbersome and unproductive. As a result, they are embracing a turnkey approach to data centre design and deployment that leverages integrated, modular designs, off-site construction and disciplined project management.
For businesses looking to stay competitive and seamlessly transition to new, cloud based technologies, the strength of their IT infrastructure continues to be the cornerstone of success. With data volumes rapidly rising, IT infrastructures will continue to evolve throughout 2017 to offer faster, more secure and more efficient services needed to meet these new demands. Investment in the right infrastructure – not just a new infrastructure – is essential. It’s therefore vital that a partner with a strong history of data centre operations is involved throughout the system upgrade – from planning and design, to project management and ongoing maintenance and optimisation.
By Amanda McFarlane, Marketing & PR Executive, The DCA
Data Centres North 14 – 15 February, Manchester
The DCA team spent a productive two days at Data Centres North, Emirates Old Trafford. The show comprised an exhibition, a conference programme and a networking event. Our CEO, Steve Hone, chaired sessions on ‘Broadening the Data Centre Offering’ by Mike Kelly from Datacentred, ‘What Brexit means to the Data Centre Sector’ by Emma Fryer from Tech UK, and ‘Data Sovereignty Update’ by Mark Bailey from Charles Russell Speechlys. The sessions were well attended and provoked some great questions from the audience. The atmosphere was buzzing after a superbly organised evening dinner, with the networking continuing into the small hours!
The DCA – Data Centre Sector Update Seminar 14 March, London, Excel
The day before Data Centre World (DCW), on 14 March, The DCA Trade Association will be hosting an update seminar. Registration opens at 12.00pm, and the programme comprises four 45-minute sessions, finishing at 4.30pm.
Sessions include an update on Standards, EU Projects, Public Affairs and a dedicated focus on Workforce Development to help address the growing skills gap in the sector. The update is followed by networking at The Fox pub. Please visit the DCA website for details and to register.
Data Centre World
15 – 16 March, London, Excel
Approximately 10,000 data centre professionals attended this event in 2016 and it’s predicted to be even bigger this year. Once again it is being held at the Excel in London’s Docklands. Data Centre World is co-located with Cloud Expo Europe, although this year the two shows are in separate halls. Combined, they make DCW one of the largest and best-attended technology events in the world.
One of the highlights is sure to be the DCW Live Green Data Centre, which was first introduced last year and promises to be even bigger this time around.
DCS Awards - 11 May 2017, London
The DCS Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena.
The Awards recognise the achievements of the vendors and their business partners and this year encompass a wider range of project, facilities and information technology award categories designed to address all of the main areas of the datacentre market in Europe.
The editorial staff at Angel Business Communications validate entries and announce the final short list to be forwarded for voting by the readership of the Digitalisation World stable of publications during March and April. The winners will be announced at a gala evening on 11 May at London’s Grange St Paul’s Hotel. Nomination is free of charge and all entries must feature a comprehensive set of supporting material in order to be considered for the voting short-list.
EDIE Live 2017
23 – 24 May, NEC, Birmingham
EDIE 2017 is an annual two-day exhibition and conference attracting thousands of energy, sustainability and resource efficiency professionals. The DCA are excited to be sponsoring and attending this event for the first time this year. Our objective is to meet with our members’ end users, to help promote members and the data centre sector as a whole.
Data Centre World
24 – 25 May 2017, Hong Kong
The DCA are confirmed as event partners for DCW Hong Kong. Our Ambassadors (based in Hong Kong) are Andrew Green and Barry Lewington of PTS Consulting. Andrew is planning to present an energy efficiency case study and also to talk about the value PTS has gained from its collaboration with the DCA.
Data Centre Transformation
11 July 2017, Manchester
The DCA will again be partnering with Data Centre Solutions and Leeds University to organise and host the 2017 Data Centre Transformation Conference on the 11th July 2017. In 2016 we introduced a completely new workshop format which was refreshingly different from the more traditional conference format. This was such a great success that the format will be repeated again this year. Workshops will have a theme that is currently of importance to the industry, each being designed and moderated to ensure they are vendor neutral.
Last year’s sessions proved to be very educational, with a high level of delegate interaction. Everyone the DCA spoke with felt they had a voice and could contribute to the overall discussion. The conference format also allows ample time between the workshop sessions, providing a great opportunity for delegates to follow up on the points raised in the sessions and speak to the experts. This year there will be six workshops throughout the day.
DCA Golf Tournament
14 September, Oxfordshire
The DCA Golf Tournament is scheduled to take place on the 14th September 2017 at Heythrop Park, Oxfordshire. It is the first time we have had the tournament at this course so it should be fun. This is a popular event allowing our members and partners to enter a team to play at this picturesque 18 hole course. Look out for our mailers and on our website for information on how to register.
‘Let’s Go Blues’ might be the chant of the St Louis Blues ice hockey fans, and it might also be the name of a song composed by jazz legend Duke Ellington, but, in the data centre world, if there’s a company that best epitomises the energy and enthusiasm behind the simple expression ‘Let’s Go’, and has every reason to make a song and dance about its infrastructure technology innovation, then it has to be Schneider Electric. DCS editor, Philip Alsop, looks back on his recent visit to the US.
Press trips (as opposed to press conferences) tend to be memorable for all the wrong reasons. Timings go awry, the material of interest is spread rather thinly over an unnecessarily long timeframe, and the social activities can best be described as ‘ordinary’. I still bear the scars of one of my very first press trips – a week long coach ‘crawl’ to an unmemorable town in Germany, to be present at a factory extension opening. Everything ran late, the factory opening, when it eventually happened, took about half an hour, and the rest of the time was spent in various breweries and bars. Not so bad I hear you say. But when you have a thousand and one things to do back at the office, and most of your companions are drinking at the breakfast table...
Even the okay trips leave one distinctly underwhelmed when the time/value sum is calculated. However, the reason journalists still make the effort to go on selected overseas visits is that, just occasionally, there’s a hidden gem uncovered, and the flights, transfers, delays and faceless hotel rooms are a trifle compared to the wealth of (useful) information presented during the various presentations and one-to-one meetings.
My recent visit to the Schneider Electric Innovation Center in O’Fallon, Missouri wasn’t so much a case of discovering a hidden gem – after all, APC by Schneider Electric is a household name in the data centre (or should that be a data centre name in the infrastructure space?) – rather, confirmation that when it comes to addressing the issue of energy management, the company has few, if any, peers.
Kevin Brown, SVP Innovation and CTO of Schneider Electric’s IT Division, started proceedings with a fascinating look at some of the technologies and trends that the company believes will matter in the next few years, underlining the fact that, as everyone heads towards the digital world, the progress made will only be as good as the facility foundations and the IT infrastructure that they support.
While many of the subsequent presentations were sneak previews of announcements to be made over the coming months – starting with DCW in March – it’s giving away no secrets to say that major developments focusing on edge computing, cooling, power and modular data centre design are coming your way very soon. And it would be no exaggeration to say that each of these new technologies is little short of game-changing.
Regular readers of Digitalisation World’s sister publication, Data Centre Solutions Europe, will be familiar with the many Schneider Electric articles we have carried over the past few years. And many of these have been based on the vendor-neutral white papers that the company produces when it wants to gain exposure for a new technology and/or a new approach to address various critical issues in the data centre. One of the most memorable presentations given during the recent trip was by Victor Avelar, Director and Senior Analyst at Schneider Electric’s Data Center Science Centre who talked through some of the ideas he and his research team are working on right now, with a view to publishing thought-leadership pieces and, eventually, bringing online the best ideas to become a part of the APC by Schneider Electric product portfolio at some stage in the future.
Most ear-catching to your correspondent was Vic’s deceptively simple questioning of why the industry seems to be obsessed with building large data centre facilities. For a future white paper, Vic is researching the idea of replacing a single large data centre with multiple smaller data centres of equal total capacity. A deceptively simple suggestion, and one that needs a few minutes to digest, but Vic is currently doing the maths to see whether the smaller micro data centre solution is cheaper to run overall than a single large facility. What’s more, he’s testing his theory that the distributed model may also deliver levels of built-in resiliency that a single facility could never achieve.
Just when you would have thought that the enthusiasm of the presenters would be waning, and the attention of the journalists would be turned to thoughts of returning home, Schneider Electric produced Senior Power Manager John Gray, who managed to hold his audience spellbound as he talked us through recent UPS developments and the likely role that lithium ion batteries will have to play in the future. John even managed to include a (really) funny engineering joke in his presentation/tour: “What’s the difference between an introvert and an extrovert engineer? The extrovert engineer looks at your shoes when he’s talking to you!”
And it was this combination of energy, enthusiasm and fun that was the hallmark of the visit. A combination that was carried over to the excellent hospitality offered by Schneider Electric, which included taking in a St Louis Blues ice hockey game (hence the intro reference), and the last evening ‘game’ of trying to come up with song titles suitable for the data centre. A pretty extensive list was assembled. Naturally, all the songs would be played by either AC/DC or 2U, with the top tracks including: Racking All Over The World, Chiller, I Can See for Aisles and Aisles, The Long and Winding Code, and Do You Really Want to Earth Me?!
During the visit, I was lucky enough to get some one-to-one time with Kevin Brown, and we talked about the emerging focus on edge computing and how this might impact on the data centre, and how Schneider Electric technology can help end users to optimise their data centre infrastructure in what seems to be a rapidly developing hybrid data centre world. The full interview can be viewed at: http://bit.ly/2mdZams, with an abridged version below.
Q: How would you characterise edge computing?
A: That’s an interesting question. We’ve been working a lot recently on trying to define what exactly the edge is, because I don’t know if there’s a clear definition. God forbid that the IT industry comes out with a term that’s not well-defined! The real implication of the edge – not too long ago there were theories that everything would be in giant cloud data centres, and there’d probably be only one or two big data centres, perhaps located in California. But what we’ve learnt now, certainly what you’re seeing, is that there are bandwidth constraints that are hitting; whether it’s Netflix streaming videos, it’s actually less expensive for them to put some of the content closer to the customer to avoid some of the bandwidth costs. And we have other applications such as gaming where latency is becoming an issue, and then there’s privacy regulation, data sovereignty, government regulations – all of which are not allowing us to have this original vision of everything being in one or two data centres. There’s no question that it’s going to get distributed out, and we think it’s going to take a couple of different forms – you’ll have big centralised cloud data centres, and then you’ll have regional data centres. Where we think there’s a very interesting thing happening is at what we call the localised edge, where what was yesterday’s server rooms and wiring closets will start turning into data centres – very small ones, maybe micro data centres – but this is turning into a very complex environment.
Q: Is it all central content going back out, or, for some of these edge sites, is it localised data – with the IoT coming on stream, instead of bringing it back to the data centre and sending it back out, you’re doing a lot of the processing at the edge because that’s where it is ultimately needed?
A: Our belief is that many applications will be processed at the edge, in one form or another. A simple example is a driverless car. If you have a driverless car, you’re not going to want to be dependent on its connection back to a centralised cloud data centre to make a decision as to whether or not to hit the brakes – that will take too long. It would make sense to build processing power into the driverless car itself. Another is remote applications, such as oil and gas exploration, where the only internet connection is through a satellite link; there’s a tremendous amount of data being created and processed that has to be stored somewhere. Maybe later it can be uploaded for further analysis, when you have more bandwidth. Those are two extreme examples.
If you look at a lot of the enterprises and applications we’re seeing, the application you are using will drive those types of decision. It’s not going to be everything’s on the cloud or everything’s on the edge. It’s going to be: I’m running this specific type of application; for that application I need this type of performance; for that performance I’ll implement the solution that makes sense. And so, we think the future is going to be this hybrid environment. The trend towards edge is really a trend towards a very integrated, hybrid environment, where you’re going to have a little bit of processing maybe at the localised edge, some in a regionalised data centre and some happening at a centralised cloud data centre. Depending on the application you’re running, you’ll be doing a different mix – different levels of compute, different parts of the hybrid picture.
What we think is interesting too from a data centre perspective – we’ve done a lot of analysis on this and published a white paper – is that the maths really points you towards, if you have a hybrid environment, the weakest link being the localised edge. The localised data centre is becoming the weakest link. I could have a Tier 3 regional data centre and Tier 3+ type reliability in my big cloud data centres, but if I hook that into something that is, say, Tier 1, using the Uptime Institute’s methodologies, that actually starts to become the driver for what your reliability is. If our IT systems are becoming more critical, if we’re becoming more dependent on them, if we’re going to take them up to the next level of reliability, we really see that this localised edge – even if there’s very little processing happening there, it’s the critical connection point – is the weakest link now in the system. We’re seeing our customers realise that, and we’re in the process of developing solutions to help them meet that objective and address that weakest link.
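A minimal sketch of that ‘weakest link’ arithmetic. The availability figures below are illustrative only – the Uptime Institute tiers do not prescribe availability percentages, and these numbers are not taken from Schneider Electric’s white paper:

```python
# End-to-end availability of a chain in which every site must be reachable
cloud = 0.9999        # illustrative 'Tier 3+' style central cloud data centre
regional = 0.9998     # illustrative regional data centre
edge_closet = 0.995   # illustrative unprotected server room / wiring closet at the edge

print(f"{cloud * regional * edge_closet:.4%}")  # ~99.47% -- dominated by the edge site
print(f"{cloud * regional:.4%}")                # ~99.97% -- what the chain could reach if the edge matched the rest
```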
Q: It’s quite a complicated equation to work out where the data needs to be?
A: Where the data resides will be driven by the application. Most of the CIOs and IT people can pretty quickly analyse where that needs to happen. If I’m running Microsoft Skype I’ve got one set of requirements; if I’m running email I’ve got another set of requirements. Those might be stored in different places. What we think is interesting too, taken from a data centre perspective, is how do I know which of my small edge data centres I need to invest in, in order to have the biggest impact on the reliability of the overall system?
I’ve seen installations of single-rack systems where you have a thousand people dependent on a single rack. Historically, we wouldn’t have thought about whether maybe I should have dual power, maybe dual network connectivity. What we’re seeing evolve is conversations around: OK, it’s not about the size of the data centre – it’s almost irrelevant how big it is – but how do I define how mission-critical it is? And it becomes much more about how many people are connected to this one local edge site and what’s running there, what’s the business process?
Take the example of a call centre: it becomes very mission-critical. For our company, if our call centre goes down and we can’t service our customers, that’s a problem. If we’re in a building like we are today and it’s just you and me connecting to our email, and we lose our connection, it’s not business critical. We’re seeing customers doing that type of analysis. It’s no longer just worrying about my big centralised data centres; when I go to this hybrid environment, how do I determine which are the really mission-critical sites, and how do I make sure that this localised edge is at an appropriate level of availability? That’s very dependent on what the application is, what’s being stored there, what it looks like.
I like to tell people that in going to a cloud architecture – if you roll out Office 365, say – in some ways you’re simplifying your life, but we think there’s also an additional complexity that comes with it, because you have to really start understanding what your bandwidth is and what you are doing at your localised edge, more so than in the past. We think that’s playing out in the market.
Q: Utilisation. Centralising everything – consolidation – is attractive as it leads to better utilisation rates of infrastructure. Is there a danger that, by moving back to the edge, there’ll be under-use of IT resources again?
A: It’s always a risk. There are a lot of tools in place to help make sure we’re getting the best utilisation we can. Whether it’s a hyperconverged system – and we’re seeing customers ask for even half-rack-size designs – a lot of this is because, I believe, they’re becoming very application-specific. So, in the past we’d just buy a generic piece of hardware and then run an application on it, and whether it used it all or not you didn’t know. And then we went to virtualising – we’ll take all these applications and run them on it – and that gives you flexibility. What’s happening now is that’s getting really abstracted out. You define exactly what the applications are that will be running on this piece of equipment, so you can pretty much size it effectively. We’ll become much more effective in our use of energy because of that, and it’s because we’re going to start designing things with the application in mind.
In the past, when you had this poor utilisation, whether it was because of technology limitations, because virtualisation wasn’t in place, I don’t think we had this mind set of being very application specific and designing to that. Holistically, what’s my application, what’s the IT I need to support that, and what’s the physical infrastructure that’s behind that. I see much more of those types of conversation happening now than we did in the past and I expect that to continue and therefore we’ll see much more specialised data centres, micro data centres being implemented.
Q: In terms of Schneider’s involvement, do you expect customers to come to you saying they want a micro data centre or will you be involved in helping them to decide their data centre strategy?
A: I think we have a role to play in terms of helping the customer analyse this. We invest in research and publishing white papers and putting some of these ideas out, and I think that pays off. I’ve had conversations with CIOs about what are you doing at your localised edge, and how are you thinking about these different issues. And of course as part of that we’re designing solutions to address that, for them.
We didn’t talk about cybersecurity, but when you look at your localised edge and you look at the traditional server room and wiring closet, I’d argue that most of those may not have the same level of security that you’d see in a typical colo or regional data centre. Physical security is the first step towards cybersecurity.
So, how do we provide solutions that are more robust, or simpler for our customers to deploy, easier for them to get the level of security and management tools that they need. For us, the challenge becomes all of these are becoming much more integrated and much more specific, and we’re adjusting the way we do engineering, the way we build our supply chain, all of that is changing very rapidly for us in order to really help our customers to meet those challenges they have.
So, it’s a combination of, one, coming in and talking about what the problem is, but, two, on the back end we’re making a lot of changes in order to adjust to this new reality that we’re facing.
In recent years, there has been a rush from enterprises to migrate to the public cloud. With the well-documented benefits of the cloud – namely increased efficiency, flexibility and scalability – organisations from almost every single industry have made a substantial investment in cloud technology.
By Sesh Sayani, Gigamon Director of Product Management, Cloud Solutions.
In addition to better performance and security, companies have been able to break into new markets, revolutionise their industry and secure increased revenues through the cloud – take Airbnb and the hospitality industry as a good example of this.
However, rather than performance, security and business opportunity, what are the key drivers motivating organisations to migrate to public cloud in the first place? In a word, “Cost”. In a few more words, total cost of ownership (TCO) of a software application.
To fully understand the cost, one should consider what it takes to operate a traditional data centre.
There’s hardware, power, cooling, physical space, wiring, staffing, maintenance, disaster recovery. The list goes on, and it’s a potentially expensive list. When looking at what it takes to get a single application up and running, it doesn’t take long to realise that the requirements are no different than those of running a data centre. And the expense isn’t limited to buying a software package or hiring programmers to develop an application. Today, most companies have finite budgets that usually stretch across a number of siloed teams — security, networking, storage, virtualisation — each of which works independently of the others and doesn’t understand the actual TCO of an application. For example, the network team doesn’t realise the cost to the security team, and vice versa. These silos prevent everyone from seeing the complete picture and being able to grasp the many advantages that the public cloud brings.
The public cloud provides companies with a simple way, using common interfaces and tools, to deploy virtualised versions of what they would normally run as physical infrastructure to support their applications. By pushing those functions to the cloud provider to handle, companies can deploy applications much faster and, in most cases, for a lower cost.
A significant force in cloud computing today is the rapid evolution of Infrastructure-as-a-Service (IaaS). With benefits such as economies of scale, usage-based pricing, agility, and elasticity, there is a compelling argument for companies to evaluate the network and storage infrastructure offered by a public cloud provider. Unlike Software-as-a-Service (SaaS), where application ownership and security of information is the responsibility of the SaaS provider, an IaaS places the responsibility of application and information security on the tenant.
However, as with many things that sound too good to be true, there are often constraints holding enterprises back from running many mission-critical apps in the public cloud over an IaaS. The technical reasons behind these concerns often boil down to the inability to view traffic and information traversing the public cloud, security concerns to run critical apps due to said lack of accessibility, lack of sufficient tools in the public cloud (as opposed to on-premise deployment tools), and the varying backhaul costs from the IaaS provider to an enterprise.
Even the big players in the industry, such as Amazon Web Services (AWS) – the recognised leader in public cloud IaaS, offering more than 70 cloud computing services and with more than a million customers in 190 countries worldwide – can find it a challenge to offer fully pervasive visibility to customers.
AWS holds a 45 percent share of the public cloud infrastructure market, serving the needs of small businesses and Fortune 500 giants. Netflix, for example, is nearly 100 percent inside the AWS public cloud today, with AWS hosting the entertainment company’s corporate applications and the entirety of its streaming operations.
Rightfully so, customers are concerned about security. With cyber threats increasing in frequency and severity, organisations need constant insight and visibility across their infrastructure to manage workloads and applications as they scale to the cloud, as well as visibility into information crossing the public cloud.
The same issue – limited visibility – that has always existed with on-site servers persists in the public cloud. There is little visibility into most traffic that stays within the cloud; and without visibility, tools for intrusion detection, malware analysis, data loss prevention, etc., can’t perform optimally or track threats that propagate laterally. What’s more, companies are also finding it difficult to ensure that everything in the public cloud meets their own specific compliance requirements.
Organisations need to ensure that they are ahead of the curve when it comes to protecting the integrity of their data, while standardising their solutions across their cloud and on-premise environments. No matter where its infrastructure is, be it on premise or in the cloud, a company’s cloud architects, security operations, and DevOps teams need the same deep visibility to perform content inspection and secure their mission-critical workloads. They also need a standardised means to access network traffic in an AWS VPC and to send traffic from Amazon EC2 instances to multiple tools on demand.
With pervasive visibility, organisations can uncover blind spots to help detect threats in the infrastructure. Indeed, visibility to data traversing public clouds is critical for continuous monitoring and security of these application workloads. Companies need to be able to have their security operations and network operations teams centrally secure and monitor their on-premise and/or off-premise workloads, thereby providing comprehensive visibility regardless of workload location.
With the continuing demand to adopt cloud infrastructure services, and AWS making it easier for companies to easily and securely do just this, it’s safe to say that the migration to cloud isn’t going to abate anytime soon. However, companies need to be wary when protecting their data, as a motivated attacker can get into any network and lie dormant until they feel it is the right time to strike. Without pervasive visibility into said networks, organisations will be flying blind, and companies as well as their customers will pay the price. Active visibility into data-in-motion network traffic can enable stronger security and superior performance within networks, because at the end of the day, if companies can’t see their network, how can they expect to protect it?
It is time for enterprises to start looking at how they are going to approach 2017. In order to look forward, a retrospective view over 2016 is required, to see what key learnings and insights can help shape and future-proof their business. Digital transformation has moved from the territory of the young start-up to a necessity for all businesses, regardless of industry.
By John Newton, CTO and Co-founder of Alfresco Software Ltd.
The Gartner CIO Agenda Report 2016 revealed that digitalisation is no longer an innovative trend, but a core competency. Companies face demands from all corners: employees increasingly desire flexible working and remote access, suppliers want systems that facilitate collaboration and agility, and the customer wants a system that performs above and beyond their expectations. In order to respond to this, a business has to examine its own processes closely. Measures must be implemented that encourage seamless transitions, anticipate users’ needs and automatically incorporate the latest updates.
The aim of any digital transformation should be to maximise ‘digital flow’. This can be defined as a state of maximum productivity in which people, processes, and information are connected quickly, intuitively, and effortlessly. Whilst in a state of flow, silos are disrupted and different entities work together in an interconnected way that is mutually beneficial. As a result, digitisation creates new information that moves differently, and at a faster rate, than old analogue processes. It also facilitates and encourages feedback from both customers and suppliers, which leads to faster and more informed innovation, giving a measurable advantage over competitors.
Many factors are involved in achieving flow, and the process is constantly changing. However, the process of creating a business fit for the future starts in a simple place: how the decision-makers think. There are three components to this approach, which can broadly be described as Design Thinking, Platform Thinking and Open Thinking.
Design Thinking is user-centric, where processes respond immediately to the needs of the user, delivering content when and where it’s required. It anticipates what the user wants and provides specific solutions to problems that the user is trying to solve. Platform Thinking refers to the use of a digital platform to manage, deliver, integrate and automate processes, in order to make content available in the best form possible, and importantly, is scalable for change and growth.
Open Thinking is a broader way of talking about open source technology and the open standards that come with it. Collaborators, even from outside the business, can constantly examine, change and update software, creating the best version possible that is always evolving.
Subsequently, there are myriad benefits to customers, employees, and suppliers that come when flow is achieved, as well as an overall improvement in efficiency, because the processes remove friction from all interactions, enabling greater automation and collaboration. The benefits include:
Clients and partners will feel that their demands and input are closely considered thanks to a continuous loop of feedback. This means that organisations can gain insight and analyse behaviour in order to modify, improve and tailor the experience, and clients will feel a greater transparency into processes. This trust between clients and businesses can be difficult to achieve, but once attained, it can be easily maintained with this flexible and open approach. The user should always be at the centre of what the business is trying to achieve.
As a result of bringing together connections across silos and seemingly disparate groups, a more organic platform for collaboration between designers, suppliers, employees and customers is achieved. Everyone has access to the content and contacts they need, when they need it, wherever they are. This enables constant and reactive innovation, and while some ideas will fail, these insights must be embraced, recovered from quickly, and used to improve future iterations. This helps a business withstand the vicissitudes of the industry, giving it an edge over competitors.
Advancing flow provides fertile ground for the creation of new products and revenue streams through the aforementioned establishment of connections. In fact, when you implement a digital platform for the end-to-end integration of information, you’re already putting the building blocks into place. This is because you have content, information, people, and processes interacting in a single, seamless system across every business role and function. Teams will be more aligned and departments work closer together, cutting duplicate processes, and enabling time for creativity and business development.
Customers are at the heart of every business and in the digital marketplace, speed and efficiency are more important than ever. Enterprises pull ahead of their competitors by working smarter, innovating better, and getting to market faster. This is exactly what flow enables. Customers expect a quick response and turnaround from a business, so it is advantageous to minimise the time it takes to locate and utilise relevant content, in order to provide a consistent, high quality service. Customers also need solutions with a simple interface so that they can engage with content and seamlessly participate in key business processes.
Most companies are currently in a state of digital transformation, and the advantages of this have been well-documented. However, going the extra step to achieve digital flow benefits all parties involved (employees, suppliers and customers) by propelling innovation, facilitating a smoother working process and creating connections. A flexible, scalable, and open approach that intelligently activates content and processes is essential, delivering the fastest path for people to interact with information and for companies to respond to changing business needs. Ultimately this makes a business more fit to withstand the unseen challenges that 2017 brings.
As businesses increasingly embrace digitalisation, many are finding they have built a tangled suite of IT tools in an effort to properly manage their networks, apps, and components. This has led many of these organisations to form independent data islands. The result is a one-dimensional and incomplete view of their IT systems.
By Sridhar Iyengar, Vice President, ManageEngine.
To solve this issue and continue building up their IT systems effectively, organisations are quickly turning to IT operational analytics tools that will allow them to analyse data from multiple sources and spot trends quickly, enabling effective decision-making.
Organisations across the globe are turning to self-service analytics tools as their top choice, avoiding more costly or complicated solutions that do not provide easy user access to important data.
Here are my top five tips on how any organisation can be successful using self-service analytics.
1. Make data accessible to a wide range of employees
Owing to their often high levels of complexity, traditional business intelligence tools have always been relegated to the hands of select data experts, meaning that decision-making capabilities were limited to just a privileged few. Fortunately, that is no longer the case. In the modern world, data is an integral part of any business – across nearly every sector imaginable – and so users need to be able to access it on a daily basis so they can make decisions on their own.
2. Empower all users
Long gone are the days when users had to wait patiently for the IT department to furnish a report or a chart to get the information they needed. IT and business users alike are no longer willing to depend on other sources to fulfil their reporting requirements, and the majority prefer to do it on their own. Self-service business intelligence tools provide greater flexibility in this regard and allow users to quickly carry out a wide range of important tasks, including creating personalised reports, acquiring real-time insight on the data they need, and carrying out necessary action.
3. Embrace personalisation
Different teams will all likely have very different reporting needs. With this in mind, self-service reports can provide a huge boost to productivity, as they can be personalised based on the individual requirements of staff. By enabling a fuller level of personalisation, they can also provide more insight into why certain strategies are more likely to work than others.
4. Enable quick and easy on-demand reporting
Ad-hoc reports are more popular than standard reports, as they provide answers to specific questions. With this in mind, organisations need to be able to create ad-hoc reports instantly. With self-service reporting, enterprises can finally help users easily access and share any pertinent information they need.
5. Good visuals can go a long way
A visually driven, intuitive user interface adds more context to data, and allows users to instantly view, interpret, and analyse their information. Users can now create reports and dashboards quickly and easily using intuitive and powerful visualisation tools such as charts, widgets, KPI metrics, pivot tables, and much more.
Self-service tools allow users to visually slice and dice data, drill down into the gritty details, and even change the appearance of reports with different chart types and a wide range of predefined templates.
To initiate a move towards Security Assurance, start by asking the relevant people within your organisation: Is there an attacker in your network right now? How would you know?
By Steve Schick, Sr. Director of Educational Communications at LightCyber.
Here’s the scenario. An intruder has been lingering in your network for eight months. Since first gaining a foothold by compromising the HR Director’s computer, the intruder has been able to quietly scope out the network and now has an excellent picture of the servers, data stores, cloud data centres, users, networking equipment and IP-enabled monitoring systems in the labs and in the manufacturing centre. From his initial point of control, the attacker is now entrenched in five other computers, including one that gives him top admin privileges.
Using his strong position, the attacker has read a number of interesting and highly confidential documents and delved into some important data. The intruder has reviewed the company’s revised business plan, a product strategy document and two-year roadmap presentation, perused the customer database, looked at entities in the accounts payable system and gained access to the development server that technologists use to collaborate in creating a brand new product.
To date, the intruder has done nothing other than to explore, observe and become more deeply entrenched in the network. Like most enterprises, this one has no idea that an attacker is present and has been lurking for months. What’s his motivation? Will he sell confidential information to competitors or unscrupulous investors? Will he threaten to publish all the company’s secrets unless a substantial payment is made? Will he manipulate the accounts payable system just enough to avoid detection for months while siphoning money to various bank accounts? Is there another extortion at play? Can he secretly manipulate the new product for extreme leverage after it has been released? What is his next move?
Unfortunately, this scenario is more reality than fiction. Currently, attackers are entrenched and hidden in numerous networks. Rogue or malicious insiders are also at work. Only a handful of enterprises can detect an attacker currently active on their networks. The attacker’s success is virtually guaranteed.
Today, after years of unrelenting data breaches as well as deeply disturbing incidents that signal what else might be possible, executives and those with a fiduciary or regulatory responsibility for an organisation must be able to know if an attacker is present in their network. Boards of Directors, CEOs, CIOs, and others must ask the question of their security or IT teams: is there an active attacker in our network? How would we know? What is our degree of confidence?
Of course, the ability to find an attacker—whether an insider or an externally-based bad actor—early is crucial. Organisations need to detect an attacker before theft or damage can occur. Most do not have this capability today, but it is rapidly becoming a must-have.
The opposite is also becoming a necessity. There is great value in knowing that your network is free from active attackers. In being able to answer this question with confidence, one can satisfy a growing requirement for boards and top executives—to attest that the network is safe and have a strong level of confidence that if an attacker did penetrate the network they could be found quickly and accurately. Security Assurance is something that those with corporate or organisational responsibility should demand and that security teams should be able to provide. The basis should be complete visibility of the internal network with the ability to distinguish the operational activities an attacker must perform once they have a foothold in a network.
Seeing an in-progress attack is the most important ability, but knowing the network is safe is also essential. In 2017, enterprises must start delivering Security Assurance. (We have recently added a report to our Magna platform designed to attest to Security Assurance, which is well suited for executives and board members.)
Besides the need to protect the enterprise, executives will also soon be held accountable for what they have done to safeguard customer data. The concept is inherent in the General Data Protection Regulation (GDPR) which applies to any organisation with personal information of consumers residing in the EU. The SEC, FTC and various Attorney General Offices are also starting to take a hard look at whether enterprises have taken reasonable steps to protect the data they have been entrusted with and whether they are protecting the integrity of their business, particularly if shareholder value is needlessly put at risk.
To initiate a move towards Security Assurance, start by asking the relevant people within your organisation: Is there an attacker in your network right now? How would you know?
DW talks to Paul Smethurst, Managing Director of Hillstone Products, to understand the research and subsequent development work that has led to the recent launch of the new GENSET_loadbank range – a permanent load bank solution helping to reduce energy costs and increase reliability in the data centre.
Q: Please can you give us some background on Hillstone’s involvement in the data centre industry/with load banks over the past few years?
A: I can pinpoint four important events fundamental to establishing Hillstone at the forefront of the load banks used today worldwide in the datacentre industry.
The first dates back ten years to 2007, when we were approached by British Telecom, who wanted to rent a number of rack-mounted server simulator load banks. We took a clean sheet of paper and a pencil to design the 3RM, a 3kW rack-mounted load bank.
The second significant development was in 2009 with a request from Telecity to purchase a total of 2000kW of rack load banks for testing a new large empty white space data hall. Our design team created the 6RM, a single phase and three phase server simulator with granular heat dissipation and adjustable delta T from 5°C to 20°C. Then, to overcome not having any available cabinets, we designed the 20kW mini-tower.
In 2010, when the BBC’s new home at Media City in Salford needed load banks for IST commissioning, Hillstone were ideally placed 20 minutes down the road to heat load the studios and datacentre. The project was delivered to Bovis Lend Lease and NG Baileys with support from our datacentre testing team.
The fourth was also down to being ideally placed, but this time overseas in Dubai, where 12 years of prior sales travel in the Middle East resulted in the establishment of Hillstone Middle East in 2012. Our appointment by Blackberry and regional telecom operators Etisalat, DU, Mobily and STC for IST services allowed datacentres in Dubai and Saudi Arabia to also obtain Uptime Certification.
Q: And what prompted the recent research into why a datacentre needs a permanent load bank?
A: I shared a power workshop with Ian Bitterlin at the Datacentre Transformed Conference in Manchester in July 2016, and it was his matter-of-fact statement that every datacentre needs a load bank that prompted me to ask why, and also how this could increase my sales of load banks.
Q: And what did the research reveal?
A: Of course, Ian was correct, and the research has increased sales. But the most significant findings are that datacentres can reduce energy costs and improve reliability by installing a load bank, alongside our wider understanding of fuel being the single point of failure and the impact of running on a low IT load.
Q: And this research has led to the creation of the GENSET_loadbank range?
A: From a commercial point of view, the research has allowed the re-branding of our load bank ranges with a focus on application-specific branding. The GENSET_loadbank range is therefore for permanently installed load banks used with generator systems.
The important features within the GENSET_loadbank range are the ability to allow auto-regulation of load when the genset is running and also to allow manual control during planned maintenance, which are both very important datacentre requirements.
Q: This range is designed to prevent breaches in Service Level Agreements (SLAs) and warranty conditions?
A: There is nothing new in warranty conditions for running a generator, and it is not specific to datacentres to require the genset to run at a load of at least 40%. So unless the datacentre has a permanent load bank running on the genset to compensate for low IT load, the facility will either breach the SLA or pay a high premium for additional maintenance and servicing, which in turn reduces reliability and uptime.
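As a rough illustration of the sizing arithmetic behind this point, the short sketch below works out how much supplemental load a permanently installed load bank would need to supply to keep a genset above the 40% threshold mentioned here. The genset rating and IT load figures are assumptions chosen for the example, not figures from Hillstone.

```python
# Illustrative sizing sketch; the genset rating and IT load are assumed values.
genset_rating_kw = 2000.0    # prime rating of the genset (assumed)
min_load_fraction = 0.40     # minimum load typically required by warranty/SLA
it_load_kw = 300.0           # actual IT load carried by the genset (assumed "low IT load")

required_load_kw = genset_rating_kw * min_load_fraction
load_bank_kw = max(0.0, required_load_kw - it_load_kw)

print(f"The load bank must supply at least {load_bank_kw:.0f} kW "
      f"to keep the genset above {min_load_fraction:.0%} load.")
# -> The load bank must supply at least 500 kW to keep the genset above 40% load.
```

An auto-regulating load bank, as described above, would reduce its own contribution as the real IT load grows.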
Q: And to overcome fuel being the single point of failure in the data centre?
A: This was a great find of the research, and really it is a very simple concept, in so much as with a permanently installed load bank the genset will burn fuel from both the day tanks and the belly tank every time it is running.
So if the Standard Operating Procedure (SOP) is a weekly test to make sure the genset battery will start the genset, then the load bank will pull fuel through the fuel system and the generator will be exercised under load.
This will therefore cycle the fuel and prevent bad fuel materialising with or without a fuel polishing system.
Q: And this is achieved by maintaining best practice expected in mission critical data centre infrastructure?
A: Yes. Because datacentres are the factories of tomorrow’s world, they have been designed around mission-critical power systems, which require the implementation of best practice for operation and maintenance procedures. So, with a genset having a basic requirement of running on a load of at least 40%, and today’s datacentres having low IT load, the only best practice we can implement is to install a permanent load bank on the generator system.
Q: And this therefore prevents the costs of wet stacking?
A: Wet stacking is the expensive mistake of running the genset on low load: with less than 40% load, the engine will not reach its optimum (or design) running temperature. This results in un-burnt fuel deposits and carbon build-up in the engine’s cylinders. Whilst normal genset users may solve this problem during annual load bank maintenance, the datacentre genset cannot chance the risk of failure or the extra costs of downtime for extended servicing, such as an engine strip-down.
Q: And how does the GENSET_loadbank range help to increase load bank reliability?
A: Reliability is a function of uptime, which is obviously the inverse of downtime. If the weekly genset start/stop test validates the fuel supply, the genset runs as per warranty and SLA conditions, and maintenance downtime on the genset is reduced, all by installing a load bank on the generator system, then the datacentre increases its reliability.
Q: And how does it help reduce energy costs?
A: Having a permanently installed load bank allows energy efficiencies in the mechanical cooling systems to be implemented effectively and responsively at low IT load. The reliance on using the full mechanical load to contribute to the loading on the genset is therefore no longer needed, and the annual savings in power usage will be reflected in the datacentre’s PUE.
Q: And what is the typical ROI?
A: The ROI on the load bank could be measured in hours or days rather than months or years, because if the genset fails, so does the datacentre. An alternative viewpoint could be the reduction in required maintenance, the ability to remove SLA penalty breaches, or an overall reduction in SLA costs.
In monetary terms, a load bank for a 2500kVA datacentre genset system should cost less than £10,000, so a reduction of £5,000 per year in load bank rental costs, maintenance service visit costs and energy costs will easily give a payback of less than one year.
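As a hedged illustration of that payback claim, the sketch below assumes the £5,000 annual reduction applies to each of the three categories mentioned (rental, maintenance visits and energy); on that reading the load bank pays for itself in well under a year. If the £5,000 were instead the combined total, the payback would be roughly two years.

```python
# Illustrative payback calculation; the per-category reading of "£5,000 per year"
# is an assumption made for the example, not a figure confirmed in the interview.
load_bank_cost_gbp = 10_000            # upper bound on purchase price from the article
annual_savings_gbp = {
    "load bank rental no longer needed": 5_000,
    "reduced maintenance service visits": 5_000,
    "energy savings": 5_000,
}

total_annual_saving = sum(annual_savings_gbp.values())   # 15,000
payback_years = load_bank_cost_gbp / total_annual_saving

print(f"Payback period: {payback_years:.2f} years")       # -> Payback period: 0.67 years
```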
Q: Finally, the GENSET_loadbank range works by delivering dual automatic and manual operation, and features open software that integrates with both BMS and DCIM?
A: The ability to integrate the load bank easily, and without cost implication, to the BMS or DCIM is another user-focused feature of the GENSET_loadbank range. As responsible engineers, Hillstone want to lead the industry forward with low-cost, flexible load bank software integration based on simple open source principles and development for tomorrow’s modern datacentre.
Over the past twelve months we’ve seen an explosion of data, an increase in the processing of it, and a move towards information activism. This means that the employees actively able to work with, and master, the huge amounts of information available, such as data scientists, application developers and business analysts, have become a valuable commodity.
By Dan Sommer, Senior Director, Qlik.
Unfortunately, however, there still aren’t enough people with the expertise to handle the ever-increasing, vast levels of data and computing. You would assume, with all the information currently being produced and held by businesses, that 2017 would see us in a new digital era of facts. But, without the right number of specialists to consume and analyse it, there’s a gap in resources. Data is, unfortunately, growing faster than our ability to make use of it. For many business leaders, then, this means a reliance on gut instinct to make even the most important decisions. Unable to home in on the most important insights, they’re presented with multiple, and sometimes conflicting, data points, so even the most important ones seem unreliable.
The situation needs to change. Yes, that will mean upskilling more data scientists in 2017, but there will be a greater focus on empowering more people more broadly. That will go beyond information activists towards providing more people with the tools and training to increase data literacy. Just as reading and writing skills needed to move beyond scholars 100 years ago, data literacy will become one of the most important business skills for any member of staff.
So, what will change to see culture-wide data literacy become a reality? Here are my predictions:
1. Combinations of data
Big data will become less about size and more about combinations. With more fragmentation of data and most of it created externally in the cloud, there will be a cost impact to hoarding data without a clear purpose. That means we’ll move towards a model where businesses have to quickly combine their big data with small data so they can gain insights and context to get value from it as quickly as possible. Combining data will also shine a light on false information more easily, improving data accuracy as well as understanding.
2. Hybrid thinking
In 2017, hybrid cloud and multi-platform will emerge as the primary model for data analytics. Because of where data is generated, the ease of getting started, and its ability to scale, we’re now seeing an accelerated move to cloud. But one cloud is not enough, because the data and workloads won’t sit in one platform. In addition, data gravity means that on-premises infrastructure has long staying power. Hybrid and multi-environment will emerge as the dominant model, meaning workloads and publishing will happen across cloud and on-premises environments.
3. Self-service for all
Freemium is the new normal, so 2017 will be the year users have easier access to their analytics. More and more data visualization tools are available at low cost, or even for free, so some form of analytics will become accessible across the workforce. With more people beginning their analytics journey, data literacy rates will naturally increase — more people will know what they’re looking at and what it means for their organisation. That means information activism will rise too.
4. Scale-up
Much a result of its own success, user-driven data discovery from two years ago has become today’s enterprise-wide BI. In 2017, this will evolve to replace archaic reporting-first platforms. As modern BI becomes the new reference architecture, it will open more self-service data analysis to more people. It also puts different requirements on the back end for scale, performance, governance, and security.
5. Advancing analytics
In 2017, the focus will shift from “advanced analytics” to “advancing analytics.” Advanced analytics is critical, but the creation of the models, as well as the governance and curation of them, is dependent on highly-skilled experts. However, many more should be able to benefit from those models once they are created, meaning that they can be brought into self-service tools. In addition, analytics can be advanced by increased intelligence being embedded into software, removing complexity and chaperoning insights. But the analytical journey shouldn’t be a black box or too prescriptive. There is a lot of hype around “artificial intelligence,” but it will often serve best as an augmentation rather than replacement of human analysis because it’s equally important to keep asking the right questions as it is to provide the answers.
6. Visualization as a concept will move from analysis-only to the whole information supply chain
Visualization will become a strong component in unified hubs that take a visual approach to information asset management, as well as visual self-service data preparation, underpinning the actual visual analysis. Furthermore, progress will be made in having visualization as a means to communicate our findings. The net effect of this is increased numbers of users doing more in the data supply chain.
7. Focus will shift to custom analytic apps and analytics in the app
Everyone won’t - and cannot be – both a producer and a consumer of apps. But they should be able to explore their own data. Data literacy will therefore benefit from analytics meeting people where they are, with applications developed to support them in their own context and situation, as well as the analytics tools we use when setting out to do some data analysis. As such, open, extensible tools that can be easily customised and contextualised by application and web developers will make further headway.
These trends lay the foundation for increased levels of not just information activism, but also data literacy. After all, new platforms and technologies that can catch “the other half” (i.e., less skilled information workers and operational workers on the go) will help usher us into an era where the right data becomes connected with people and their ideas — that’s going to close the chasm between the levels of data we have available and our ability to garner insights from it. Which, let’s face it, is what we need to put us on the path toward a more enlightened, information-driven, and fact-based era.
With the cloud market becoming increasingly crowded, developers are under mounting pressure to create more innovative solutions and reduce their costs. It’s little wonder then that APIs (Application Program Interfaces), which can drastically reduce development time, have become one of the most prized tools in any developer’s arsenal. However, the benefits are tempered with an increased risk of cyber attacks.
By Sam Rehman, CTO, Arxan Technologies.
An API is essentially a set of instructions or routines to complete a specific task, or interact with another system, from servers to other applications. Because APIs are able to perform tasks such as retrieving data or initiating other processes, developers can integrate different APIs into their software to complete complex tasks.
Where this has become a real game-changer for the industry is, rather than spending countless hours writing every aspect of the software from scratch, developers can simply pick from an increasingly large selection of best-of-breed APIs developed by specialists, and plug them straight in. This transforms the development process from a time-intensive grind to something more akin to building with Lego.
Using ready-made components enables developers to considerably reduce costs and time-to-market, and perhaps more importantly also frees up time and resources to pour into the innovative and unique features that will cause their application to stand out.
APIs are so useful that some of the world’s largest companies are now making the majority of their revenue through them. Research from the Harvard Business Review found that Salesforce generates around half of its revenue through APIs, while Expedia uses them to create almost 90 per cent of its income. Alongside the big players are an endless selection of specialists, meaning that developers can access high quality APIs for almost any task.
Some of the most useful examples for cloud developers are APIs for Platform-as-a-Service that can integrate with databases, portals, and messaging components, and APIs for Software-as-a-Service that connect the application layer to the IT infrastructure. Additionally, Infrastructure-as-a-Service APIs can help with tasks such as quickly provisioning or de-provisioning cloud resources, or managing workload management and network configurations.
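As a simple, hedged illustration of the Infrastructure-as-a-Service case, the sketch below provisions and later de-provisions a compute instance through a cloud provider's API. AWS EC2 via the boto3 library is used purely as an example, and the image ID is a placeholder.

```python
# Sketch: provisioning and de-provisioning a cloud resource through an IaaS API.
# The AMI ID and instance type are placeholders; credentials and region are taken
# from the environment, as boto3 normally does.
import boto3

ec2 = boto3.client("ec2")

# Provision a single small instance
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... use the instance ...

# De-provision it when it is no longer needed
ec2.terminate_instances(InstanceIds=[instance_id])
```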
For all their benefits, however, APIs do come with downsides, exposing the cloud to a new attack vector that can be used to access the back-end server the cloud application is communicating with.
The weakness is the simple authentication widely used by most API management solutions to confirm that the client app on a device is genuine and has been authorised to use server assets. Typically, this is done using a simple challenge-response exchange as the client app tries to connect to the API server. This exchange is usually a cryptographic operation, which means that the mobile client generally contains a secret key for an asymmetric cipher such as RSA or ECC.
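To make that exchange concrete, here is a minimal sketch of challenge-response authentication using an ECC key pair (Python's cryptography package). It is a generic illustration of the pattern described above, not any particular vendor's protocol; the point is that the client's private key has to live inside the app, which is exactly what an attacker targets.

```python
# Minimal challenge-response sketch: the server checks that the client holds the
# private key embedded in the genuine app. Generic illustration only.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair created at build time; the private key ships inside the client app,
# the public key is registered with the API server.
client_private_key = ec.generate_private_key(ec.SECP256R1())
client_public_key = client_private_key.public_key()

# 1. Server issues a random challenge (nonce)
challenge = os.urandom(32)

# 2. Client signs the challenge with its embedded private key
signature = client_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# 3. Server verifies the signature; success is taken to mean "genuine client"
try:
    client_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Client authenticated")
except InvalidSignature:
    print("Client rejected")
```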
If attackers are able to break the application’s security and decompile its code, they can root out the encryption keys. Any application that is available for download is particularly vulnerable to this, as it can be attacked indefinitely until a weakness is found.
Once the keys have been found, attackers can use them to trick the system into recognising them as a legitimate client, enabling them to access anything the API was authorised to connect with. An API that accesses data on the back-end server for the cloud application, for example, could provide attackers with the ability to break in and steal sensitive data or perform other malicious activity.
The vulnerability introduced by APIs can be overcome by taking extra security measures alongside challenge-response based authentication. The most effective approach is to centre defences on protecting the cryptographic keys.
White-box cryptography is a particularly strong method for securely hiding cryptographic keys, even if a hacker has full access to the software. Using this technique, the original key material is converted to a new representation in a one-way, non-reversible function. This new key format can only be used by the associated white-box cryptographic software, preventing the hacker from finding it and using it for the challenge-response.
However, white-box cryptography can still be circumvented if the hacker is able to decompile the original application and modify the app or lift out the entire white-box software package, and include it in their cloned version of the application.
Particularly relentless attackers can be stopped with anti-tampering techniques that prevent code-lifting attacks and modification of the app. Anti-tamper techniques that also have RASP (Runtime Application Self-Protection) built in can respond to runtime attacks with customisable actions and notify the app owner that the app is being modified.
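The toy sketch below shows the general idea behind a runtime integrity check in its simplest possible form: the app hashes its own binary and compares the result with a value baked in at build time, taking a defensive action on mismatch. Commercial anti-tamper and RASP products are far more sophisticated and obfuscate the check itself; the expected digest and the notify_owner hook here are placeholders invented for the example.

```python
# Toy runtime integrity check, for illustration only. EXPECTED_DIGEST would be
# injected at build/packaging time; notify_owner is a hypothetical reporting hook.
import hashlib
import sys

EXPECTED_DIGEST = "replace-with-digest-computed-at-build-time"

def binary_digest(path: str) -> str:
    """Return the SHA-256 hex digest of the file at `path`."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def notify_owner(message: str) -> None:
    """Report a tamper event to the app owner (e.g. a monitoring endpoint)."""
    ...

def tamper_check() -> None:
    """Exit if the running binary no longer matches the expected digest."""
    if binary_digest(sys.argv[0]) != EXPECTED_DIGEST:
        notify_owner("Tamper detected: binary digest mismatch")
        sys.exit(1)
```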
By putting security measures like these in place to protect the cryptographic keys, developers can ensure APIs are able to communicate safely with networks and other applications. With the inherent security flaws taken care of, cloud software can take full advantage of the benefits of APIs without exposing itself or its clients to attack.
It’s getting hard to remember what it was like before we carried around mini supercomputers in our pockets! Nowadays, practically everyone is connected to the internet at home, in the office and on the move. This has introduced fantastic opportunities for businesses and employees to operate smarter.
David Venable from Masergy explains.
Employees have the opportunity to use their own personal devices for work purposes. The thought behind this is that employees are already familiar with their own devices and already have them on hand at all times. In theory, your team members will be more productive and happier at work with a BYOD scheme in place. BYOD is generally a good thing, but it is not without its challenges and concerns. Like any new development, the risks need to be evaluated.
Firstly, although BYOD has been around for a while, there is no universal set of guidelines for employers and employees to work to. But there are some best practices that security experts recommend.
The areas of highest concern within the enterprise are: data leakage and loss, unauthorised access to company data and systems, downloading unsafe apps or content and malware.
The Crowd Research Partners BYOD & Mobile Security 2016 Spotlight Report finds that 72% of respondents are concerned with data leakage and loss, 56% with unauthorised access to company data and systems, 52% with users downloading unsafe apps or content, and 52% with malware.
My personal view is that the most pressing concerns with BYOD are those of network and security stability. Keeping your company’s private and sensitive data secure is one of your IT department’s biggest responsibilities and BYOD adds a new dimension to this ongoing struggle. As the workforce becomes more reliant on mobile devices, the floodgates of data leakage and threats open up, resulting in an even greater reliance on the IT department to secure mobile devices.
Mobile phones and tablets are the weakest link when it comes to security and are prone to attacks. They also require regular patch updates, with the responsibility for these falling on employees. That leaves the onus on organisations to implement policies and procedures that help employees keep their devices secure.
Now, with employees carrying their devices all of the time, this means that these devices also have access to their employer’s network and secure data – all the time. This means that a lost or stolen device is a potential threat. It also means that any malicious program hiding on a personal device now also has access to your company’s network and data. All it takes is one infected device to compromise the integrity of your network and data security. Through BYOD, CIOs can have less control over the mobile devices used in their organisation, which ultimately means they are more vulnerable to attacks.
The Crowd Research Partners research also mentioned the threat of employees downloading mobile apps, and this I agree with. Employees can use these apps to connect to external Wi-Fi spots without having the correct security protocols in place. This creates serious security holes that can be exploited by hackers.
Couple this with the fact that your employees might not have anti-virus protection or an up-to-date firewall on their mobile devices, and they are even more vulnerable to attacks. To prevent viruses from spreading, it is important that there is a gatekeeper such as a VPN, which grants access by verifying that the data being transferred from the mobile device to your IT network is encrypted and permitted.
Here are steps organisations can take to ensure that information security won’t be needlessly impaired by the use of employees’ devices:
You must create a strategy for BYOD with a business case and a goal statement. As technology continues to advance and change the way we live and work, building a smart, flexible mobile strategy will allow companies to explore innovative ways to empower their workforce and drive greater productivity.
In addition, you must secure devices and apps by implementing a mobile device management (MDM) solution, or other container-focused management utilities, which will greatly help your organisation in managing and securing the devices. The policies on the devices or within managed containers should be defined by the risk assessment.
You can also complement end-user and administrative security with more extensive network safety: the creation of multiple virtual routing and forwarding (VRF) and virtual switching instance (VSI) environments on the same physical infrastructure allows separate virtual LANs (VLANs) for traffic segregation, i.e. trusted vs untrusted traffic. This way, a BYOD smartphone can be contained on a VRF for user-owned devices, and any malware that may intrude upon it can be kept from infecting the most trusted environment that’s reserved for corporate-issued systems.
Making BYOD a success requires organisations to intelligently detect nefarious activity, like APTs, that enter the corporate environment courtesy of user-owned smartphones and tablets. Network behavioural analysis and machine learning solutions that monitor network activity and adapt to changing threat conditions are a wise investment in supporting BYOD initiatives.
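As a rough sketch of what network behavioural analysis can look like in practice, the example below trains an unsupervised anomaly detector on simple per-device traffic summaries and flags devices that deviate from the baseline. The feature set and figures are invented for illustration, and scikit-learn's IsolationForest is used as a stand-in for a production analytics engine.

```python
# Toy behavioural-analysis sketch: flag BYOD devices whose traffic deviates from baseline.
# Feature columns (bytes out per day, distinct destinations, after-hours connections)
# are assumptions made for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline daily traffic summaries for known-good devices (one row per device-day)
baseline = np.array([
    [120e6, 35, 2],
    [ 95e6, 28, 1],
    [140e6, 40, 3],
    [110e6, 33, 2],
    [105e6, 30, 1],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

# A device suddenly pushing far more data to far more destinations, mostly after hours
today = np.array([[2.3e9, 410, 57]])
if model.predict(today)[0] == -1:      # -1 means "anomalous", 1 means "looks normal"
    print("Device flagged for investigation")
```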
With data loss, unauthorised access and malware just some of the concerns around BYOD, you must make sure all devices are registered, device-level encryption is installed and user policies are established. Educating employees on how to protect their devices, and ensuring they are configured in line with security policies, ensures that even the basic security precautions are adopted.
One thing is for sure: There’s no time to waste getting more done. Citing BYOD as a driver of innovation, as well as device and service cost savings, Gartner has predicted that by next year, half of all businesses will require workers to use a personal device for work.
As we start the New Year, inevitably our focus moves towards turnover and budgets for IT investment for 2017. When beginning to think about this, I would urge business leaders to look at IT as an asset and an enabler.
By Eduardo Cruz, VP UK & Ireland, OutSystems.
IT should deliver competitive advantage for the organisation rather than be treated as a sunk cost. Many organisations are now looking to digitally transform their business. They are looking at how customers can not only interact via digital channels but also start to self-serve. That said, the phrase ‘digital transformation’ often sounds incredibly daunting, especially when budgets are tight. Thanks to low-code platforms like OutSystems, however, digitisation doesn’t need to break the bank. It can actually be achieved cost-effectively and quickly.
Put simply, implementing digital transformation will see long-term reduced IT costs, reduced costs of doing business, faster time to productivity in core areas, as well as increased business growth and greater agility within IT to meet day-to-day business needs. Not convinced? Let me share with you a couple of examples:
When a global provider of image-processing equipment, Ricoh, partnered with OutSystems to replace several of its disjointed applications, it did more than just break down its siloed data stream. The company was able to avoid additional software costs, reduce its hardware costs, and increase developer productivity. Not only that, the company also achieved an ROI of 253 percent, while racking up annual savings of $131,967. Aside from our own clients, the UK government provides us with some fantastic examples of successful digital transformation according to the figures jointly provided by Crown Commercial Service (CCS) and Government Digital Service (GDS). Against a 2009/10 baseline the amount saved from transformation services was £891 million in 2012/13, increasing to £978 million in 2013/14. With last year’s figures added onto this, the combined total saved over these three years is £3.56 billion. This is a direct result of work done across government by departmental teams who have been building digital services and making better use of technology.
Another misconception that surrounds digital transformation is the false assumption that current business processes must be overhauled all at once, when in fact it is perfectly conceivable to transform parts of your organisation, in manageable chunks. Using a low-code development platform like OutSystems, IT teams are able to tackle projects quickly and easily, achieving fast development times while seamlessly integrating with existing technology. You can achieve all of this without the need for huge Dev-Ops teams. This not only saves money during the development stages, it also has notable long-term financial implications especially in terms of competitive advantage.
Successful digital transformation programs impact processes, products, services, and suppliers. To achieve dramatic results an overarching framework is needed where all the resources of an organisation are strategically directed around collaborative interactions. The customer is positioned at the heart of the organisation and is the focal point of the digital transformation initiative. These organisations understand that providing a great customer experience can maximise value and engagement.
It’s equally important to remember that customer expectations and preferences have drastically evolved; customers expect goods and services quickly, so delivery must be seamless, fast and easy. For many organisations, evolving customer expectations and preferences have forced them to rethink how they define and deliver their products and services to the end user. Meeting customer expectations, let alone exceeding them, is becoming progressively difficult in the digital economy. As a result, organisations are investing in digital initiatives to better attract, engage and retain customers. More efficient processes, more innovative services, better products, alternative delivery channels, more responsive engagement channels: these are no longer business aspirations, they are necessities. As I stated earlier, however, this needn’t cost the world.
Beyond budget I would argue that organisational culture is actually one of the most critical considerations for any digital transformation. Culture shapes the attitudes, beliefs and aspirations of individuals, teams and entire organisations. It influences the strategies that an organisation employs to better deliver goods and services to customers. It influences how an organisation structures itself, and the operational processes it employs. The culture therefore must be considered and all parts of the organisation must be onside with the transformation and understand why it is necessary and what it will achieve. If this doesn’t happen staff won’t be engaged and the transformation won’t be as successful as it could be.
As the New Year begins, there are new trends and technologies, such as big data, VR/AR and APIs, that will be at the core of digital transformation efforts in 2017. Digital transformation is no longer an option; it is now a necessity. The ability to build an agile and adaptable organisation that can rapidly change both its technology and its culture without breaking the bank will be core to thriving in a time of heightened business disruption.
Employees leaving organisations due to poor tech
They are known as the ‘head down generation’; so named for the way they move through life permanently glued to smartphones and tablets. The latest employees to enter the workforce are digitally savvy and hugely demanding when it comes to technology.
By Keith Tilley, EVP at Sungard Availability Services.
These employees are faced with new opportunities on a daily basis, and are not afraid to explore their options. Seven in ten young people plan to leave their job in the next five years, leaving organisations under huge pressure to deliver the best possible working environments in a bid to retain talent – from great benefits to cutting edge tech.
Keeping the best talent has always been a challenge, and for many organisations it’s only getting harder.
This isn’t idle speculation: our recent research [1] found that over a fifth of office workers admitted to leaving a job because they didn’t feel they had access to the latest digital technology. Of course, when you consider that 76 per cent of employees believe that having the right digital tools is crucial to their role, it’s easy to see how frustrations can escalate quickly. Businesses that fail to listen to employee demands and invest in the tools they need could soon find themselves rapidly losing headcount.
People understandably want to work for the most innovative companies: the ones that are making waves. Yet nearly 39 per cent of employees don’t think their employers are moving fast enough when it comes to digital transformation. In this age, not having the right technology in place can leave you trailing behind the competition, and it is increasingly enough to cost a business its top talent. Businesses simply cannot afford to lose talented employees, not when tech skills are becoming so valued and so scarce.
In the pressing war for talent, the simple answer lies in investing in digital tools. However, in established enterprise organisations, existing legacy IT can cause problems when integrating new technology. New applications may not be compatible with current systems, meaning a full overhaul of an IT system would be needed. Most organisations simply do not have the resources or time to overhaul these systems, so the long-term gains of nurturing a digital business are often put on the back-burner.
Another issue is the growing digital disconnect felt by employees. Although the majority of the workforce recognises the importance of digital technologies, nearly a third noted that it has made their job more stressful, while 30 per cent claimed these tools have made their role more difficult. For many organisations, this is a serious obstacle in securing digital investment: if the IT department is met with pushback from employees, it will be much harder to make a business case to those who control budgets.
That said, when used effectively, digital tools more than prove their worth. Over half of IT decision makers felt that digital success equated to both increased customer service and satisfaction, and 43 per cent felt it contributed to revenue growth. But the tools themselves are not enough; throwing money at the problem will only take you so far.
Organisations need to build the right environment and culture, as well as provide education for employees to help them use the tools to optimal effect. Having the right technical skills and receiving the right training were named as the two biggest challenges hindering digital transformation: 34 per cent of workers found there wasn’t enough training, whilst a further 23 per cent said the training they had was inadequate.
Organisations must consider continuous investment in training to make sure employees are competent and happy with the tech they have to work with day to day. With the IT skills gap getting bigger by the day, securing the future of your employees and business by investing in their skills has never been more integral.
Beyond this, assessing and developing an agile company culture is also a good way of ensuring a good return on the investment of digital tools. Early adopters of technology can help to increase a wider uptake if these people are harnessed to influence employees towards the cause. Once you begin to encourage employees to embrace changes to technology, future tech should be easier to incorporate; increasing adoption rates and impacting the business sooner rather than later.
It’s clear that digital transformation isn’t a straight path to success. It requires various stages of investment, and can feel too time and capital consuming for the effort, especially when processes are ticking over well in a business. But to remain competitive, things can’t just tick over. They need to exceed and be innovative. If you don’t do it, your competitors will; and they’ll likely poach your employees in the process.
1. About the research
Research was conducted by Vanson Bourne, on behalf of Sungard Availability Services, to investigate attitudes towards digital transformation in five countries across the world, focusing on expected benefits, challenges and business demands. Interviews were conducted in May 2016 across two groups of respondents: IT decision makers (ITDMs) and employees from the wider business. The research questioned respondents from businesses of over 500 employees in the US, UK and France, and respondents from businesses with a minimum of 250 employees in Ireland and Sweden. These businesses operated in a variety of sectors, including financial services, professional services and retail.