Risk assessment is the obvious theme that dominates at the present time. The imminent arrival of the GDPR legislation has required all organisations to undertake a strategic review of the way in which they obtain, process, store and retrieve their data – although the GDPR suggests that the data actually belongs to the customers! I have seen so many weird and wonderful emails and other communications from companies clearly keen to be seen to be doing something about what we shall call the business of ‘data logistics’, but it’s not immediately obvious that many of the senders of these missives understand what the GDPR does or doesn’t require. In the case of GDPR, the risk assessment process appears to have been carried out rather too diligently! Still, it will be fun to watch what happens after 25 May. No doubt Fleet Street will ‘engineer’ shocking examples of how some unfortunate blue chip corporation or other plays fast and loose with its customers’ data and there may even be some court cases to add to the general entertainment. Whether we as private citizens will notice any major changes – especially when it comes to the companies that ignore the requirements of the existing Data Protection Act – remains to be seen.
And then we have the unfortunate IT meltdown at TSB. Pity the banking sector, struggling to cope with the shame of bringing the world to its knees in 2008 (!), and now, increasingly frequently, beset with IT-related problems. Data breaches are almost impossible to prevent, but IT crashes are, almost invariably, totally avoidable. It’s all about risk assessment: making sure that every angle of a planned IT refresh/migration is looked at in detail and the knock-on impact of every single stage of the process is fully understood and allowed for. Disappointingly, the explanation for the kind of IT disaster that hits the headlines is rarely much more complicated than a variation of: “We didn’t realise that there would be a minor compatibility issue between our 30-year-old application and the new servers on which it now sits”. Really? What’s most depressing here is the fact that while almost every one of our personal possessions is refreshed on something rather shorter than a 30-year cycle (clothes, gadgets, cars, white goods, we even tend to ‘refresh’ our home within this time frame), it’s apparently accepted business practice to rely on a very old application because ‘it’s too difficult and expensive’ to re-write. And I guess freezing out customers for a few days is ‘simple and cheap’?!
Hey-ho. No one pretends that the IT world is easy to control, but it would be great to think that the many household names who, quite rightly, pride themselves on their innovation and their desire to embrace digital transformation don’t let themselves down through lazy logic and lazy risk assessment. There’s not much point in having the world’s greatest digital infrastructure if you allow a single point of failure to render it all but useless.
The Data Economy Report by Digital Realty reveals that the UK’s data is worth £73.3 billion to the UK economy, powered by the country’s data centre industry.
In the most comprehensive, first-of-its-kind look at the contribution that data provides to the UK economy, an independent report commissioned by Digital Realty reveals that the UK’s data economy is currently worth £73.3 billion annually. This figure reflects how data is making businesses’ existing services simpler, faster and more reliable, as well as enabling them to open up new business possibilities, such as new operating models, new revenue streams and new markets to enter.
The Data Economy Report also highlights the continued contribution data brings, with growth (7.3%) outstripping the wider economy (1.8%). This growth is powered by the UK’s data centre industry – the industry creates between £291 million and £320 million in value every year from each data centre, with the range even higher for new data centres: £397-£436 million in extra annual value from each new data centre.
Data centres create this value by providing and managing the infrastructure, connectivity and services that underpin success across the full range of economic activity. This includes not only IT and financial services, such as powering high-speed trading platforms and cloud storage services, but also other sectors such as agriculture, where data allows more precise use of pesticides, better adaptation to weather trends and automation such as drones to survey crops.
Investment in the data centre foundations which enable all this is essential for the future prosperity of British businesses and the economy.
“The UK’s goal is to lead the world in creating innovative businesses, and continued growth of its data economy is vital in achieving that goal,” said Jeremy Silver, CEO, Digital Catapult. “The Data Economy Report provides a clear road map for businesses, suggesting that by investing in the right foundations, technology innovations and partners, they will grab a significant dividend.”
The £6.2 billion added value that data centres create demonstrates the rewards to be won by businesses investing in their data infrastructure. In fact, business investment increases of 5%-11% in data infrastructure and new technologies like IoT sensors and big data analysis software could mean the difference between the UK data economy growing at its current rate to £94.6 billion by 2025, and a best-case scenario in which it grows to £101.6 billion by 2025, creating a £7 billion per year data dividend.
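As a back-of-the-envelope check on those projections, the implied compound annual growth rates can be worked out in a few lines. The sketch below is purely illustrative: it assumes the £73.3 billion baseline relates to 2018 and that the 2025 figures are reached over seven years; the report itself may define its baseline year differently.

```python
# Illustrative check of the compound annual growth rates (CAGR) implied by the
# Data Economy Report projections. Assumes a 2018 baseline of GBP 73.3bn and
# 2025 figures of GBP 94.6bn (current trajectory) and GBP 101.6bn (best case);
# the report may define its baseline year differently.

baseline = 73.3          # GBP billions, assumed 2018 value
years = 2025 - 2018      # assumed seven-year projection horizon

for label, target in [("current trajectory", 94.6), ("best case", 101.6)]:
    cagr = (target / baseline) ** (1 / years) - 1
    print(f"{label}: GBP {target}bn by 2025 implies ~{cagr:.1%} annual growth")

# The gap between the two scenarios is the 'data dividend' quoted above.
print(f"data dividend: GBP {101.6 - 94.6:.1f}bn per year")
```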
“Data infrastructure and services underpin the UK’s digital economy but its value is often underestimated. With The Data Economy Report, its worth to businesses and the wider economy is apparent,” said Rob Coupland, Managing Director EMEA, Digital Realty. “We urge British businesses to invest in the right tools, infrastructure and partners to get more value out of data and take a piece of a £101.6 billion national opportunity.”
Survey data shows transformed companies are 22x more likely to get new products and services to market ahead of the competition.
Dell EMC has published the results of new research conducted by Enterprise Strategy Group (ESG) into the benefits of IT Transformation. The research finds that IT Transformation can deliver bottom-line benefits that drive business differentiation, innovation and growth.
Today’s business landscape is rife with disruption, much of it driven by organisations using technology in new or innovative ways. In order to survive and thrive in today’s digital world, businesses are implementing new technologies, processes and skillsets to best address changing customer needs. A fundamental first step to this change is transforming IT, to help organisations bring products to market faster, remain competitive and drive innovation. According to ESG’s 2018 IT Transformation Maturity Study [ii] commissioned by Dell EMC and Intel:
81 percent of survey respondents agree if they do not embrace IT Transformation, their organisation will no longer be competitive in their markets, up from 71 percent in 2017.
88 percent of respondents say their organisation is under pressure to deliver new products and services at an increasing rate.
Transformed organisations are 22x as likely to be ahead of the competition when bringing new products and services to market.
Transformed organisations are 2.5x more likely to believe they are in a strong position to compete and succeed in their markets over the next few years.
Transformed companies are 18x more likely to make better and faster data-driven decisions than their competition and are 2x as likely to exceed their revenue goals.
“Data is the new competitive edge – yet it’s become highly distributed across the edge, the core data center and cloud. Organisations realise they have to move quickly to turn that data into business intelligence – requiring an end-to-end IT infrastructure that can manage, analyse, store and protect data everywhere it lives,” said Jeff Clarke, Vice Chairman, Products and Operations, Dell Technologies. “We’re in the business of better business outcomes, giving our customers the ability to make that end-to-end strategy a reality, driving disruptive innovation without the fear of being disrupted themselves.”
The ESG 2018 IT Transformation Maturity Study
The ESG 2018 IT Transformation Maturity Study follows the seminal study commissioned by Dell EMC, the ESG 2017 IT Transformation Maturity Study, and was designed to provide insight into the state of IT Transformation, the business benefits fully transformed companies experience, and the role critical technologies have in an IT Transformation. ESG employed a research-based, data-driven maturity model to identify different stages of IT Transformation progress and determine the degree to which global organisations have achieved those different stages, based on their responses to questions about their organisations’ adoption of modernised data center technologies, automated IT processes and transformed organisational dynamics.
“Companies today need to be agile to stay competitive and drive growth, and IT Transformation can be a major enabler of that,” said John McKnight, Vice President of Research, Enterprise Strategy Group. “It’s clear that IT Transformation is increasingly resonating with companies and that senior executives recognise how IT Transformation is pivotal to overall business strategy and competitiveness. While achieving transformation can be a major endeavour, our research shows ‘Transformed’ companies experience real business results, including being more likely to be ahead of the competition in bringing new products and services to market, making better, faster data-driven decisions than their competition, and exceeding their revenue goals.”
This year’s 4,000 participating organisations were segmented into the same IT Transformation maturity stages:
Stage 1 – Legacy (6 percent): Falls short on many – if not all – of the dimensions of IT Transformation in the ESG study.
Stage 2 – Emerging (45 percent): Showing progress in IT Transformation but having minimal deployment of modern data center technologies.
Stage 3 – Evolving (43 percent): Showing commitment to IT Transformation and having a moderate deployment of modern data center technologies and IT delivery methods.
Stage 4 – Transformed (6 percent): Furthest along in IT Transformation initiatives.
This year’s findings show organisations are progressing in IT maturity and generally believe transformation is a strategic imperative.
96 percent of respondents said they have Digital Transformation initiatives underway – either at the planning stage, at the beginning of implementation, in process, or mature.
Respondents whose organisations have achieved Transformed status are 16x more likely to have mature Digital Transformation projects underway versus Legacy companies (66 percent compared with 4 percent).
Transformed organisations were more than 2x as likely to have exceeded their revenue targets in the past year compared with Legacy organisations (94 percent compared to 44 percent).
84 percent of respondents with mature Digital Transformation initiatives underway said they were in a strong or very strong position to compete and succeed.
IT Transformation maturity can accelerate innovation, drive growth, increase IT efficiency and reduce cost. More specifically:
Transformed organisations are able to reallocate 17 percent more of their IT budget toward innovation. They complete 3x more IT projects ahead of schedule and are 10x more likely to deploy the majority of their applications ahead of schedule. Transformed organisations also report they complete 14 percent more IT projects under budget and spend 31 percent less on business-critical applications.
Making IT Transformation and Digital Transformation Real
Organisations like Texas-based Rio Grande Pacific understand IT Transformation benefits first-hand. The company has branched from a railroad holding company – moving and physically handling railcars – into a provider of technology services for other short line railroads and commuter operations. Rio Grande Pacific pursued IT Transformation to support its aggressive growth. By modernising its data center, the company has increased speed of services tenfold, experienced a 93 percent reduction in data center electricity use, significantly improved rack performance and provisioning time, and created a new business – the “RIOT” domain or Railway Internet of Things.
“As part of a 150-year-old industry, we recognise that the future of rail is tied to technology,” said Jason Brown, CIO, Rio Grande Pacific. “Railroads are in need of real-time information in order to make rapid decisions. Combining several systems into one single dashboard through our RIOT domain provides a holistic view to customers and helps keep the trains running on time. These new services, using the most modern technology, set Rio Grande Pacific apart from the competition and have led to strong growth.”
Bank Leumi, Israel's oldest and leading banking corporation, is also experiencing the benefits of IT Transformation, bringing to life its mobile-only bank, Pepper. The organisation set out to create a platform that provides customers with a better experience, engages them quicker and reaches a new generation of clients. In order to do this, the company needed a faster, more flexible infrastructure and began leveraging a hybrid cloud model and software-defined data center. This has allowed them to move code from development to production within hours, compared to weeks, establish new environments faster and do this at less cost. This has helped them to bring a new, innovative product to market.
“We are in the midst of an era of digital disruption, where customer demands and expectations are changing rapidly,” said Ilan Buganim, Chief Technology and Chief Data Officer, Bank Leumi. “We as a bank need to adapt ourselves and continue providing a superior customer experience. We saw the opportunity to do this with our new mobile-only bank, ‘Pepper.’ Moving to a hybrid cloud model and a software-defined data center environment provided the infrastructure needed for real-time banking, with the ability to run fast and to shortcut the time to deliver new functionalities - thus making this new customer experience possible.”
Worldwide information and communications technology (ICT) spending, including new technologies, is expected to exceed $5.6 trillion in 2021 with growth accelerating through the end of the forecast period as new categories account for a growing proportion of overall investments, according to the latest version of the Worldwide Black Book: 3rd Platform Edition from International Data Corporation (IDC).
By 2021, new 3rd Platform technologies, including Internet of Things (IoT) solutions, robots and drones, augmented reality and virtual reality (AR/VR) headsets, and 3D printers, will account for almost a quarter (23%) of total ICT spending. Overall, 3rd Platform investments, including the Four Pillars of cloud, mobile, big data & analytics, and social, will make up more than 70% of worldwide ICT spending.
The fastest-growing technology markets last year were AR/VR, cognitive and artificial intelligence (AI), 3D printing, and robotics. Meanwhile, IoT has already grown to account for 15% of ICT spending, including new operational technology (OT) software and services, which represent expanding opportunity and potential disruption for traditional software and services vendors. But while the adoption of 3rd Platform technologies is broadly positive across all countries, there are key geographic differences in terms of early adoption and short-term opportunities.
"Mature economies are leading the way in some 3rd Platform markets, thanks to more advanced cloud infrastructure and software innovation driving rapid adoption of solutions around big data and analytics, cognitive AI, and cloud-based software," said Stephen Minton, vice president, Customer Insights & Analysis at IDC. "It's a different story with technologies that are focused on industrial use cases in the manufacturing industry, such as IoT and robotics. Emerging leaders like China are driving much of the innovation in real-world deployments around these industry-focused technologies."
China accounted for 28% of worldwide IoT spending in 2017, and 29% of total robotics investments, compared to just 12% of traditional ICT spending categories (hardware, software, services and telecom). Japan and some other Asia/Pacific countries are also early adopters of robotics and IoT. 3D printing has seen strong early adoption in China and Germany. Cognitive AI investments are dominated by U.S. businesses, who are also leading the way in AR/VR prototypes. Emerging markets, such as India and Brazil, are major contributors to overall mobility spending, but are still playing catch up when it comes to cloud.
"While the traditional ICT market has become more homogenous in the last few years, as emerging markets caught up to mature economies and often leapfrogged legacy technologies in their adoption of mobile solutions, the 3rd Platform brings with it a new period of fragmentation," added Minton. "The U.S. is once again at the forefront of much new software innovation, while countries like China and Germany are driving industry-focused categories. Understanding these regional and country-level differences, including the drivers and inhibitors behind likely adoption curves for new technologies, will be key to ICT vendor strategy in the next 5-10 years."
The latest version of IDC's Worldwide Black Book: 3rd Platform Edition includes forecasts for 33 countries segmented by 44 technologies and 11 platforms. IDC defines the 3rd Platform as a leading driver and force of innovation consisting of the Four Pillars of cloud, mobility, big data & analytics, and social, plus the Innovation Accelerators of IoT, robotics, cognitive AI, AR/VR, 3D printing, and next-generation security.
Research indicates UK IT departments spend just 12 per cent of their time on innovation.
New research commissioned by managed services provider Claranet has revealed that a failure to automate IT processes and a heavy reliance on manual intervention is hindering the ability of organisations to embrace innovation. Despite a prediction by Gartner that 75 per cent of enterprises will have more than six diverse automation technologies within their IT management portfolios by 2019, almost half of organisations still have some way to go to put this into practice.
The research, conducted by Vanson Bourne and surveying 750 IT and Digital decision-makers from a range of organisations across Europe, is summarised in Claranet’s new Beyond Digital Transformation research report. The findings reveal that infrastructure configuration at nearly half (48 per cent) of UK businesses remains mostly or heavily manual. At the other end of the scale, only 11 per cent said that their infrastructure is highly automated. This is having a direct impact on the amount of time IT teams spend on maintenance and administration tasks.
According to the survey responses, IT teams spend over half (53 per cent) of their time on operational projects, general maintenance, responding to user problems, and unplanned work, with just 12 per cent of their time focused on new approaches that can lead to real business improvement and innovation.
For Michel Robert, Managing Director at Claranet UK, these responses underline the scale of the work that needs to be done to improve the efficiency of the IT department, as well as the overall impact it has on the wider business:
“Automation is a critical enabling technology that can give organisations the agility, speed, scalability, resilience and compliance they need to compete and succeed in the age of digital business.
“Unfortunately, it appears that many UK companies are struggling to adopt automation, from both an infrastructure and application perspective. This not only makes the day-to-day activity of the IT department less efficient, but also has a negative impact on the wider business, as new initiatives that are underpinned by technology cannot be leveraged to their full potential. At the same time, this lack of automation opens up the organisation to the threat of human error, and the financial and administrative impact this can have.”
In order to effectively address this low level of automation, Michel believes that businesses need to focus on making a series of organisation-wide changes to help reduce manual processes and facilitate greater efficiency. This should include taking steps to free up time for IT teams, and implementing processes to help join up various departments and responsibilities more effectively. To be a success, all of this needs steadfast support from leaders across the business.
“Crucial to making automation more commonplace is a commitment by leadership to making it a reality. This means that the C-suite – whether directly involved with IT or otherwise – need to be fully aware of its benefits and work together to create and implement plans to increase the pace of automation. This includes working towards freeing IT teams of the burden of the more basic maintenance and administration tasks, and then introducing comprehensive, well-planned processes that join up everything that goes on in the IT department. Partnering with an external service provider can be an effective way of bringing these changes to fruition.”
He concluded: “By taking an agile approach to increasing automation which includes organisation-wide support and openness to working with third-party providers, organisations can gain the means to accelerate the move to automation across both infrastructure and applications, without diverting time and resource from the IT department. If this approach is taken, addressing the automation challenge need not be a daunting task and can be addressed in a practical manner, and IT teams can begin to think more deeply about how they can drive the needs of the wider business.”
Jaguar Land Rover moves its data analysis files from a local data centre to Google Cloud Storage without losing compatibility with DataFinder, using the Storage Made Easy (SME) native drive.
Jaguar Land Rover has deployed Storage Made Easy’s Cloud Drive in the Google Cloud, allowing critical file system-oriented applications to access data stored securely on Google Object Storage.
Jaguar Land Rover (JLR) uses National Instruments’ DataFinder to index and analyse over a terabyte of time-series data created each day. The data is generated from test drive sensors and measurements, and ad hoc measurements from 400 dataloggers in the powertrain calibration and controls department, and is saved directly to Google Cloud Storage. The challenge has been how to take advantage of the cloud for increased durability and agility when the application can only index and manage data from file-based storage systems.
The Storage Made Easy Google Storage Drive allows applications, and end users, to connect to Google Cloud Storage as if it was a local read-write hierarchical file system. The drive allows Jaguar Land Rover to move their data management and analytics platform to Google Cloud with minimal migration costs.
The Cloud Drive is optimised for large data sets and deep folder hierarchies and does not obfuscate object metadata, allowing concurrent access by apps directly to Google Cloud Storage.
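For readers unfamiliar with the gap being bridged here, the short sketch below shows what direct, application-level access to Google Cloud Storage looks like using Google's standard Python client. This is not Storage Made Easy's own API; it simply illustrates the flat object namespace that the SME Cloud Drive presents to file-based applications such as DataFinder as an ordinary read-write file system. The bucket name and object keys are hypothetical.

```python
# A minimal sketch of direct object access to Google Cloud Storage using the
# official google-cloud-storage Python client. This is NOT Storage Made Easy's
# API; it only illustrates the flat object namespace that the SME Cloud Drive
# exposes to applications as a hierarchical read-write file system.
# Bucket name and object keys below are hypothetical.

from google.cloud import storage

client = storage.Client()                           # uses default GCP credentials
bucket = client.bucket("jlr-datalogger-archive")    # hypothetical bucket

# List objects under a "folder" - in object storage this is just a key prefix.
for blob in client.list_blobs(bucket, prefix="powertrain/2018-05-01/"):
    print(blob.name, blob.size)

# Fetch a single measurement file for local indexing/analysis (hypothetical key).
blob = bucket.blob("powertrain/2018-05-01/run-0001.tdm")
blob.download_to_filename("run-0001.tdm")
```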
Maxime Lecuona, Power Train Calibration and Controls Data Processing team leader at Jaguar Land Rover, said: “Moving our infrastructure to Google cloud is a critical project for us. We are planning on leveraging this new platform to gain in flexibility and ultimately provide our services to a wider portion of the company. Storage Made Easy Google storage drive is allowing us to transfer our current infrastructure to this new environment with minimal change to the existing code, and at a very reasonable price.”
Jim Liddle, Storage Made Easy CEO, said: “Cloud technology is undoubtedly a great way for companies to free up IT to concentrate on innovation, but there is still a need to bridge the gap between remote cloud storage and critical business applications, and this is what the Storage Made Easy solutions provide.”
Improves scalability and stability, and increases regulatory-compliant access to data.
Spirit Healthcare, a supplier of innovative products and services that aims to make the nation healthier and happier by delivering real value in healthcare, has welcomed Netmetix as its expert cloud provider to migrate the business to the cloud.
The main drivers to migrate the business to the cloud included increased stability over traditional on-site servers and mitigation of data loss risk. Another driver was scalability. The company had been expanding at a rapid rate over the last three years and required a solution that could grow alongside the business. Introducing more equipment on site was not a practical option, so gaining scalability without the added physical apparatus was key. The final driver for migration was impending regulatory compliance; employees required ease of access to information, including Intranet-based files, in one hosted central location, whether working in the head office or remotely. Due to the sensitive nature of some of the information, the data needed to be hosted in a secure setting and managed in compliance with the GDPR.
In order to address the issues raised, Spirit evaluated several options and concluded that moving to a cloud environment made the most sense, with firewalls, enterprise-grade security and file accessibility being key. Netmetix was able to offer exactly what they required.
After speaking with various other cloud providers, it was a recommendation from other companies that have worked with Netmetix that sealed the deal. The level of service provided to its other customers was a big deciding factor for the Spirit team.
Kirk Harland, Head of IT at Spirit Healthcare stated, “We have clear ambitions for the business over the next 12 months and having a cloud hosted system is key to making those ambitions a reality. From supporting process improvements that enable our trusted employees to adhere to new data regulations such as GDPR, to solving the issue of multiple data sets across various databases and platforms, with Netmetix’s cloud implementation underway, it won’t be long until all of our employees working remotely are able to securely access any file required for their day to day work activity, rather than needing to be onsite to gain this access.”
Managing Director at Netmetix, Paul Blore adds, “We’re really excited to be working with Spirit Healthcare and we’re thrilled they approached us after receiving recommendations from some of our other clients. We will work with Spirit to deliver a future-proof, scalable infrastructure that can support their business goals and ambitions.”
Ensono chosen to help align the organisation’s business and IT strategies.
City & Guilds Group, the leading provider of global skills development, has selected Ensono to lead the migration of its critical on-premise applications to a managed Microsoft Azure environment. The migration closely follows Microsoft’s announcement in November 2017 detailing how Microsoft Azure will allow customers to run SAP S/4HANA in a secure, managed cloud. This mission critical replatforming exercise will enable City & Guilds Group’s IT strategy to facilitate the broader business plan of diversification through acquisition. The project future-proofs the skills organisation, ensuring it can integrate additional acquisitions and services for enterprise level qualifications, e-learning and executive coaching.
City & Guilds Group have appointed Ensono to migrate applications including SAP, Sitecore, E-volve, Smartscreen and Biztalk, increasing business agility and driving cost savings of well over £1 million a year – helping offset the loss of public sector funding. Ensono’s strategic purchase of Inframon, which has over a decade of Microsoft cloud migration and management expertise and is an award winning Microsoft partner, was crucial in City & Guilds Group’s decision.
The SAP on Azure migration will help City & Guilds Group deliver a better service to its customers and provide greater flexibility, enabling the organisation to scale up and down as demand requires.
Alan Crawford, CIO, City & Guilds Group, said: “This is City & Guilds Group’s first migration of mission-critical applications to a managed public cloud environment. The replatforming will increase our ability to scale up and down, depending on business needs and as we add new services, courses and companies to our portfolio. In addition to the cost-savings from becoming a cloud-first organisation, the replatforming will also drive innovation across the business. Innovation is at the heart of what we do and this project, with Ensono and Microsoft working as true partners, means City & Guilds Group will continue on the path to becoming the world’s leading skills development and e-learning organisation, with a passion for constant innovation.”

What will it take to get investment?
Asks Tony Lock, Distinguished Analyst and Director of Engagement, Freeform Dynamics Ltd.
There is a quote that almost everyone working in a management role will recognise: “if you can’t measure it, you can’t manage it”. Naturally enough, IT is no exception. After all, if you haven’t got good visibility of how the data centre is running, how can you ensure it keeps functioning effectively, and efficiently? This makes the title of a recent research note by Freeform Dynamics, The monitoring capability gap (link: https://freeformdynamics.com/core-infrastructure-services/monitoring-capability-gap/), something of a concern for data centre managers.
Starting with the basics, the report makes it very clear that the monitoring tools used by respondents – and maybe you – to keep systems and services operational have plenty of scope to be improved. (Figure 1).
The first thing that leaps out from the results is that half of respondents say the monitoring tools they use do not provide all the functionality they need, twice as many as those who say that they do. Even if everyone always wants more (and we do), the other responses reveal other issues that need attention. For example, significant numbers of those taking part report that their monitoring capabilities do not allow the speedy identification of the root cause of a problem.
Even though a majority of those taking part in the survey agree or strongly agree that their tools provide good end to end visibility, almost a quarter disagree or strongly disagree. Indeed, almost one in three say their tools do not allow them to proactively manage systems, a matter of concern when data centre complexity is increasing, and as line of business users become ever more intolerant of IT problems or slow-downs, since so many business functions can no longer operate without IT. Add Cloud and other external resources into the mix and it is easy to see why many IT teams sometimes have to battle with monitoring and diagnostics.
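To illustrate the reactive/proactive distinction the survey draws, here is a deliberately small sketch of a proactive check: rather than alerting only once a threshold has been breached, it projects the recent trend forward and warns while there is still time to act. The metric, thresholds and sampling cadence are invented for the example and are not drawn from the Freeform Dynamics research.

```python
# A simplified illustration of proactive monitoring: instead of alerting only
# once a threshold is breached, project the recent trend forward and warn while
# there is still time to act. Metric, thresholds and sampling cadence are
# invented for the example.

from collections import deque

CAPACITY_THRESHOLD = 90.0   # e.g. percent utilisation of a storage pool
LOOKAHEAD_SAMPLES = 12      # how many future samples to project the trend over

def check(samples: deque) -> str:
    latest = samples[-1]
    if latest >= CAPACITY_THRESHOLD:
        return "ALERT: threshold already breached"
    if len(samples) >= 2:
        # Crude linear projection from the oldest to the newest sample.
        slope = (samples[-1] - samples[0]) / (len(samples) - 1)
        projected = latest + slope * LOOKAHEAD_SAMPLES
        if projected >= CAPACITY_THRESHOLD:
            return f"WARNING: current trend reaches ~{projected:.0f}% shortly"
    return "OK"

# Utilisation creeping upwards: a purely reactive tool stays silent here,
# while the proactive check flags the trend before users notice a problem.
history = deque([78.0, 79.5, 81.0, 82.4, 84.1], maxlen=60)
print(check(history))
```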
Data centre and infrastructure monitoring have traditionally been at the heart of IT operations, and the results show this is very much still the case. (Figure 2).
Business users have obviously benefited greatly as computer systems have become ever more reliable, and as data centre managers have built systems with availability a primary concern. But while resilience was once the preserve of just a few systems, the portfolio of applications and services regarded as critical has now expanded dramatically. This explains why, despite the advances in core IT technologies, monitoring the health of systems and components still consumes considerable time and resources. When combined with the fact that critical business operations rely on IT systems not just to be available but also on them delivering excellent performance, it is clear that the need for accurate, sophisticated monitoring is not going away any time soon (Figure 3).
The results highlight the widespread distribution of the challenges faced by data centre professionals today. It is very clear that in every area – even in the fundamentals of problem identification and system visibility – there are usually more survey respondents finding things difficult or very challenging than there are who report no challenges.
What is less obvious to anyone outside of IT is just how important monitoring tools are in these matters. This may well reflect that unless you operate monitoring tools, day in, day out, it is very easy to overlook just how significant they are. It is even true that some IT professionals can take things for granted, especially if they have built complex scripts and daemons to help with some routine functions.
The IT landscape is changing dramatically and rapidly: innovative technologies are arriving in the data centre every year, with cloud systems becoming important to an expanding range of business services. And then there are new architectures for developing applications, such as containers and serverless, that are beginning to hit the mainstream. Taken together, these make the results at the bottom of the chart perhaps the most alarming. Despite the changes mentioned above, over half of those taking part report it is at least quite difficult, or even very challenging, to get agreement to upgrade monitoring capabilities to cater for the new IT landscape. Even more of them report that it is hard to get budgets to fix things in a way that is sustainable in the long term.
But can anything trigger investment for monitoring capabilities (Figure 4)?
Perhaps unsurprisingly the events that are likely to have the biggest influence on triggering investment in monitoring tools are anything but desirable. Even in a world where customer trust has never been so important, and the attention paid to data security is gathering momentum, the event most likely to get the business ready to spend money is a significant security breach, with significant systems outage only slightly behind.
It is interesting to note that far fewer respondents stated that a big security scare in the press would have the same impact. With GDPR a reality from May 25th, 2018 and the potential for penalties of up to €20 million or 4% of worldwide revenues, I suspect that once a large fine has been handed out the first time, the number of organisations investing in monitoring technologies may well increase dramatically.
The Bottom Line
Every manager and IT professional working in a data centre understands just how important IT is to keep the business running. This makes monitoring technologies more relevant and essential than they have ever been. The complexity of IT systems, the critical functions they support and the stress under which they must operate are all increasing the need to ensure systems run effectively.
At the same time, financial demands dictate they be run efficiently – underutilised IT resources are becoming a luxury of the past. Monitoring is essential, but it has been underappreciated. Yet if you can get it right, things are more likely to work well; conversely, if you neglect it, the negative impact on the business will be noticed.
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, The University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey. The programme is nearly finalised (full details via the website link at the end of the article), with some top class speakers and chairpersons lined up to deliver what is probably 2018’s best opportunity to get up to speed with what’s heading to a data centre near you in the very near future!
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
We’re delighted to announce that Adam Beaumont, Visiting Professor of Cybersecurity at the University of Leeds, and CEO of aql, will be delivering the Simon Campbell-Whyte Memorial Lecture. Has IT security ever been so topical? What a great opportunity to hear one of the industry’s leading cybersecurity experts give his thoughts on the issues surrounding cybersecurity in and around the data centre.
We’re equally delighted to reveal that key personnel from Equinix, including MD Russell Poole, will be delivering the Hybrid Data Centre keynote address at both the Manchester and Surrey events. If Adam knows about cybersecurity, it’s fair to say that Equinix are no strangers to the data centre ecosystem, where the hybrid approach is gaining traction in so many different ways.
Completing the keynote line-up will be John Laban, European Representative of the Open Compute Project Foundation.
Alongside the keynote presentations, the one-day DCT events will include:
A DATA strand that features two workshops – one on Digital Business, chaired by Prof Ian Bitterlin of Critical Facilities, and one on Digital Skills, chaired by Steve Bowes Phipps of PTS Consulting.
Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not as yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed, urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest, high profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed, and, hopefully, come up with some helpful solutions.
The CENTRE strand features two workshops – one on Energy, chaired by James Kirkwood of Ekkosense, and one on Hybrid DC, chaired by Mark Seymour of Future Facilities.
Energy supply and cost remains a major part of the data centre management piece, and this track will look at the technology innovations that are impacting on the supply and use of energy within the data centre. Fewer and fewer organisations have a pure-play in-house data centre real estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So, in-house and third party data centre facilities, combined with a mixture of centralised, regional and very local sites, makes for a very new and challenging data centre landscape. As for connectivity – feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast moving world of networks, telecoms and the like.
The TRANSFORMATION strand features workshops on Automation (AI/IoT), chaired by Vanessa Moffat of Agile Momentum, and The Connected World, together with a keynote on Open Compute from John Laban, the European representative of the Open Compute Project Foundation.
Automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day to day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70 minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists - specialists and protagonists - in the subject. The workshops will ensure that delegates not only earn valuable CPD accreditation points but also have an open forum to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
Vote Now for DCS Awards 2018 – online voting closes 11 May.
With thousands of votes already cast for this year’s DCS Awards, the competition is hotting up. Online voting stays open until 17.30 on Friday 11 May so make sure you don’t miss out on the opportunity to express your opinion on the companies, products and individuals that you believe deserve recognition as being the best in their field.
Voted for by the readership of the Digitalisation World portfolio of titles, the Data Centre Solutions (DCS) Awards reward the products, projects and solutions as well as honour companies, teams and individuals operating in the data centre arena.
Winners of this year’s 21 categories will be announced at a gala ceremony taking place at London’s Grange St Paul’s Hotel on 24 May.
All voting takes place online and voting rules apply. Make sure you place your votes before voting closes on 11 May by visiting: http://www.dcsawards.com/voting.php
The full 2018 shortlist is below:
Data Centre Energy Efficiency Project of the Year
Romonet supporting Fujitsu UK & Ireland
Riello UPS supporting the Rosebery Group
EcoRacks Data Centre supported by Asperitas
London One Data Centre by Kao Data
New Design/Build Data Centre Project of the Year
Inzai 2 by Colt Data Centre Services
University of Exeter supported by Keysource
Data Hub, Biel supported by Schneider Electric
Kao Data London One supported by JCA Engineering
Data Centre Consolidation/Upgrade/Refresh Project of the Year
EcoRacks supported by Asperitas
Willis Towers Watson supported by Keysource
Generator Control Panel Replacement by CBRE DC Solutions
Consolidation and expansion by Africa Data Centres
New data halls in Corsham and Farnborough by CBRE, ARK and Corning Optical Communications
Data Centre Fire Protection by Bryland Fire Protection Ltd
Data Centre Power Product of the Year
Liebert® APM 30-600 kW by Vertiv
Delta 500kVA UPS by Eltek Power
Integrated Terminal Lug Temperature Sensors by Starline UE
Micro Data Center by Optimum Data Cooling
Data Centre PDU Product of the Year
Intelligent Power Distribution Unit (iPDU) Family by Excel Networking Solutions
SmartZone G5 Intelligent PDUs by Panduit Europe
High Density Outlet Technology (HDOT) by Server Technology, a brand of Legrand
Data Centre Cooling Product of the Year
Liebert® PCW High Chilled Water Delta T by Vertiv
En-10 DX by Optimum Data Cooling
1U immersion cooled server by Iceotope Technologies
Oasis Indirect Evaporative Cooler by Munters
Data Centre Facilities Automation and Management Product of the Year
Nlyte 9.0 Data Center Infrastructure Management (DCIM) Solution by Nlyte Software
Micro Data Center by Optimum Data Cooling
Diris Digiware Power Metering and Monitoring System by Socomec
Data Centre Safety, Security & Fire Suppression Product of the Year
303 ECO SSF cabinet by Dataracks
IG55 Extinguishing System by Bryland Fire Protection Ltd
Data Centre Cabling & Connectivity Product of the Year
4K HDMI Single Display KVM over IP Extender by ATEN Technology
EDGE™ Mesh Modules by Corning Optical Communications
LABACUS INNOVATOR SOFTWARE & Fox-in-a-Box by Silver Fox Ltd
Data Centre ICT Storage Product of the Year
Anti-Ransomware Data Protection by Asigra
GridBank's Enterprise Data Management Platform by Tarmin
Computational Storage Solutions by Scaleflux Computational Storage
JovianDSS by Open E
Cohesity DataPlatform by Cohesity
StorPool Storage by StorPool
Data Centre ICT Security Product of the Year
Automated Endpoint Security and Incident Response by Secdo
Cloud Protection Manager by N2W Software
SecuStack by SecuNet Security Networks
Data Centre ICT Management Product of the Year
Tarmin GridBank by Tarmin
VirtualWisdom 5.4 by Virtual Instruments
Ipswitch WhatsUp Gold® 2017 Plus by Ipswitch
HC3 platform by Scale Computing
EcoStruxure IT by Schneider Electric
ParkView by Park Place Technologies
Data Centre Cabinets/Racks Product of the Year
Environ CL Series by Excel Networking Solutions
Knürr DCD Rear Door Heat Exchanger by Vertiv Integrated Systems GmbH
303 ECO SSF cabinet by Dataracks
HyperPod Rack Ready System by Schneider Electric
Data Centre ICT Networking Product of the Year
PORTrockIT by Bridgeworks
Unity EdgeConnect by Silver Peak
Secure Cloud-Native Networking by Meta Networks
Data Centre Hosting/co-location Supplier of the Year
Workspace Technology
Colt Data Centre Services
Volta Data Centres
UKFast.Net Limited
Rack Centre
LuxConnect
Green Mountain
Data Centre Cloud Vendor of the Year
Zerto
N2W Software
PhoenixNAP
Zadara
Claranet
Asigra
Data Centre Facilities Vendor of the Year
Nlyte Software
Dataracks
Asperitas
Excellence in Data Centre Services Award
Rack Centre
Park Place Technologies
4D Data Centres Ltd
Data Centre Energy Efficiency Initiative of the Year
EU Horizon 2020 EURECA Project
Green Mountain
DAMAC
Data Centre Innovation of the Year
Cloud Protection Manager by N2W Software
ParkView by Park Place Technologies
Green Peak – Dashboard by Green Mountain
HyperPod Rack Ready System by Schneider Electric
Data Centre Individual of the Year
Anuraag Saxena, Ekkosense
Konkorija Trifonova, CBRE
Ole Sten Volland, Green Mountain
Dan Kwach, East Africa Data Centre
Although GDPR is probably the best-known example, a wave of regulation and compliance legislation is being enacted across the world, and particularly in Europe, as regulators get to grips with the modern data economy. This can mean conflicting requirements in some territories, or confusing messages for customers and organisations.
The trend in previous years towards a reduction in regulation seems to have ground to a halt. And while the tone and mood of the new rules such as GDPR is seen to be persuasive and “nudging” by those at the senior levels of policy-making, their actual implementation could well see a “big stick approach” by local and national law-makers.
GDPR itself already promises swingeing fines and there is every chance that prominent and perhaps not-so-prominent companies and organisations may be made examples of with some headline-grabbing penalties. Suppliers of Managed Services will have many new responsibilities and may well find themselves in the firing line as the legal implications take their courses.
This is the main point behind the latest speaker announcement for the European Managed Services & Hosting Summit 2018, to be held in Amsterdam on May 29, 2018. A full session will be devoted to the issue of working across the rising tide of compliance in Europe. With all indications that many MSPs are looking to expand by partnering or acquiring operations in other geographies, this will be an essential item for discussion at senior levels. Any senior figure in a managed services company will need to be familiar with both the processes and implications of the new levels and nature of compliance requirements in any territory they are working in, and beyond.
GDPR is not the only game in town, says Ieva Andersone, a senior associate from Sorainen, a major legal firm in the IT industry, based in the Baltics, a region with a high degree of interest in pan-European business relations and one of the fastest growing regions in IT generally and in managed services adoption. Parts of Europe, even parts of countries, will have their own local rules or GDPR interpretations, she will argue, which managed services companies will need to be aware of, and which may well apply to IT projects with connections outside their core territory. An experienced, Cambridge-educated lawyer working in multiple cultures and markets in Europe, she will use her presentation to discuss the nature of the regulations, their intentions and direction, and how they may affect suppliers of services, including managed services, in unexpected ways.
With plenty of discussion points on how to keep the MSP business on the right side of the law, and with guidance as to strategies to adopt, the annual Managed Services and Hosting Summit (MSHS) on May 29 in Amsterdam always aims to use experts to advise European MSPs on these major issues. The first keynote presentation, from Gartner, will address the key issue of how MSPs can differentiate themselves in an increasingly competitive market.
This MSHS event offers multiple ways to get answers: from plenary-style presentations from experts in the field to demonstrations; from more detailed technical pitches to wide-ranging round-table discussions with questions from the floor. There is no excuse not to come away from this key event with ideas for a strategy to keep the business out of trouble.
One of the most valuable parts of the day, previous attendees have said, is the ability to discuss issues with others in similar situations, and attendees are all hoping to learn from direct experience.
In summary, this is a management-level event, held in English, designed to help MSP and channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships, while keeping up with the latest compliance and legal requirements in multiple markets.
Registration is free-of-charge for qualifying delegates - i.e. director/senior management level representatives of Managed Service Providers, Systems Integrators, Solution VARs and channels. More details: http://www.mshsummit.com/amsterdam/register.php
In this month’s DCA journal we will be focusing on data centre security, both physical and cyber. I’d like to address the issue of cyber security first.
By Steve Hone, CEO & Founder, The DCA
I spotted a billboard on the tube the other day that claimed you were 40% more likely to be a victim of cyber crime than to have your house robbed. This claim was backed up by the Office for National Statistics who have seen a steady rise in reported cybercrime year on year, with more than 6m incidents of cybercrime being reported each year.
This is far more than previously predicted and enough to nearly double the headline crime rate – cybercrime now equates to more than 40% of all crimes committed in England and Wales.
Data centres represent a very attractive target for criminals. Even if someone manages to breach the perimeter defences, the data halls should be protected by a host of biometric security systems, man-traps and other security protocols, meaning physical access to the servers is in no way guaranteed. However, all of this assumes that the criminal turns up with a crowbar, a swag bag and a balaclava. What happens if the attacker is not planning on abseiling across the rooftops and dropping in through the air duct? What if he or she can break into your facility, steal your data or plant a virus, or launch a DDoS attack, all from the comfort of an armchair and without you even knowing about it?
According to a Cyberthreat report, business-focused cyber-attacks – including ones specifically targeting data centres – have increased by 144% in the past four years, and data centres have become the number one target of cyber criminals, hacktivists and state-sponsored attackers. Although physical security should remain a top priority for data centre operators, equal consideration needs to be given to the increasing threat posed by cyber-attacks, with the same level of due care and attention.
Although I personally do not profess to be an expert when it comes to cyber security, the good news is that, as the saying goes, “I know someone who is”. In fact, the DCA has access to lots of members who could help, so if you would like to speak to a specialist, the Trade Association can facilitate this for you.
That leads us nicely on to data centre physical security.
The aim of physical data centre security is to keep out the people you don’t want in your building or accessing your data. Simply put, if you are not on the list, you can’t come in. Assuming someone’s name is on the list, it is equally important, once they are inside, to continue to keep an eye on them. If you discover that someone – be it a customer, contractor or even a staff member – is guilty or suspected of committing a security breach, identify them as soon as possible: containment of the situation is paramount.
Through the Data Centre Trade Association, you have access to a wealth of specialists and experts, and I would especially like to thank Datum, Southco, Chatsworth and EMKA, who have all submitted articles for this month’s edition of the DCA journal.
When looking at physical security for a new or existing data centre, it’s sensible to first take a few steps back and perform a risk assessment of the actual data and equipment that the facility will hold. Fully understanding the risks and potential breaches that could occur is essential, as is establishing the likelihood of such a breach taking place and the impact it could have on your business (be that reputational or financial). This type of drains-up assessment should be your first port of call when defining your physical security requirements and determining how far you need to go.
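For those who like to see the mechanics, the sketch below shows the simple likelihood-times-impact scoring that sits behind most such risk assessments. The risks, scales and scores are invented purely to illustrate the approach; a real assessment would use your own threat register.

```python
# A minimal sketch of the likelihood-times-impact scoring behind most risk
# assessments. The risks and 1-5 scores below are invented purely to
# illustrate the approach; substitute your own threat register.

risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("Tailgating through a propped-open fire exit", 3, 4),
    ("Stolen or cloned contractor badge",           2, 4),
    ("Vehicle ramming the compound gate",           1, 5),
    ("Unescorted visitor in the data hall",         2, 3),
]

# Rank by simple risk score; the highest scores drive which physical
# controls (and budget) you put in place first.
for desc, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{desc:45s} likelihood={likelihood} impact={impact} score={likelihood * impact}")
```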
I have often heard it said that “security of any facility needs to be like an onion” – made up of multiple layers of security which emanate out from whatever it is you are trying to protect.
When we are talking in terms of a physical data centre, what typically makes up the layers of the data centre onion?
Keep a low profile: Especially in a populated area, you don’t want to be advertising to everyone that you are running a data centre.
Avoid windows: There shouldn’t be windows directly onto the data floor.
Fencing: Granted, fencing is not always possible in a city location, which is where the ‘avoid windows’ advice kicks in (see above); however, if you are going to have fencing, make sure it is not just a token gesture. There are plenty of guidelines when it comes to security fencing, and the Trade Association can point you in the right direction in the form of fellow members who can offer you guidance as required.
Limit entry points: Access to the building needs to be controlled. Think not just of the main entrance, fire exits and loading bays, but also of roof access points.
Fire Exits: When it comes to those fire exits, make sure they are exactly that – “exit only” (and install alarms and monitoring on them, as they are often frequented by smokers popping out for a crafty one, who then politely hold the door open for a stranger to wander in). I’m not saying this could happen, by the way – I know it does happen.
Hinges on the inside: Exposed exterior hinges make it far too easy for someone to pop the pins out to gain access, so make sure hinges sit on the inside. It sounds basic, but this is a common mistake I often see with repurposed buildings.
Tailgating: Following someone through a door before it closes, known as ‘tailgating’, is one of the main ways that an unauthorised visitor will gain access to your facility. By implementing man-traps that only allow one person through at a time, you force visitors to be identified before allowing access.
Smile, you are on camera :0) You can never have enough cameras – CCTV cameras are a very effective deterrent for an opportunist, as is proximity flood lighting. All footage should be stored digitally and archived offsite – and don’t forget that the new GDPR rules apply to this footage too.
Access control: You need granular control over which visitors can access which parts of your facility. The easiest way to do this is through proximity access card readers, biometrics or retinal scans on the doors (a minimal sketch of the kind of zone-based check involved follows this list).
Pre-approval and personal identification: Many data centres operate on a pre-approval system whereby you advise the DC in advance that someone will be attending site, and this person will normally need to show some form of photo ID (driving licence, ID card or passport). The cast-iron rule is “no ID = no entry”, irrespective of how much they protest.
Compound entry control: Access to the facility compound, be that pedestrian or vehicle via a parking area, needs to be strictly controlled, either with gated or turnstile entry that can be opened remotely by reception/security once the person/driver has been identified. Ram raiders don’t just target retail stores; metal bollards or large boulders can just as effectively act as a protective exterior layer to prevent a vehicle itself being used as a 15th century battering ram.
Processes and training: This might sound out of place in a list of essentials, but having all the security layers in the world will be worthless unless you have the processes and procedures documented and your staff vetted and trained to prevent security breaches from happening - and this needs to include any third-party contract staff you employ.
You can never test enough: It’s only by regular testing and auditing of your security systems that any gaps will be identified before someone else can exploit them.
At the end of the day, both cyber and physical security considerations come down to managing risk, so make sure you carry out regular risk assessments, try to think of data centre security like an onion, and remember: not all burglars wear balaclavas.
Thank you again to all the contributors to this month’s edition. Next month’s journal theme focuses on Energy Efficiency, and by then I will also be able to report back on Data Centres North in Manchester. The deadline for article submissions is 15th May.
The DCA
W: www.dca-global.org
T: 0845 873 4587
E: info@dca-global.org
Simon Williamson, Business Development Manager, Electronic Access Solutions, EMEIA (Southco)
In today’s world, data is fast becoming the new global currency and as data volumes continue to grow at an exponential rate, the issue of data security continues to cause concern within the industry. While there is widespread awareness of the many digital attacks that compromise data, less is said about the physical threats to information stored in data centres.
Despite extensive measures in place to secure the perimeter of a data centre, often the biggest threat to security can come from within. It is not uncommon for individuals entering these facilities to cause accidental security breaches. In fact, IBM Research states that 45 percent of breaches occur as a result of unauthorized access, costing over $400B annually.
This issue is particularly prevalent for colocation data centre providers, who host data cabinets for multiple clients. Each server cabinet should be secured at the rack level with access only granted to authorized personnel. Traditionally, access to individual racks has been protected by key-based systems with manual access management. In some instances, data centre managers have turned to a more advanced coded key system, but even this approach provides little in the way of security—and no record of who has accessed the data centre cabinets.
Electronic Access Solutions Enhance Physical Security
To alleviate the problem of unauthorized access and concerns surrounding data security, traditional security systems are quickly being replaced by intelligent electronic access solutions. Above all, these solutions provide a comprehensive locking facility while offering fully embedded monitoring and tracking capabilities. They are a vital element of a fully integrated access-control system, bringing reliable access management to the individual rack. The system also enables the creation of individual access credentials for different parts of the rack, all while eliminating the need for cages, thereby saving costs.
Simplified Installation
Any physical-security upgrade in the data centre has its issues, of course. Uninstalling existing security measures in favour of new ones costs both time and money, which is why data centre owners are turning toward more intelligent security systems such as electronic-locking swinghandles, which can be integrated into new and existing secure server-rack facilities. They employ existing lock panel cut-outs, eliminating the need for drilling and cutting. This approach allows for lock standardisation in the data centre, saving considerable time (and therefore cost) – something that holds real value given the pressing demand for data centre services.
Credential Management
Physical access to the rack can be obtained using an RFID card, PIN code combination, Bluetooth®, Near Field Communication (NFC) or biometric identification. The addition of a manual override key lock allows emergency access to the server cabinet. Even in the event that security needs to be overridden, an electronic access solution can still maintain the audit trail, monitoring access times and rack activity. Solutions such as this have been designed to lead protection efforts against physical security breaches in data centres all over the world.
By enhancing security at the individual rack level, providers can restrict rack access to only those with authority, which is especially relevant in colocated data centres, where data cabinets are under threat from both accidental and malicious breaches. Installing the right electronic access solution can help to eliminate costly breaches in a short time-frame and maximize colocation security.
For more information about Southco’s Data Centre security solutions visit www.racklevelsecurity.com.
By Luca Rozzoni, European Business Development Manager, Chatsworth Products (CPI)
Whilst security has always been a key consideration for the data centre industry, the upcoming EU General Data Protection Regulation (GDPR) – a strict set of regulations set to protect data privacy – means that data protection and security policies have taken on a new level of priority.
Regulatory and Compliance Requirements
The GDPR requirements will come into force on 25th May 2018 and affect organisations worldwide. Whilst EU countries must comply, any organisation collecting or processing data for individuals within the EU should also be developing their compliance strategy. The UK Government has indicated that, even taking Brexit into account, it will implement an equivalent set of legislation and UK organisations must review their security practices in regards to the protection of personal data and consider their own routes to compliance.
So How Should Data Centres Prepare?
Whilst organisations are expected to use their own judgment in regards to making sure they have taken the ‘appropriate technical and organisational measures’ to ensure compliance, Regulation (EU) 2016/679 stresses the need for secure IT networks, and provides an example of “preventing unauthorised access to electronic communications networks and malicious code distribution and stopping ‘denial of service’ attacks and damage to computer and electronic communication systems.” Put simply, whilst access control may seem an obvious part of any security policy, data centres must be able to demonstrate that they have the appropriate access policies in place.
Cabinet-level security has always been an important part of data centres’ data protection and security policies. Strict regulatory compliance requirements, such as HIPAA in health care and PCI DSS in online retail, demand audit logs of every access attempt as part of physical access control to help ensure data privacy and security. Automatic logging of cabinet access is also important, given that a large portion of attacks within these industries (58 percent in financial services and 71 percent in health care, to be precise) are carried out by insiders, whether advertently or inadvertently, according to a 2017 report by IBM X-Force.
This makes sole reliance on mechanical keys ineffective at best and, at worst, risks privacy-related lawsuits.
Electronic access control (EAC) solutions are essential in addressing user access management issues within the data centre and can be an extremely cost-effective method of delivering intelligent security and dual-factor authentication to the cabinet.
Key features to look out for when selecting an EAC solution include:
Dual-factor Authentication
Dual-factor authentication enables data security to be taken to the next level. One of the most secure forms of physical access verification is biometric authentication. However, many organisations have dismissed this in the past due to cost, as it typically requires additional readers to be installed to every cabinet or facility door.
A cost-effective and secure dual-factor authentication solution is a fingerprint-activated card that is able to work with existing EAC or other card-activated locks. A card that is compatible with readers for 125 kHz, HID iCLASS and MIFARE® proximity cards and can work with existing campus security systems eliminates the need for expensive deployments and means data centre employees only need to carry a single card.
Remote Management and Reporting
Using a simple, user-friendly web interface to remotely manage the networked EAC locks allows the user to remotely monitor, manage and authorise each cabinet access attempt. Crucially, using this type of intuitive interface provides an audit trail for regulatory compliance through log reports. The logging report can be easily exported and emailed to the administrator.
Managing the networked EAC locks through the web interface also reduces the need for wiring the electronic access systems to expensive security panels which are usually managed through Building Management Systems.
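To make the reporting idea above concrete, here is a minimal sketch, in Python, of how an administrator-facing export might work: access events are written out as CSV and emailed for the compliance record. The event fields, addresses and mail relay are illustrative assumptions, not details of any particular vendor’s product.

# Illustrative sketch only: exporting an EAC audit log as CSV and emailing it
# to an administrator. Event fields, SMTP host and addresses are assumptions.
import csv, io, smtplib
from email.message import EmailMessage

access_events = [  # example data; a real system would pull this from the lock network
    {"time": "2018-05-01T09:12:00", "cabinet": "A-03", "user": "jsmith", "method": "RFID+PIN", "granted": True},
    {"time": "2018-05-01T17:45:00", "cabinet": "A-07", "user": "unknown", "method": "key override", "granted": True},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["time", "cabinet", "user", "method", "granted"])
writer.writeheader()
writer.writerows(access_events)

msg = EmailMessage()
msg["Subject"] = "Weekly cabinet access audit log"
msg["From"] = "eac@example.com"            # placeholder addresses
msg["To"] = "dc-admin@example.com"
msg.set_content("Audit log attached for compliance records.")
msg.add_attachment(buf.getvalue().encode(), maintype="text", subtype="csv", filename="audit_log.csv")

with smtplib.SMTP("mail.example.com") as smtp:   # assumed internal mail relay
    smtp.send_message(msg)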
IP Consolidation
Data centres can realise dramatic savings in networking costs and deployment times through the ability to network several locks through IP consolidation. It is now perfectly feasible to choose a solution that will allow up to 32 EAC controllers (32 cabinets) to be networked under only one IP address.
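As a rough illustration of why that consolidation matters, the back-of-envelope sketch below compares the number of IP addresses (and switch ports) needed with and without 32-to-1 consolidation; the cabinet count and per-port cost are assumptions chosen purely for the example.

# Back-of-envelope sketch of IP consolidation savings: up to 32 cabinet
# controllers share one IP address versus one address (and switch port) per
# cabinet. Cabinet count and per-port cost are illustrative assumptions.
import math

cabinets = 256
controllers_per_ip = 32
cost_per_switch_port = 150.0   # assumed fully loaded cost per managed port, GBP

ports_without = cabinets
ports_with = math.ceil(cabinets / controllers_per_ip)

saving = (ports_without - ports_with) * cost_per_switch_port
print(f"Addresses/ports needed: {ports_with} instead of {ports_without}")
print(f"Indicative port-cost saving: £{saving:,.0f}")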
Combining EAC with Environmental Monitoring
Choosing an EAC solution which offers added benefits, such as environmental monitoring, can ensure a much faster return on any initial investment, especially when you consider the savings which can be made by utilising one IP port for an appliance that offers both EAC and environmental monitoring.
There are solutions available which can monitor and manage both temperature and humidity through the same web interface, issuing proactive notifications to help data centre managers ensure service reliability by taking action before issues turn into downtime.
The infrastructure can be badly affected by water, dust and other harmful particles so it is worth looking for a solution which also has the capability to monitor and detect smoke, water and even motion.
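A minimal sketch of that kind of threshold-based monitoring is shown below; the thresholds, sensor names and notification route are assumptions for illustration rather than any specific product’s behaviour.

# Minimal sketch of threshold-based environmental monitoring at the cabinet:
# temperature/humidity limits plus water, smoke and motion contacts, raising a
# proactive notification before an issue becomes downtime. Values are assumed.
THRESHOLDS = {"temp_c": (18.0, 27.0), "humidity_pct": (40.0, 60.0)}

def check_environment(reading: dict) -> list:
    """Return a list of alert messages for one cabinet reading."""
    alerts = []
    for key, (low, high) in THRESHOLDS.items():
        value = reading.get(key)
        if value is not None and not low <= value <= high:
            alerts.append(f"{reading['cabinet']}: {key}={value} outside {low}-{high}")
    for contact in ("water", "smoke", "motion"):
        if reading.get(contact):
            alerts.append(f"{reading['cabinet']}: {contact} sensor triggered")
    return alerts

reading = {"cabinet": "B-12", "temp_c": 29.5, "humidity_pct": 55, "water": False, "smoke": False, "motion": True}
for alert in check_environment(reading):
    print("NOTIFY:", alert)   # in practice this would email or SNMP-trap the DC manager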
The Future
As outlined in Regulation (EU) 2016/679, ‘rapid technological developments and globalisation have brought new challenges for the protection of personal data’ and ‘the scale of the collection and sharing of personal data has increased significantly’.
As a result, customers’ needs and expectations regarding privacy of data have increased, as has the sophistication of the threats now posed. Data centres must look for more powerful and effective methods of delivering peace of mind to the customer as well as compliance to new and emerging regulations and electronic access control (EAC) solutions are a key weapon in their arsenal. Fortunately, delivering intelligent security and dual-factor authentication to the cabinet is no longer out of reach for organisations needing to meet strict budgets.
Dr. Nigar Jebraeili, Research Assistant at the University of East London (UEL)
Today we see a small number of progressive (some would say maverick) fully implemented software-defined data centres (SDDC) in operation. While these front runners lead the way, right behind them is a huge number of enterprises caught in a vortex that will force them to adopt these new ways sooner than we had expected. This can be seen as a classic IT market development pattern.
Today, roughly 70% of IT organisations are involved with some form of server virtualisation. This automatically puts the users of such techniques in the wide top section of a funnel, and there is only one way for them to go: down the defined path!
As data centres evolve to embody the new generation of modern IT technologies, they are expected to offer the benefits of fully integrated solutions, provide pre-sales and post-sales services, allow for continuous modification of standard products and enable easy sourcing, all while offering packages of ultimate reliability and flexibility. However, security is a decisive factor that remains the centre of attention for both parties, the users as well as the providers.
New figures suggest that each year, businesses lose $400 billion to hackers! Hence, security remains amongst the main concerns of IT organisations. Choosing how to construct a private, public or indeed a hybrid cloud is probably amongst the most critical strategic decisions to make for IT leaders nowadays and to a great degree it determines the enterprises’ flexibility, reliability and competitiveness.
Today we hear about a relatively new trend in data centres called the SDDC (Software-Defined Data Centre). I say ‘relatively’ new, as the concept is based on containerisation techniques that have been around since the early 1980s. The SDDC’s agile platform not only enables IT organisations to keep pace with rapid business growth by virtualising workloads, networking and storage, but also offers a level of security that was hard to achieve previously.
The question might arise: what are the implications of the SDDC for overall security in data centres? Here I will lightly touch on a few benefits of the SDDC that can have a positive impact on overall security.
As the nature of SDDC implies, generally there are three main aspects of security to be identified:
1. Workload-specific security aspects;
2. Network-specific security aspects;
3. Storage-specific security aspects;
A single configuration mistake in a traditional data centre can lead to a totally dysfunctional facility, whereas a major advantage of SDDC network security is the unified controller that is in charge of the various aspects of network functionality, including the security functions. Hence, policies can be consolidated consistently across the SDDC infrastructure.
One of its main characteristics is that the SDDC allows for a high degree of automation, which minimises human intervention and error, as opposed to traditional data centres, which are inherently error-prone and dependent on brittle physical characteristics even when using centralised management applications. This is particularly the case where there are recurring configuration tasks and various rules distributed across the infrastructure that need manual configuration.
SDDCs, on the other hand, consistently enforce a high level of policy-based automation that facilitates not only configuration tasks but, more importantly, swift “reconfigurations” to meet enforced regulatory compliance. This results in a higher level of security, the ability to sustain the rapid changes demanded by today’s business environment and, moreover, a reduced risk of out-of-date security policies.
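The point about policy-based automation can be illustrated with a small, hypothetical policy-as-code sketch: one centrally held policy definition is applied uniformly to every workload, so a regulatory change becomes a single edit followed by an automated re-push rather than many manual device changes. The policy fields and workload names are invented for the example.

# Illustrative policy-as-code sketch: a controller-held policy set is pushed
# uniformly to every workload in a segment; a "reconfiguration" is one edit
# plus a re-run. Fields and names are assumptions, not a specific product.
SECURITY_POLICY = {
    "pci_segment": {"allow_ports": [443], "encryption": "TLS1.2+", "logging": "full"},
    "general":     {"allow_ports": [80, 443], "encryption": "TLS1.2+", "logging": "summary"},
}

workloads = [
    {"name": "payments-db", "segment": "pci_segment"},
    {"name": "web-frontend", "segment": "general"},
]

def apply_policies(workloads, policy_set):
    """Simulate the controller applying the same named policy to each workload."""
    return {w["name"]: policy_set[w["segment"]] for w in workloads}

print(apply_policies(workloads, SECURITY_POLICY))

# A compliance change is one edit to the central policy, then a consistent re-push:
SECURITY_POLICY["general"]["logging"] = "full"
print(apply_policies(workloads, SECURITY_POLICY))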
Another advantage of workload security in the SDDC is the switch from the traditional legacy security boundary to the SDDC’s functional boundary. It improves the visibility of security software such that it is possible to control the data and the workload’s behavioural patterns through consistent monitoring, blocking and immediate response to threats. As a result, it creates stronger and more reliable security than traditional legacy boundaries such as outdated designs of DLP (Data Loss Prevention) and/or IPS (Intrusion Prevention Systems), which mostly focus on protecting the borders rather than context and flow. Having said that, adding a robust internal layer of security requires consistent policy enforcement, which is once again an affirmation of the importance of automation in SDDCs.
Last but not least, in order to benefit from the SDDC’s security advantages, it is vitally important to ensure the security models applied to the virtualised environment use adequate SDDC-aware security tools, adapted and designed to meet the requirements of these virtualised, centralised and fast-paced environments.
The nature of the threats we face changes almost as fast as we develop systems, and so the agility with which we respond is the key to our protection. With all the advantages of the new SDDC paradigm, one is still left to find ways of assessing its vulnerabilities. The key becomes evaluating the quality of automation and the ease of manipulating central control to adapt to the new regulations we wish to impose on the system. Perhaps now is the time to embrace a new vocation that can elevate the industry, by nurturing and training people in a systematic way, if we are to make the transition with less risk.
Lexie Gower, Datum Datacentres
GDPR – burgeoning cloud storage – cyber-attacks – ransomware. We live in a digital world with ever increasing digital threats and stringent digital regulations because we are our data. Everything we do as individuals and as corporations creates data and leaves a data trail. This vital information is our strength and our Achilles heel. People responsible for managing, processing and storing data are the gatekeepers to more than just interesting facts, they have the keys to our lives.
A recent survey on behalf of the Information Commissioner’s Office found that only one fifth of the UK public (20%) have trust and confidence in companies and organisations storing their personal information – and in the meantime, the world continues to be a dangerous place, both on the ground and in the cybersphere.
Organisations of all kinds are highly exercised working out ways to ensure they do not fall foul of the DPA or the GDPR, and there is a strong focus on ensuring corporate reputations and customer confidence are not flattened by the negative consequences of a data leak. One drastic solution could be to wipe all the data… but as the organisation would no longer be a viable functioning entity, it is unlikely to be very popular!
As with all exercises, actual solutions are a mix of people, processes and tools, all supported by responsibility and due diligence. When everything is held in-house, all necessary precautions can be taken, but at a probably untenable cost. Opting for “everything in the cloud” offers the flexibility and power we may want at a more controlled cost, but on the flip side, the security risks are undoubtedly harder to guarantee which may be unacceptable for some data. Hybrid solutions combine the cost advantages and flexibility of the cloud with the ability to apply more rigid safeguards where the information is critical.
Cybersecurity is not just air
Cyber security is the body of technologies, processes and practices designed to protect networks, computers, programs and data from attack, damage or unauthorized access. In a computing context, the topic of security includes both cybersecurity and physical security as, regardless of the solution pursued, at some stage the data will be stored, processed and managed in a physical entity on the ground. Whilst many sparkling brains and enormous investments go into developing the tools and apps that help to shield networks and data from unwanted intruders, the solid physical base needs more than a padlock on the door. Too many data centres treat security as a box-ticking exercise whereas real confidence in physical security can only be justified where multiple layers of people, processes, tools and diligence are invested, implemented, and accredited.
Whilst all organisations are obligated to safeguard their data, those that hold particularly sensitive information are further compelled to ensure they do not slip up, perhaps because the threat to them is greater, or perhaps because of additional regulations or requirements. Whatever the driver, a data centre business that can attract and retain such organisations has to pay considerably more than lip service to the notion of security. Take Datum as a case in point. Built in a secure List-X park, with full perimeter security and CCTV, permanently guarded security gates and highly controlled access, the data centre itself provides further 24x7x365 manned security, CCTV inside and out, building entry controls and biometrically regulated data hall access. For those clients with even greater concerns, dedicated locked cages, and even cages within cages, are provided within the data hall. The overall business model and approach to security, and to service, has ensured that major national and international clients have audited and tested the security before, during and after moving their kit into the data centre.
Accreditations speak volumes
For other organisations who are seeking secure solutions, the tip is to select a data centre that is built and run in the way that you would run the data centre if it was yours. Taking a provider’s word for it is like entrusting your prized Bugatti to a teenager who says they know how to drive. Always ask someone who has let them borrow their prized possession first, and don’t let the keys out of your hand until you have seen both a driving licence and an Advanced Motoring certificate. Data centres can promise whatever you want to hear but client recommendations, third party accreditations and a tyre-kicking trip to site are vital.
By Andy Billingham, Managing Director – EMKA (UK) Ltd
In the area of industrial security, new demands drive a continuous development process in tandem with new materials and production technologies. These demands can perhaps most easily be categorised by their effect on the choice of materials and on the design concept. In this respect the trend is toward increasing sophistication – it’s no longer acceptable to be able to open a control or data cabinet with a screwdriver if you don’t have the key! So where once a wing knob latch was sufficient, it is important to consider the need for keylocks – perhaps to IP65 or even IP69 – and the option of vibration-resistant compression locks which prevent nuisance door opening, as well as more complete gasket pull-down and consistently higher levels of IP sealing.
Compression lock technology tended to be confined to a limited range of applications until, a few years ago, it came out of patent protection; now a much wider market is finding it beneficial. EMKA have our own version of this technology and are able to bring our special expertise in design and manufacture to the provision of many compression latch variants.
The question is simply: is a compression facility beneficial, and where? Certainly the anti-vibration role of these locks is excellent in preventing panels opening on trucks, railway rolling stock, gensets, and air conditioning, heating and ventilating systems. The answer is increasingly “yes”.
Compression locks/latches are valuable in environments where health and safety are critical, in that people must be protected from the equipment they are operating and sometimes even from themselves. This is often a high priority where parallel developments in hygiene regulations have led to more use of stainless steel and designs without cavities that would collect debris, and which are therefore more easily cleaned – a move which has also led to increased use of high degrees of IP sealing to resist frequent high-pressure washes. Here too, developments in manufacturing technologies have enabled stainless steel and engineering plastics to be produced more cheaply, more accurately and with smoother, more robust designs.
The ubiquitous ¼ turn latch lock changes incrementally as customers demand smooth, cavity-free designs suited to food processing plants and high sealing to withstand regular pressure washing.
Plastics are no longer in the dark ages and new generations of reinforced engineering grades enable tolerances to be reduced, leading to closer fitting, more robust assemblies which slide more easily with better operator feel and better sealing. These are now often the first port of call for corrosion resistant installations and so can frequently replace expensive metals.
Parallel developments continue elsewhere in enclosure hardware – it is amazing how usage has changed and how products have changed to meet those needs. For similar reasons – enhanced environmental requirements, cost and user friendliness – swinghandles are now produced with “O” rings and PUR seals giving excellent sealing for all applications. Glass reinforced polyamide was introduced as the industry developed slim, strong handle designs alongside stainless steel variants in AISI 304 or 316.
These reinforced machine-grade plastics are extremely capable – such that robust anti-vandal designs are possible in these and in zinc die-cast, sometimes in combination with other components – and are often complemented by low-profile escutcheons and inset handles for sealing and anti-tamper purposes.
We now have a variety of advanced mechanical solutions such as interchangeable lock cylinders which can be removed and replaced at any point in the installation process. Innovation with regard to mainstream control and equipment cabinets or enclosures is exemplified in the 1325 swinghandle design which takes modular flexibility to a new level in this market. Designed specifically for electrical/electronic cabinets the stylish 1325 enables lock selection even after installation with a complete range of inserts to match common industry requirements.
A significant feature here is this ability to swap the lock mechanism at any stage thus enabling flexibility at production and installation stages – even post installation. Along with a precision rod control system, the complete installation provides a quiet and robust operation resulting from optimal use of modern engineering plastics and manufacturing techniques.
On the question of sealing, we can now source pre-cut, pre-assembled and vulcanised gaskets that are installation-ready without messy cutting and gluing, which has significant positive implications for sealing levels. EMC gaskets are mainstream, while a major demand has been identified for fire protection and high-temperature gaskets in EPDM and silicone.
With the increasing use of technologically driven solutions in all fields of industry, the need for basic mechanical security is expanding rapidly, and the new biometric/digital/card-based systems are finding their way “down” to levels where once a cheaper ¼ turn lock “would do” but is no longer considered appropriate.
Simple electronically verified swinghandle-based protection has been developed, along with networked systems which can be remotely monitored and authorised. The Agent E stand-alone wireless system is one approach for single or small numbers of cabinets – ideal for industrial controls.
In most high security locations security problems begin with the fact that keys and key cards can become separated from their authorised users. Any key or key card that is forgotten, lost, stolen, or otherwise separated from an authorised user represents a potential, undetected security breach, while the greater the number of keys and key cards in a given environment, the greater the possibility of unauthorised access to physical systems or data assets.
At the high tech end data has become extremely valuable so we have seen the need for 100% bullet-proof systems of monitoring, alarm and control leading to biometric technologies being applied at the cabinet door linked often to the internet via encrypted channels.
In these applications there is a vital need for accurate and reliable access logs and for deterrence of unauthorised ingress, vandalism or theft of data - and this is where the BioLock, using state-of-the-art fingerprint recognition in conjunction with PIN codes and RFID access cards, provides extremely high three-level security protection. This may be applied on an individual cabinet or on a designated block of cabinets with, for example, a group controller supplemented with separate cabinet release protocols. Multiple releases of separate panels on individual cabinets are catered for by means of linked ELock slave units. It is ideally suited to the utilities environment, government or financial institutions.
Arguably one of the biggest challenges facing UK businesses in the coming year is the continued confusion regarding access to Low Power Wide Area Networks (LPWANs) that is essential to support the deployment of IoT at scale.
At a time when organisations are being actively encouraged by the government to invest in innovation to drive up productivity, the continued prioritisation of broadband as a digital economy enabler is short-sighted. IoT is a technology that is set to deliver far more value than many high-bandwidth applications can – and the lack of availability is a concern. All is not lost, however. LPWAN roll-out is a constantly changing situation, with both network standards and network deployments, licensed and unlicensed, still evolving. It is also a buyer’s market: there are a number of propositions ready that will enable organisations to leverage IoT and gain a competitive advantage. As such, Nick Sacke, Head of IoT and Products at Comms365, explains why it is the independent providers, able to provide access to a blended network model, who will enable not only IoT at scale today, but also provide a long-term solution that will drive new levels of efficiency and customer service.
Competitive Disadvantage
The state of IoT in the UK in 2018 is frustrating. While industry giants are making huge investments in hardware, software and database platforms, as well as unlicensed networks – both the LoRaWAN and Sigfox low power networks – the under-investment in cellular LPWAN in the UK is a concern.
While progress in mainland Europe is patchy, with national LPWANs already in place across many countries, including the Netherlands and France, and licensed cellular variants such as NB-IoT being rolled out across Eastern Europe, it is the unlicensed LPWANs - generally LoRaWAN – that are being rolled out fastest.
In contrast, the UK is largely lagging behind: there is no cellular LPWAN (or NB-IoT) technology being rolled out in any shape or form, and the unlicensed variant being rolled out by Sigfox will not deliver end-to-end coverage before the end of 2018. Where, you may ask, are the UK network operators? The answer, it seems, is not in the UK: Vodafone’s NB-IoT project, for example, is being piloted in Ireland and Spain, with no plans announced for any UK deployment as yet.
There is, therefore, a risk that companies will hang back on crucial IoT investment until this confusing situation is resolved.
Need to Blend
However, it is also fair to say that there is, as yet, no single global network that can support all IoT deployment requirements. From cost to scale and architecture, the level of market segmentation globally is creating huge challenges for organisations planning future developments – not least of IoT at scale across national borders.
The problem is that, with the roll-out of both licensed and unlicensed variants typically happening country by country, there is a clear need for cross-border roaming agreements, something that is only now beginning to be discussed. So what are the options for multi-national businesses that require a seamless, pan-country IoT deployment to achieve, for example, end-to-end cold chain tracking or seamless asset management across Europe? And how can UK businesses avoid lagging behind? To gain the benefits that IoT can deliver, a new model is required: one that can manage and blend a number of different networks, such as cellular and satellite, together with the agreements needed to achieve IoT at scale.
But this is a constantly changing situation, with the evolution of both network standards and network deployments, both licensed and unlicensed. Given the potential longevity of these IoT deployments, it is essential to future proof as far as possible. How, for example, can an organisation achieve coverage without the IoT roaming agreements that have been standard in the cellular world for many years? How will the cost vary for different devices when connecting to a cellular versus a satellite network, or an LPWAN? What are the sensor options?
Consultative Approach
Organisations need to embrace a consultative approach in order to understand the new complexity created by a blended network model. Right now, there is a patchwork quilt of: no connectivity, some connectivity and full connectivity. To achieve full coverage, companies must invest in multiple networks to achieve a seamless solution. Plus, with many projects set to last five to ten years, it is essential to avoid tie in to specific networks.
To address this issue, sensor manufacturers are now creating hybrid devices that support more than one network, for example LoRaWAN and cellular, giving companies the chance to move onto a new network as it is rolled out rather than face expensive retrofitting of devices. Hybrid software gateways are also being developed, offering organisations a chance to support multiple networks. There is a cost implication, but options are evolving to enable organisations to deploy IoT at scale across a blended network.
What has become very clear over the past few months is that successful IoT deployment now demands a robust ecosystem of expert companies, including sensor manufacturers and service providers, working together to drive both standards and best practice deployment methodology across a blended network model. This ecosystem needs to align, not around a network operator as in the past, but around a service provider able to integrate multiple different network types; phase networks out and in as they evolve; and with the commercial strength to support the customer over the longevity of the contract.
Conclusion
While there is a concern that a lack of direction and confusion is creating a delay in IoT activity, this has become a buyer’s market: there are a number of propositions ready that will enable organisations to leverage IoT and gain a competitive advantage. Blending together the network fabric in a way that ensures the IoT deployment can flex to the new platforms as they are rolled out can enable not only IoT at scale today, but also provide a long term solution that will drive new levels of efficiency and customer service.
With network carriers continuing to focus on single network architectures, IoT at scale will now be enabled by a growth of independent providers able to provide access to multiple network architectures at one point, as well as proactively manage the traffic flow to the right destination as required. Essentially, it is service providers providing a blended network, with a single cost model and a single cross border Service Level Agreement that are set to enable IoT deployment.
Today’s blistering pace of technological change provides small businesses with an unprecedented opportunity to drive efficiencies and reach their audiences in creative, compelling new ways. From apps, to machine learning, blockchain and virtual reality, there are countless innovations promising to transform employee and customer experiences, and boost business performance.
By Dominic Allon, Vice President and Managing Director, Intuit Europe.
There is so much innovation going on, in fact, that it’s difficult to know quite where to start. Small businesses simply don’t have the same amount of cash as larger companies to invest in new technologies without guaranteed returns. This means careful decisions must be made around which technologies to pay attention to: what’s sure to drive revenues, and what’s a gamble that might not pay off?
Here’s my take on some of the technologies small businesses should adopt now and explore next – and what they should avoid doing at all costs.
ADOPT: Apps
There’s a reason why nearly three quarters (71%) of UK small business owners rely on mobile or web-based applications to run their operations, eliminate administrative tasks and grow their firm. It’s because apps provide a simple and cost-effective solution for many of the challenges today’s SMEs face – from offering easy access to information to boosting productivity.
However, with so many apps out there, it’s important to find the right ones to meet a specific business need. There’s no point in using technology for technology’s sake. This can be more destructive than productive for a small business. Two in five (40%) UK small business owners using apps believe there are too many to choose from and are unsure of which are best suited to their business. The key to getting the balance right here is to start with the problem you are trying to solve, and then work backwards to identify which app will make your life easier.
EXPLORE: Machine learning
Machine learning is a type of artificial intelligence (AI). It focuses on the creation of computer programmes that can teach themselves to evolve and grow when exposed to new data. It might sound futuristic but many of the apps you already use will be backed by machine learning. For example, your accounting app should use data to forecast and help you streamline tasks such as expense management and invoicing.
Beyond this, machine learning could help to make your marketing more targeted, or power chatbots or messenger platforms. You wouldn’t be alone in wanting to explore this trend: Oracle recently reported that eighty percent of brands expect to serve customers through chatbots by 2020. Machine learning could accelerate your business in a multitude of ways; however, you must stay focused on the problem you are trying to solve. Access to useful and reliable data, and expertise to guide the application of machine learning, are required – so think carefully before you go all in.
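As a toy illustration of the forecasting idea mentioned above, the sketch below fits a simple trend line to a few months of expenses and projects the next month; real accounting apps use far richer models and data, and the figures here are invented.

# Toy forecast: fit a least-squares trend line to past monthly expenses and
# project the next month. Figures are invented for illustration only.
monthly_expenses = [1200, 1150, 1320, 1400, 1380, 1500]   # last six months, GBP

n = len(monthly_expenses)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(monthly_expenses) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_expenses))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

next_month = intercept + slope * n
print(f"Projected spend for next month: £{next_month:,.0f}")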
AVOID: Jumping on fads
Energy bills account for up to 60% of a data centre’s overall operating costs. With demand for capacity getting bigger and bigger, improving operational efficiency throughout server rooms doesn’t just make environmental sense, it’s quickly becoming an economic imperative to stop electricity bills spiralling out of control.
By Chris Cutler, Corporate Account Manager, Riello UPS.
According to research from the Global e-Sustainability Initiative (GeSI), datacentres already consume over 3% of the world’s total electricity and generate 2% of our planet’s CO2 emissions. For context, that’s the equivalent of the entire global aviation industry.
And those requirements are certain to come under increasing pressure by the relentless rise of the ‘Internet of Things’ and Industry 4.0. Interconnectivity is quickly becoming the rule, not the exception, with independent research body Software.org predicting there’ll be more than 50 billion connected devices by 2020.
All these extra smart machines and gadgets will place huge demands on data centres – those terabytes of additional information must be safely stored somewhere. But with a creaking National Grid struggling from decades-long lack of investment, it’s not simply a case of increasing electrical capacity to meet these growing needs. The data industry is tasked with doing more with less, which is why improving energy efficiency will have an increasingly important part to play.
Datacentres consume electricity in two principal ways. Firstly, there is the energy needed to power all the ICT equipment and servers. Then there is the vast amount of air conditioning necessary to keep those machines operating safely.
Significant progress has undoubtedly been made in cooling technologies. These advances are necessary too, what with air conditioning accounting for up to half a data centre’s total power use, depending on the size and climate. But those efficiency gains on their own won’t be enough.
The Move To Energy Efficient Modular UPS
Fortunately there are similar savings to be made through another indispensable element of a data centre’s infrastructure – its uninterruptible power supply (UPS) system.
Until recent years, data centre UPSs were typically large, static systems only capable of optimal efficiency when carrying heavy loads of 80-90%. There was a tendency to oversize capacity during initial installation to provide the necessary redundancy, meaning that many power protection systems were wasting masses of energy by continuously running at low, inefficient loads.
Just as cooling equipment has developed, so too has UPS technology. Modular systems – which replace sizable standalone units with compact individual rack-mount style power modules paralleled together to provide capacity and redundancy – deliver performance efficiency, scalability, and ‘smart’ interconnectivity far beyond the capabilities of their predecessors.
The modular approach ensures capacity corresponds closely to the data centre’s load requirements, removing the risk of oversizing and reducing day-to-day power consumption, cutting both energy bills and the site’s carbon footprint. It also gives facilities managers the flexibility to add extra power modules in whenever the need arises, minimising the initial investment while offering the in-built scalability to “pay as you grow”.
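A simplified sizing sketch makes the “pay as you grow” point concrete: modules are added to cover the present load plus an N+1 spare, rather than installing an oversized static unit on day one. The module rating and load figures below are illustrative assumptions only.

# Simplified modular UPS sizing sketch: carry the load plus N+1 redundancy.
# Module rating and loads are assumed figures for illustration.
import math

module_rating_kw = 42        # rating of one plug-in power module (assumed)
present_load_kw = 150
future_load_kw = 290

def modules_needed(load_kw, rating_kw, redundancy=1):
    """Modules required to carry the load, plus 'redundancy' spare modules (N+1 by default)."""
    return math.ceil(load_kw / rating_kw) + redundancy

print("Day one:", modules_needed(present_load_kw, module_rating_kw), "modules")      # 4 + 1 = 5
print("After growth:", modules_needed(future_load_kw, module_rating_kw), "modules")  # 7 + 1 = 8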
Transformerless modular UPS units generate far less heat than static, transformer-based versions and need significantly less air conditioning too. They are also smaller and lighter, so have a significantly reduced footprint, and are easier to maintain because each individual module is ‘hot swappable’ and can be replaced as and when required without the whole system having to go offline.
Another benefit of the move to modular is that the units easily integrate with Energy Management Systems (EMS) or Data Centre Infrastructure Management (DCIM) software, transforming them into networks of ‘smart’ UPSs that constantly collect, process, and exchange performance data including operating temperatures, UPS output, and mains power voltage.
This information is used in real-time to help constantly optimise the system’s performance, as well as highlighting other areas where additional efficiency savings can be made. In hyperscale datacentres where UPSs can be spread across several sites in different cities or even countries, some in unmanned facilities, this connectivity, combined with the ability to remotely monitor performance, enables loads to be optimised, minimising the amount of energy wasted.
Modular UPS In Action: £335,000 A Year Electricity Savings
A prime example of the savings achieved through upgrading to modular UPS units can be found in one of our most recent projects. We teamed up with electrical contractors The Rosebery Group to completely overhaul the power protection systems at two datacentres belonging to one of the UK’s biggest consumer goods suppliers.
The existing system was originally installed in 2007 and consisted of large, static 400 kVA and 800 kVA units that were operating incredibly inefficiently on low loads ranging from 12-25%. Overall UPS efficiency averaged just 92% and was as low as 89% in the main switchroom, meaning vast amounts of energy were being wasted.
And because the units were so large and generating such vast amounts of heat, air conditioning costs were considerable: 414 kW of energy a year was needed just for cooling, leading to annual bills of more than £315,000.
We replaced this dated and inefficient system with our transformerless modular Multi Power units. Configured to more closely match the power requirements of the data centres, UPS efficiency increased from 92% to 96% across all load levels.
The project’s overall cost and carbon savings have been substantial. The total outlay for running the UPS and cooling across both sites has been cut by a staggering £335,000 a year. Air conditioning requirements alone have been slashed by nearly 72%, resulting in annual energy savings of 297.3 kW.
The client benefits from overall annual energy savings totalling approximately 1.25 million kWh, enough to power 316 typical UK homes for a year. While carbon emissions across their two sites have decreased from 2,147kg to 603.5kg, a huge reduction of 71.89%.
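As a quick sanity check of the figures above (the per-household consumption is implied by the numbers themselves rather than asserted separately), the short calculation below reproduces the headline ratios.

# Back-of-the-envelope check of the case-study figures quoted above.
annual_saving_kwh = 1_250_000
homes_powered = 316
implied_kwh_per_home = annual_saving_kwh / homes_powered
print(f"Implied consumption per home: {implied_kwh_per_home:,.0f} kWh/year")   # roughly 3,956

co2_before, co2_after = 2_147, 603.5
reduction_pct = (co2_before - co2_after) / co2_before * 100
print(f"Carbon reduction: {reduction_pct:.2f}%")   # about 71.89%, matching the quoted figure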
All these environmental and economic improvements have been delivered in less than half the previously needed space, as the overhaul has resulted in a 59% reduction in footprint (m²). Perfect proof that more can indeed be done with less – greater power density and better efficiency, using less electricity and space.
While this particular project obviously focused on a mega-sized datacentre, the lessons are clear for sites of all configurations. The initial expense of upgrading a power protection system to a modular UPS will be paid back handsomely through markedly improved energy efficiency, leading to measurably positive impacts on both a facility’s corporate social responsibility obligations and, more importantly, on its bottom line and day-to-day running costs.
With more than a decade’s experience in the critical power protection industry and a proven track-record in the datacentre sector, Chris Cutler is Corporate Account Manager for Riello UPS. He has particular expertise regarding large-scale 3 phase UPS installations and the topic of UPS energy efficiency.
Riello UPS Ltd is a leader in the manufacture of uninterruptible power supplies (UPS) and standby power systems from 400VA to 6MVA. The company is part of the Riello Elettronica group which has support offices in 80 countries.
Riello UPS products combine engineering excellence with high-quality performance and energy efficiency, to enable reliable power for a sustainable world. The product range includes 22 solutions for powering the smallest desktop PCs to the latest supercomputers used in advanced data centre operations.
The UK branch of Riello UPS is located in North Wales, operating from large, purpose-built premises comprising office and training facilities as well as a fully-stocked warehouse. This enables an end-to-end service of comprehensive technical support and fast product dispatch. For further information visit www.riello-ups.co.uk
Choosing a new CIO is a major decision for all organisations that understand the transformative power of IT. Late last year one of our customers invited me to be a member of the panel interviewing short-listed candidates for their CIO, and this highlighted just how much the role has changed in recent years.
Richard Blanford, managing director, Fordway.
At one time you might have described a CIO as an organisation’s ‘top techie’. Their role was primarily to ensure that IT ran smoothly, understanding all the details of their corporate network and applications. However, today’s CIO requires much more than technical knowledge to fulfil their role. IT has tremendous potential to change business and provide competitive advantage, from introducing new applications and ways of working to using business data to provide transformational insights. This is where an effective CIO can really make a difference, ensuring that his or her organisation chooses the right technology at the right time and uses it to obtain competitive advantage.
Realistically I believe that the CIO now has three jobs, which in larger organisations are now being formally separated:
1) Keep the organisational IT service running efficiently and effectively (Head of IT)
2) Lead and assist business improvement and transformation, as most involve process or information digitisation (CDO/Head of Transformation)
3) Provide insight and value to make better business decisions and potentially provide new products and services (‘true’ CIO).
There is of course an ongoing debate about whether the CIO should have a place on the board. We finally seem to be making progress here; according to a recent article in the Harvard Business Review, CIOs are the fastest-growing addition to the boardroom. So what skills does the modern CIO need to equip them for a seat at the top table?
My customer had already recruited a head of operations to manage day-to-day delivery of their IT services, so hands-on technical expertise was not required. They wanted someone with the ability to take a leadership role, not just inspiring their team but engaging the rest of the organisation and helping it to move forward.
Their CIO needed the ability to drive change – particularly important in a sector which had not historically been an IT innovator – and to solve the inevitable problems that would arise along the way. This would not be done by rolling up their sleeves and understanding the technical details, but by convincing colleagues across the organisation to buy in to the benefits of new ways of working. This particular organisation had made a substantial investment in IT, moving services to the cloud and introducing mobile working, and it was vital to ensure that these investments delivered the promised benefits.
As a third party, with an in-depth knowledge of how the organisation operated after working closely with them over many months, I was able to assess both the candidates’ skills and how they would fit with the existing leadership team. Did they have the spark that was needed to have an impact and clearly explain business benefits to non-technical colleagues, to keep the organisation moving forward and continue its path of innovation?
We found ourselves debating whether having the required skillset and personal qualities was more important than specific sector experience. Would someone with the appropriate knowledge and visionary qualities be able to win people over and move the business forward, despite not at first understanding the nuances of the business? On balance we decided that the ability to lead and inspire, accompanied by the relevant knowledge, were the key factors.
An effective CIO does not necessarily need sector experience but must have the ability to ‘get under the hood’ of the business, understanding how the organisation functions and ensuring that its IT strategy is not only in line with existing business strategy but has the potential to drive future business transformation and innovation. This ability to pick up on the trends and issues within the organisation in order to harness IT to drive efficiency and productivity is why CIO is becoming a board level role.
I believe that we will increasingly see the in-house IT team focusing on strategy, policy and bespoke applications; buying in core services, technical support and administration from third parties on an as-needed basis. With everything from infrastructure to software now available as a service, the CIO will become much more of a business role, managing risk and looking for new ways to use technology to gain competitive advantage.
The internet of things – you’ll have heard of it. Along with the smart TVs, smart fridges and smart phones that make it up. But what about the boxes all these shiny gadgets arrive in? That’s just cardboard right, quite separate to the cool future-tech?
By Teemu Salmi, CIO at Stora Enso.
Wrong. Intelligent, connected packaging can transform businesses, protect consumers, reduce waste across society and even facilitate brand new types of business models. So, while connected appliances, cars and other electronic devices tend to be in the centre of IoT discussions, the thing that may have the biggest business impact is in fact the ‘Internet of Boxes’.
Better boxes, better business
Picture a warehouse: shelves piled high with pallets full of boxes. Workers with clipboards wander up and down the aisles counting them off. Later, they are picked, loaded onto trucks and distributed to stores and consumers. There, smaller boxes are unpacked from the large ones and stacked on the shelves by more workers with more clipboards. At the end of the day, inventories are taken, shelves refilled and the boxes checked again.
Even if the workers involved are working on tablets rather than clipboards, allowing data to be automatically fed back to central systems rather than manually inputted, there’s a lot going on in this picture. A lot of different steps, a lot of places where data can go missing, and crucially, a lot of dark periods in between check-ins where very little is known.
Intelligent packaging can change that. Linking up radio frequency ID (RFID) tags – which now cost cents rather than euros – with cloud-based services enables a digital and transformative packaging ecosystem. A warehouse or store employee can take inventory almost instantly, without any manual intervention. Through the simple press of a button, data stored in the cloud is generated, analysed and translated into actions. As a result, businesses can receive real-time information on how much stock they have and where it is at any time, drastically reducing the time spent on accounting and checking while increasing the quality of information.
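A highly simplified sketch of that flow is shown below: tag reads collected by a reader are rolled up into stock counts that would then be pushed to a cloud inventory service. The tag IDs, SKUs and locations are invented for illustration.

# Simplified sketch: aggregate RFID tag reads into stock counts per location.
# In a live deployment the resulting payload would be posted to a cloud
# inventory API; identifiers below are invented examples.
from collections import Counter

tag_reads = [   # (tag_id, sku, location) as reported by a handheld or portal reader
    ("E200-001", "SKU-TV-55", "warehouse-A/aisle-3"),
    ("E200-002", "SKU-TV-55", "warehouse-A/aisle-3"),
    ("E200-003", "SKU-FRIDGE-XL", "warehouse-A/aisle-7"),
]

stock = Counter((sku, location) for _, sku, location in tag_reads)
inventory_update = [
    {"sku": sku, "location": location, "count": count}
    for (sku, location), count in stock.items()
]
print(inventory_update)   # real-time stock picture with no manual counting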
Box forts – protecting brand owners and e-shoppers
But there are other benefits too. Intelligent packaging isn’t limited to digitalising warehouse inventory; it’s also becoming increasingly affordable to incorporate sensors into e-commerce packaging using NFC (Near Field Communication). NFC-enabled smartphones can open up a variety of intelligent applications pertinent to end consumers and brand owners alike. With NFC, smartphone users can simply tap an RFID-tagged product to automatically access a product-specific website or offering. This unwraps endless digital marketing opportunities for brand owners looking to push content to their online customer base.
Moreover, something as humble as the cardboard box can take on theft, damage and counterfeiting. Between them, these issues cost business and consumers millions per year. For example, research from Global Financial Integrity estimated in 2017 that the global trade in fake and pirated goods was worth approximately $923 billion to $1.13 trillion.
Boxes with NFC-enabled RFID tags can provide real-time location tracking, alerting a company when something goes somewhere it shouldn’t, helping prevent theft. The customer can then, through NFC technology in their smartphone, scan the box on receipt of goods, guaranteeing that they are authentic and not stolen, giving them peace of mind. This becomes especially applicable when ordering high-value items online, whether it’s electronics, jewellery or designer handbags.
By combining RFID tags with increasingly affordable sensors, you can also look at problems such as tampering and damage in transit. For example, a motion sensor could detect if a package had been dropped or tipped beyond a certain angle. It would also be possible to tell whether a package had been opened or a seal broken. There are even sensors capable of telling whether perishable items have spoiled, ensuring only fresh food is delivered and sold. These applications could help the company by providing an alert before the sale, while also allowing customers to ensure that the product they are about to buy is of the expected quality.
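As a simple illustration of how such condition monitoring might be evaluated on receipt, the sketch below checks a package’s sensor log against basic thresholds for tilt, seal integrity and temperature; the thresholds and log format are assumptions for the example.

# Illustrative transit-condition check: tilt, seal and temperature thresholds
# are assumed values, and the log format is invented for the example.
MAX_TILT_DEG = 45
MAX_TEMP_C = 8.0        # e.g. for chilled perishables

def transit_issues(sensor_log: dict) -> list:
    issues = []
    if sensor_log["max_tilt_deg"] > MAX_TILT_DEG:
        issues.append("package tipped beyond safe angle")
    if sensor_log["seal_opened"]:
        issues.append("seal broken before delivery")
    if sensor_log["max_temp_c"] > MAX_TEMP_C:
        issues.append("temperature excursion - contents may have spoiled")
    return issues

log = {"max_tilt_deg": 62, "seal_opened": False, "max_temp_c": 6.5}
print(transit_issues(log) or ["no issues recorded"])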
A smarter, better environment
It’s already widely established that wood-fibre-based packaging is a superior alternative from an environmental perspective compared to plastics and other non-renewable materials. Paperboard boxes originate from the most renewable of resources – trees that regrow – and are both recyclable and bio-degradable. By adding a layer of intelligence to the cardboard box, unnecessary waste can be reduced even further.
Better stock management can avoid over-ordering, avoiding wasted packaging and products – especially important when the produce is perishable. Add smart packaging that can detect spoilage and you have a powerful proposition for avoiding unnecessary refuse.
The effects of digitalized packaging solutions could be even more transformative when it comes to recycling. According to a WRAP 2016 survey, two thirds of UK households were unsure which bin to put at least one item in, nearly half threw away something that could have been recycled and just over two thirds put something in recycling that they shouldn’t. But what if, by just passing their smartphone over the package, the consumer could instantly tell which elements were recyclable in their area and where facilities might be? Sellers could even throw in some recipes while they’re at it (and that’s not to mention the myriad marketing opportunities).
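A toy version of that lookup might work along the lines below, matching a package’s declared materials against local collection rules; all of the data shown is invented for illustration.

# Toy recycling-guidance lookup: package materials matched to local rules.
# All package IDs, materials and rules are invented example data.
PACKAGE_MATERIALS = {"pkg-0417": ["corrugated board", "PE film window"]}

LOCAL_RULES = {          # keyed by (area, material) -> which bin or drop-off point
    ("guildford", "corrugated board"): "kerbside card & paper bin",
    ("guildford", "PE film window"): "supermarket soft-plastics drop-off",
}

def recycling_guidance(package_id: str, area: str) -> list:
    return [
        f"{material}: {LOCAL_RULES.get((area, material), 'check local facility')}"
        for material in PACKAGE_MATERIALS.get(package_id, [])
    ]

print(recycling_guidance("pkg-0417", "guildford"))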
Brave new world
Above and beyond improving existing businesses and processes – intelligent packaging could even be the facilitator of entirely new kinds of business.
For example, retailers are already dabbling with entirely digital stores, allowing consumers to simply pick up their items, pack them and walk out. No checkout required. The general idea here is that the store automatically registers the items a consumer has chosen, recognises the customer and directly charges a linked bank account. Amazon is already testing this concept in its native Seattle, and in Japan the ambition is to have a nationwide self-checkout system in place by 2025. Given the savings and convenience factors, it’s likely a concept that is here to stay, and RFID and NFC technologies could be instrumental in speeding up widespread deployment.
Packaging may appear a basic commodity, distant from a gleaming Silicon Valley techtopia. But it’s not always the flashiest gadget that has the most transformative effect when it comes to improving existing businesses and facilitating new ones. As our world becomes more connected, and businesses and consumers start to really explore what that means, the internet of boxes might just be one of the most important developments of all in the transformation towards a fully digital society.
Industry watchers predict 2018 will bring a wave of bigger, faster flash storage, hyperconverged infrastructure, and software-defined storage to help manage the influx of data headed their way. With all of that taking place, it’s no wonder many organisations are looking to the cloud for answers.
By Chris Adams, President and COO, Park Place Technologies.
Cloud computing is evolving and maturing, with more attributes of the technology becoming clear. One emerging point of view is that the cloud is ideal for highly variable workloads and volatile systems. For example, cloud computing is great when a business wants a new application for a specific project, but it can become too expensive if that application will be used for years on end. This attribute of the cloud creates an environment in which businesses particularly benefit from keeping some systems on internal servers to maximise the value of technology assets.
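A rough, hypothetical comparison shows why that break-even matters; none of these figures comes from Park Place, they simply illustrate how renting tends to win for short-lived projects while owning wins once an application runs for years.

# Hypothetical, round-number comparison of renting versus owning for one
# always-on application server (all prices are illustrative assumptions).
cloud_cost_per_month = 350      # on-demand instance, storage and egress
owned_capex = 6000              # server purchase
owned_opex_per_month = 120      # power, space, support contract

for months in (3, 12, 24, 36):
    cloud = cloud_cost_per_month * months
    owned = owned_capex + owned_opex_per_month * months
    cheaper = "cloud" if cloud < owned else "owned"
    print("%2d months: cloud $%6d  owned $%6d  -> %s" % (months, cloud, owned, cheaper))

With these made-up numbers the crossover lands somewhere between two and three years – exactly the ‘couple of years’ horizon described above.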
Hype and conjecture aside, cloud computing is prominent, but it isn’t replacing internal data centres entirely. As such, IT managers need to be prepared to adequately support internal hardware, but possibly with fewer resources because fiscal and strategic efforts are being placed on cloud plans. The end result is a different data centre environment from the one to which many IT teams are accustomed. Major operational and technical changes may be needed to support mixed cloud and traditional data centre environments, but IT leaders also need to think about secondary tactics, like hardware maintenance.
With that said, in the round-up of 2018 predictions earlier this year, Gartner went on record with “Top 10 Technology Trends Impacting Infrastructure & Operations.” Among them is the expectation that caution will remain dominant when it comes to cloud adoptions.
In the opinion of the research group “the journey to the cloud is a slow, controlled process” for many enterprises. They mention colocation and hosting providers offering private and shared clouds as safe spaces, of sorts, to supply basic cloud capabilities and woo reluctant organisations into the arena.
Other sources confirm that a measured approach is the modus operandi of most IT organisations. Even as cloud hype intensifies and successes become entrenched, industry watchers still see enterprises moving cautiously, one workload at a time.
Notwithstanding the “baby steps” approach of many tech leaders, the cloud is set for big growth. Forrester predicts over 50% of global enterprises will use at least one public cloud platform. And more and more IT organisations are getting into cloud-native capabilities, containers and microservices, and other technologies.
IT is in a position of trying to have it both ways, using the cloud but keeping on-premises technology as well. In fact, a recent survey found such a hybrid approach characterises 58% of organisations, with a remaining third of enterprises relying on 100% in-house technology.
Although few businesses expect to ever move cloud apps back to the data centre, reservations about the cloud mean they aren’t rushing everything off-premises, either.
In the “long live the data centre” camp, some industry insiders point to declining costs of IT hardware as a key factor in increasing comfort with running strategic software assets on fully owned equipment. The trickle-down of commodity servers, they say, is making data centres cheaper. We’d mention third party maintenance, which can save customers money on hardware support, as another factor making on-premises technology more affordable.
On the whole, these savings are frequently cloud-positive, freeing resources for innovation, much of which will happen in the cloud. Thus, contrary to claims by some “cloud only” advocates, IT executives who see the value in on-premises are not all “server huggers” who can’t let go. The vast majority are realists who compare technologies on their merits and costs and are perfectly happy to select the right one for the job.
Cloud strategies and traditional data centre environments are not mutually exclusive. In fact, many organisations find that they need to continue maintaining an internal data centre while moving some applications, particularly transient ones, into cloud environments.
As it stands, beyond the hype cycle, it seems IT teams are, in fact, ahead of the game in finding ways to support robust internal data centre assets while also enabling innovation through the cloud. Now just to maintain that environment…
Why local knowledge could be make or break for your global data centre plans
It’s a well worn argument that the data centre now sits firmly at the heart of any organisation. Getting the data centre strategy right means that a company has an intelligent and scalable asset that enables choice and growth. However, get it wrong and it becomes a fundamental constraint for innovation.
By Darren Watkins, managing director of VIRTUS Data Centres.
Many organisations must now ensure their data centre strategy is ready and able to deal with global expansion – remaining competitive and cost-efficient, while being primed for growth. In this climate, being empowered to choose who runs your data centre, and how it’s run, is absolutely crucial.
For some time, it looked as though the choice of global partners was limited. While more than 1,200 companies currently provide global data centre services, just three organisations have over $1bn revenue and only 19 have facilities across three continents or more. But a new breed of global players, which aren’t the typical big corporates, is now promising to provide much deeper local expertise while maintaining the ability to operate on a global scale.
For us, this partnership model, made up of an alliance of local organisations, each leading the field in its region, is the answer for many. Local roots mean the data centre provider knows the customer, business culture and regulations in every region, enabling it to provide a much more nuanced approach to service delivery. These providers will also have strong existing relationships with a variety of local carriers and will have developed deep ecosystems. It’s this local knowledge that enables them to work closely with their customers at a local level, whilst still benefiting from a global network.
Good for the customer, good for the industry
The best alliances are two-way relationships based on a genuine understanding of each other's capability, strengths and connectivity – which ensures the most appropriate data centres are secured, and business capability and resilience is improved.
This offers clear benefits to a customer, but perhaps even more importantly, networks are also beneficial to the industry as a whole. They can raise professional standards all over the world, facilitate knowledge sharing for the benefit of alliance members (and beyond) and become the global voice for best practice in an increasingly complex market.
Collaboration is key to moving the industry forward and powering continued innovation, and it’s data centre alliances that are best placed to facilitate this. The ability to share best practice in design, operation and management, to research and evaluate new technologies and initiatives together, and to share insight and experience to improve programmes of education, benefits not only a single customer but every customer.
This is at odds with the model of a large global organisation where a walled garden approach to innovation and development is required to maintain competitive advantage.
How to pick a partner
So how do organisations make informed decisions about their data centre partner when doing business on a global scale? From our experience, there are several key factors which companies must prioritise - whether they pick a global provider or a smaller, bespoke, alliance.
Cost is clearly a priority for any organisation. Bespoke alliances can reduce costs across many elements by negotiating discounts with their suppliers. In-country, the local knowledge and long-standing supplier relationships of the alliance’s data centres can achieve further cost benefits, which are passed on to customers.
Companies must also ask how easy it is to do business. We know that providers promote the simplicity of the homogenised global single contract or Managed Service Agreement (MSA). But we’d urge organisations to look closely at the small print. A business could be contracted with a data centre in London, but that contract may also require sign off at an EMEA level and, if it’s part of a global account, then it will more than likely be reviewed at a global level too. This can add a lot of time, effort, resource and pain for the customer because they can’t move as quickly as they need to, diminishing the value of working with a single, global organisation.
There are plenty of benefits to employing this model for global expansion – from a better understanding of local regulation to going the extra mile to ensure connectivity routes. For example, as part of the ST Telemedia Global Alliance of data centres, VIRTUS is able to offer local expertise and all the benefits detailed here. The alliance operates in China, India and Singapore from a portfolio of over 250MW across 50 data centres.
But although our money is on the partnership model over the large corporate giant, what matters even more is that organisations now have choice. And with the data centre now strategically important, being empowered to choose who runs yours is absolutely crucial.
The shift towards hybrid data center environments, consisting of a mix between off-premises services, public cloud and colocation, and privately owned, distributed IT facilities, is challenging traditional approaches to physical infrastructure management. A recent study by 451 Research brought some interesting insights to light.
The study was targeted at hybrid IT environments within large enterprises from across the globe, and conducted through intensive interviews with C-suite, data center and IT executives. Sponsored by Schneider Electric, the study drew its participants from companies generating over $500 million in revenue across the US, UK and Asia Pacific. The complete report provides an in-depth analysis of the topic, together with additional observations about trends emerging across multiple verticals and industries.
The interviews highlight how the widespread adoption of cloud services has significantly impacted the way companies are meeting their data center infrastructure requirements. These complexities will be compounded by an anticipated groundswell of new distributed IT driven by the Internet of Things (IoT) and emerging edge computing workloads.
Edge computing
Edge computing deployments present unique challenges that differ from those of traditional data centers. Edge sites are often remote and lack local IT staff support. They require a different strategy: their lifecycle is longer, and they must be easy to deploy, manage and secure while also being resilient.
To realize the full value of a hybrid approach, the management of a combination of data center environments has become one of the most complex issues for modern enterprise leaders. The study also revealed a number of common themes around this challenge.
According to the new 451 Research study, operators of enterprise data centers face a rapidly evolving technology landscape and a cloud-powered wave of disruption that is changing business models, connectivity and workload management. Driving this change is the growing availability and adoption of off-premises services – public cloud, colocation data center offerings and Datacenter-Management-as-a-Service (DMaaS), exemplified by Schneider Electric’s EcoStruxure IT architecture.
DMaaS enables optimization of the IT layer by simplifying the monitoring and servicing of data center physical infrastructure from the edge to the enterprise. Utilizing cloud-based software, it promises real-time operational visibility, alarming and shortened resolution times without all of the costs associated with deploying an on-premises DCIM system.
DMaaS is positioned in the broader context of IoT technologies and platforms, and its message is aligned to resonate with a broader audience beyond the traditional data center manager. The ability to benchmark performance and set key performance indicators (KPIs) based on data center metrics will interest organizations in the midst of making decisions regarding data center and workload placement in an increasingly hybrid and distributed IT landscape.
Piloted in the U.S., DMaaS has already been used for benchmarking IT environments with more than 500 customers, 1,000 data centers, 60,000 devices and 2 million sensors. Customer feedback on the results of their implementations affirms the growing need for a cloud-based data center management solution.
451 Research concludes: “By 2019, organizations anticipate that just under half (46%) of enterprise workloads will run in on-premises IT environments, with the remainder off-premises, according to 450 enterprise respondents in our 2017 global study. Clearly, hybrid IT environments have become the norm.”
Vendor agnostic platform
Early adopters of DMaaS are already seeing results in their businesses and with their customers. Daniel Harman, Building Automation Systems Engineer at Peak10 + ViaWest, Inc explained why the cloud-based approach was the best data center management solution for his business. “ViaWest is trusted to deliver hybrid IT infrastructure solutions spanning colocation, interconnection, cloud, managed solutions and professional services to more than 4,200 customers. We chose a vendor agnostic DMaaS solution to provide one platform to monitor all the different devices in our data centers...”
Case Study – Bainbridge Island School District
Bainbridge Island School District chose DMaaS to help ensure continued availability of its innovative digital learning environment. With limited resources to manage its distributed IT and data center, the district relies on the service for one-tap visibility into all device data, smart alarms and data-driven insights, plus 24/7 digital monitoring and troubleshooting.
Network supervisor Alan Silcott says that DMaaS solutions “give me that peace of mind to know that if there is an incident, kids can continue to learn and the classrooms can continue to operate until the school day is through.”
He goes on to say that DMaaS allows him “to check the status of all my data closets from my phone, at any time, in any location. It helps to know exactly where the problem is as opposed to trying to decipher it from a flood of emails.”
“We have 11 buildings, 9 different schools with a data center of 35 different data closets. Technology is in every aspect of the schools. If our network were to go down for a day, it would cause serious disruption to our learning.
“If we get even just a power flicker, all of our UPSs send notification emails. It’s hard to sort through all those messages and make sure that everything comes back online safely.” Alan Silcott says that now when there’s an issue, the solution provider “has our backs and contacts us immediately.”
Six Real-World Approaches to Managing Hybrid IT Environments – a report by 451 Research – can be downloaded from:
https://www.schneider-electric.com/promo/us/en/getPromo/75574P
I was shocked to learn how far the UK is behind its European counterparts when it comes to productivity, with productivity growth over the past decade the weakest since the 1820s.
By Michael Haddon - CEO - Kradle Software.
These findings, released recently by the Office for National Statistics, got me thinking about the potential causes of this slowdown, which are still not entirely understood and could stem from any number of complex issues – the working environment, culture, training and the technology tools businesses have available, for example.
Naturally, for any SME business owner, stories around lagging productivity and staff performance are a cause for concern. But how then, can employers really ensure they are getting the most from their team? Perhaps this is my inner sportsman influencing me here, but what if, just like with the British Cycling team, it requires some science and rigour?
Prior to the current controversy surrounding the sport, I had long been considering Sir David Brailsford’s approach to marginal gains and the philosophy applied by the team. This is perhaps more topical now than ever before, yet irrespective of personal views on how this story is unfolding, I think there are some interesting strategic lessons to take and apply in a business context.
The Theory of Marginal Gains
When Sir David Brailsford took over British Cycling in 2002, he believed that it was possible to increase competitive advantage if he and his team were able to break down every element that went into achieving gold on a bike, and then improve each by 1%.
He is a man who believed in, and rigorously followed, a process for success that worked. His team scrutinised the foods that should be included in an ideal diet for a cyclist, made tiny adjustments in aerodynamics, worked on increasing the power needed at the start of a race and on how to build endurance for all riders. When Sir David first took the role, he certainly didn’t have all the answers, but by following his process, he and his team effectively created a blueprint for winning a cycling race – no matter how competitive the world stage may have been. It was a strategy that proved successful, with the British Cycling team now widely recognised as the best in the world.
Breaking Down Success – Step by Step
There are certainly a number of interesting strategic lessons that can be applied in a business context here. With some lateral thinking, after all, it is possible to break down the critical components needed for success in just about any business.
Yet my conversations with small and mid-sized business leaders tell me that this part isn’t necessarily the issue. Most business owners DO understand the steps required to succeed; they just don’t know how to measure the potential increase in marginal gain (the equivalent of unlocking Sir David’s 1%).
Most organisations have processes and workflows in place to ensure activities are carried out accurately and consistently, but more often than not they are muddled through on paper or in spreadsheets. Few businesses facing the challenge of growth and scaling up will take the time to codify their workflows in a way that really allows for measurement and the kind of marginal improvement needed to tackle the productivity gap. And unless the business is a large enterprise, finding custom-coded software that supports this has been an expensive and complicated task.
So the key to ensuring success when applying the theory of marginal gains in the corporate environment is threefold:
1. Define and map processes. Try to encapsulate them in a set of simple, flexible workflows that will allow for comprehensive reporting.
2. Use the data from the team working on each workflow to assess and gauge performance; improve the workflow where problems are identified and spot any bottlenecks that exist.
3. Repeat and improve continuously.
Define, Measure, Improve
This process of measurement and continuous improvement will deliver those marginal gains for business owners if they can find and adopt a suitable platform to manage their workflows without incurring the multi-million pound spend required for enterprise business management tools.
Sir David’s words then, that we should “forget about perfection; focus on progression, and compound the improvement” are just as relevant in the business context as they are inside the velodrome. A 1% improvement might not be instantly noticeable, but it could make a big difference in the long run, especially if the results are cumulative and progressive.
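A quick worked example shows how fast that compounding adds up; the weekly cadence here is simply an assumption for illustration.

# Compounding a 1% improvement: if a workflow gets 1% better each week,
# the cumulative effect after a year is far more than 52%.
rate = 0.01
weeks = 52
improvement = (1 + rate) ** weeks - 1
print("After %d weekly gains of 1%%: %.0f%% better overall" % (weeks, improvement * 100))
# Prints: After 52 weekly gains of 1%: 68% better overall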
And it is at this point that I wonder whether this very mentality could help improve productivity for UK businesses. I won’t claim to know the answer for certain, but it’s certainly thought-provoking and worth exploring.
Nothing in life is ever guaranteed, be it a sports team lifting a trophy or a bus running on time. The same inevitably extends to the world of IT hardware. For all the ambiguity and uncertainty surrounding the future impact of technology, unwavering confidence in the growth of big-brand enterprise IT hardware in 2018 may be justified. To a certain extent at least.
By John Jainschigg, content strategy lead at Opsview.
This was brought to my attention recently after Morgan Stanley published a research note highlighting the details of a ‘perfect storm of conditions’. The note naturally attracted great interest and debate.
So, what exactly are these supposed factors that have combined to produce Morgan Stanley’s “perfect storm for big-brand enterprise IT hardware”?
Plenty of Cash
Morgan Stanley makes a convincing case that more cash is now available for private cloud investment. The largest US enterprises are being nudged by a range of financial dynamics that are influencing cash availability and promoting aggressive spending.
This positive outlook is not restricted to the US. In China, for example, Forrester already predicts IT hardware spending will grow at a rate of 6%. Rapid 2018 growth in enterprise IT spending is also expected in Japan and India.
Hybrid, not Private
Morgan Stanley stresses that reduced enterprise spending on on-premises hardware reflects not so much a desire to abandon private datacentres as a pause in further capital investment while enterprises figure out where public cloud fits into their IT strategy.
Is this an out-take from the Private Cloud and Datacenter Hardware Wish Fulfillment Playbook, or does it (also) make sense? Yes, enterprise use-cases are diverse: from delivering predictable, commodity IT services with long lifecycles, to responding with agility to fast-changing market and seasonal demands, competitive pressures, and the requirements of hair-on-fire skunkworks development projects. Public clouds can answer well (so goes the conventional argument) where demands are variable, intermittent, unpredictable, and/or novel. But where things are more predictable, private clouds are often cheaper; especially at the large scales at which global enterprises work.
Is the marginal cost advantage of hosting predictable apps on private clouds sufficient to compel increased enterprise hardware spends? We’re not convinced that this outweighs the notion – long used to promote IT outsourcing, public cloud growth and related phenomena – that enterprise IT needs to justify its existence by producing differentiating business value. In a truly biz-value-forward IT cosmology, you don’t put boring, predictable, commodified (but also mission-critical) applications on private clouds. You rent them from SaaS providers.
Helping Morgan Stanley Make a Better Case
Morgan Stanley ends by suggesting that containers, automation and the Internet of Things (IoT) may help drive enterprise demand for on-premises hardware. Had the bank built further on this idea, its case would have been much more compelling.
Public cloud pricing does not always suit a company of a decent size and can prove very costly. But for private clouds to grow – as Morgan Stanley says they will – they need to find a greater purpose than just hosting predictable, commodified applications more cheaply than Amazon. To be fully viable in enterprise hybrid clouds, private clouds also need to offer a public cloud experience in the on-premises datacentre.
As Morgan Stanley subtly suggests, containers, automation and other innovations are now making this happen, and IoT and other novel technologies are accelerating and enabling the change.
IaaS Solutions for Hybrid Cloud
Large enterprises with strong commitments to Microsoft and/or VMware may look no further than products like the newly-delivered Azure Stack or the VMware/AWS partnership to turn on-premises and public clouds into a continuous Infrastructure-as-a-Service substrate, managed through single, familiar panes of glass. Though costly, these solutions provide full lifecycle management of the cloud framework and can deliver the full spectrum of hybrid cloud benefits, including seamless workload portability and bursting.
Similarly conceived but more open solutions host highly available control planes for OpenStack and Kubernetes and handle lifecycle management, while letting you deploy compute/worker nodes in private datacentres and on popular public cloud platforms. These, too, enable single-pane-of-glass management and portability of workloads between private and public-cloud regions.
Hybrid Container Cloud
More generally, Kubernetes is emerging as a leading tool for standardizing container workload hosting across private and public datacenters. Every public cloud provider now offers Kubernetes-based Containers-as-a-Service, and numerous third parties (such as StackPoint.io) offer ways of automating and lifecycle-managing Kubernetes clusters on public clouds. While it’s still somewhat challenging to deploy production Kubernetes in-house, projects like Kubespray are making things easier, providing a generalized way of getting Kubernetes onto on-premises or public-cloud-hosted IaaS virtual machines, or onto bare-metal nodes.
With Kubernetes running on premises and on one or more public cloud providers, problems with workload portability mostly vanish. You can deploy the same app to any compatible Kubernetes cluster using the same automation and manage it via identical lifecycle management code, CLI commands and WebUI operations. Using Kubernetes federation, you can sync resources across public and private clusters, enable cross-cluster discovery, and thus easily automate key hybrid cloud use-cases, like bursting.
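As a minimal sketch of that portability (using the official Kubernetes Python client; the context names and manifest file are assumptions for illustration, not anything from Opsview), the same manifest can be pushed to an on-premises cluster and a public-cloud cluster in a few lines:

from kubernetes import client, config, utils

CONTEXTS = ["onprem-datacentre", "public-cloud-eu-west"]  # hypothetical kubeconfig contexts
MANIFEST = "app-deployment.yaml"                          # the same app everywhere

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)   # switch cluster credentials
    api = client.ApiClient()
    utils.create_from_yaml(api, MANIFEST, namespace="demo")
    print("deployed %s to %s" % (MANIFEST, ctx))

The point is not the specific client library but that the deployment artefact, and the automation around it, stay identical whichever cluster is on the receiving end.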
Climbing the Value Stack
If you prefer, you can climb higher up the value stack and deploy serverless compute to create and manage even lighter and more-portable functional solutions for high-throughput real-time processing of time-series or other metrics from sensors and other IoT devices.
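As a small, hedged illustration of the idea (the event shape here is a made-up example, not any particular provider’s schema), a serverless function for sensor metrics can be little more than a handler that aggregates a batch of readings:

def handler(event, context=None):
    """Summarise one batch of sensor readings, e.g. temperatures in degrees C."""
    values = [r["value"] for r in event.get("readings", [])]
    if not values:
        return {"count": 0}
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": round(sum(values) / len(values), 2),
        # A real deployment would forward this summary to a time-series store
        # or monitoring pipeline rather than just returning it.
    }

print(handler({"readings": [{"value": 21.0}, {"value": 23.5}, {"value": 22.1}]}))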
At Opsview, we’re investing in automatic monitoring and predictive analytics, making it easier to implement monitoring in Kubernetes, serverless compute and other complex frameworks, keep track of their health, and gain rapid, useful insight into the often-transitory workloads they execute. These capabilities can significantly reduce the effort, skill level and headcount required to manage such platforms proactively, along with the critical business services they support.
With all these opportunities and trends in play, Morgan Stanley’s prediction that a bounce is coming for enterprise datacenter hardware seems more credible. Unfortunately, whether it will happen in 2018 or somewhat later is still an open question.
Few challenges that the CIO has faced in recent years are as big as taking control of the cloud. In the past, it was easier to manage on-premise applications, but the cloud quickly did away with this kind of control.
By Mark Flexman, General Manager UK & I, Fruition Partners.
To highlight this, research commissioned by Fruition Partners has consistently shown that CIOs have a range of serious concerns over cloud control. The problem is, many CIOs don’t appear to apply the same comprehensive IT service management (ITSM) processes to cloud as they do for in-house IT services. Comparing the results of the last two pieces of research worryingly shows that while there has been a slight improvement in overall cloud maturity, there’s still a long way to go.
One of the key issues is that the adoption of public cloud is leading to reduced control over the IT services organisations are using. There are some concerning negative impacts of this trend: for one, it leads to financial waste, but it also increases the business and security risks that organisations face. In an age where businesses already face cybersecurity threats from every direction, this can exacerbate the challenges IT already faces.
It’s all in the shadows
CIOs cannot afford to ignore the fact that the need for rigorous management is greater, not less, in the cloud. Part of the solution to this is making better use of the IT Service Management (ITSM) tools they already employ to manage in-house IT. While in-house IT services are, on average, managed by a combination of six established ITSM processes, cloud-based services are only subject to four. Further, only a fifth of CIOs report that ITSM processes have been applied to all cloud services.
Another key concern is that Shadow IT is continuing to affect CIOs’ control over enterprise IT; employees are still signing up to cloud storage services, CRM applications or collaboration applications. For IT, it can be hard to know how many cloud applications are being used in the business without their knowledge. Businesses need to take back control of the cloud, but this can be a challenge for many.
Taking back control
Taking back control is difficult, with CIOs increasingly handing the reins to third-party cloud providers, leaving themselves more open to the ‘blame game’ if one of those services fails. If a user experiences problems with cloud applications, such as Salesforce or Dropbox, often the user goes straight to the cloud provider’s support team. While this may take the support burden off corporate IT, it’s not clear who is responsible for a fault, particularly if SLAs are not correctly monitored.
There is one encouraging sign in terms of improving control over the cloud: when it comes to using ITSM tools to orchestrate cloud platforms we have seen a positive increase – however, at best this figure still represents less than 50% of businesses. CIOs need to ensure that IT is using the tools it has for orchestration – and those that don’t have any such systems should be looking to put them in place. Orchestration is an essential part of the cloud management process, and CIOs should focus on putting this technology to work.
Another benefit would be the wider use of service catalogue functionality contained within ITSM solutions, which will help to control cloud sprawl. By using a service catalogue, CIOs can help minimise shadow IT; if it’s easy and hassle-free for users to choose and use officially-sanctioned cloud services, then they are less likely to follow a DIY route.
Effective management
Cloud can reduce costs, improve productivity and introduce a ‘consumerised’ service experience which businesses cannot afford to ignore. However, this does not mean that CIOs can blindly trust that public cloud services will work flawlessly and be delivered perfectly, all the time. CIOs need to explore how they can make better use of existing ITSM systems as a key part of the solution. ITSM technology should be used as effectively for managing public cloud technology as it is for in-house systems and private cloud.
By offering employees functionality such as self-service access to approved cloud services, or by using ITSM to manage assets and service providers, CIOs can start to bring the cloud under control and reduce the risks of shadow IT whilst keeping employees happy and enjoying cost benefits.
Mike Puglia, chief product officer, Kaseya discusses why differentiation is critically important.
As IT becomes more fast-moving, complex and mission-critical, outsourcing to a managed service provider (MSP) begins to look an increasingly sensible move. Individual IT departments just don’t have the skills or the capacity to cope with the pace of change, or to do the planning needed to ensure scalability and handle peaks in IT traffic. Because they are only involved in creating solutions for their own individual company, it’s difficult for them to develop any idea of best practice.
Consequently, managed services is already a huge and competitive market – and it is expected to grow further over the next five years. Research suggests the global market will expand from $152 billion in 2017 to $257.84 billion by 2022, at a compound annual growth rate (CAGR) of 11.1%.
Fuelled by the huge advances in technology of recent years, including the cloud, big data, mobile and the subsequent need for increased security and data protection, MSPs have been described as “the driving force for IT industry growth.”
The expansion of the cloud alone has been enough to propel demand. IDC predicts that when the 2017 figures are known, public cloud services spending will reach $128 billion, an increase of around 25.4% over the previous year. Migration from private to public cloud and the rising popularity of the hybrid cloud have also helped boost MSPs’ revenue from companies wanting to invest in their services but ultimately save money.
The list of high-profile global companies hit by malware attacks has also brought renewed interest from companies anxious about security. Of course, reviewing security and installing protective software also involves the entire IT infrastructure – and it’s therefore more logical to employ an MSP with a broad remit than just a security specialist.
The changing role of IT within business – from being merely a business facilitator to becoming central to business strategy – has brought a change of focus for MSPs too. The need for digital transformation, a mobile workforce and the rise of data volumes have all helped beat the path to the MSP’s door. No longer are MSPs mainly product-centric, but now act as trusted advisors and consultants that focus on providing a total solution rather than a standalone product.
However, a market with such huge potential is something of a double-edged sword. More and more companies swarm around the promise of sustainable profits like bees around a honeypot. Systems integrators and other IT professionals are now being joined by independent software providers, keen to expand their portfolios to include services. Even telcos, keen to translate their scale and knowledge beyond voice and data services, are entering the market.
At the same time, the growing maturity of software solutions helped by increasing acquisitions and consolidation of brands has narrowed the product choice. This has made it more difficult for MSPs to differentiate themselves by the products they offer or their related expertise.
As a result, every business is looking to develop a compelling unique selling proposition and wants to stand out in a crowded sector. Also, research by Kaseya has shown that within the MSP market, growth is not uniform, with some businesses raising their revenues by considerably greater percentages than their rivals.
So what are the successful MSPs doing that the others are not? The same research shows that the high flyers have often moved away from offering just the bread and butter, such as back-up and disaster recovery. High-growth MSPs frequently offer a choice of different backups, such as cloud-to-cloud, onsite-to-onsite, cloud-to-onsite and onsite-to-cloud.
However, it’s not just in this area that they have diversified. They offer more options around security too. When MSPs were asked about the most important challenge for their clients last year (2017), the majority said ‘meeting security risks.’
The same is true when it comes to network and infrastructure monitoring. High-growth MSPs are scoring plenty of hits with network operations centres (NOCs), another emerging service category. It’s interesting that 47% of high-growth MSPs report that they offer NOC services around the clock, compared with just 27% of those with lower growth.
In a nutshell, it seems that higher growth MSPs get it right because they give the market what it wants, including 24/7 service. However, devising a strategy around this can mean spending an immense amount of time gathering business intelligence to provide direction.
Staying ahead
There’s little doubt the MSP sector is active, thriving and in some cases, highly profitable. This in turn is attracting innovation around the MSP role, with the emergence of tools to help these businesses become even more supportive to their customers.
The latest of these, a true industry first, is Kaseya’s MSP Insights, a business intelligence and benchmarking tool that can cut through all the manual information gathering. Built on proprietary technical and business data, the online portal allows MSPs to quickly and easily benchmark their business metrics against other MSPs in their region, analyse new and emerging MSP service offerings and evaluate bundling and pricing strategies.
The portal enables users to keep an eye on emerging trends. When a business is successful and busy looking after clients, it is challenging to dedicate time to looking at the future and planning. It then runs the risk of getting left behind in its thinking and being unable to recommend new ideas to clients.
Competitive analysis, though extremely important, can be costly, time intensive and often involves taking a senior employee ‘off charge.’ Using this portal, MSPs can now quickly benchmark their business services and strategies against the wider MSP community. Users can easily access the critical information they need and focus on what is most important – running and expanding their own business.
So far the tool has been well-received. For example, Mick Shah, senior vice-president of MSP Dataprise told us: “The ability to look under the covers at what our competitors are offering customers speaks volumes to our sales team. It allows us to stay in front of market trends and see what new technologies our customers require. This empowers us to continually update the types of technology and services we offer, which directly leads to new business and overall company growth.”
It’s clear that the difference between a fast-growing MSP business and the not so successful is that the former continually has its ear to the ground, looking for ways to hone, improve and expand its services. The challenge here is that it can take considerable time and effort to gather this information, especially when it comes to a competitor’s data.
Using MSP Insights, this data can be used, discussed and analysed in an instant.
A recent survey shows 64 percent of organisations have deployed some level of IoT technology, and another 20 percent plan to do so within the next 12 months. This means that by the end of 2018, five out of six organisations will be using at least a minimal level of IoT technology within their businesses.
By Ian Kilpatrick, EVP Cyber Security, Nuvias Group.
The influx of connected devices onto a company’s network literally creates tens, or even hundreds, of new unsecured entry points for cybercriminals. But many companies are turning a blind eye to this, swayed by the potential benefits that IoT can bring their business.
So here are some facts for consideration, before taking the leap into IoT, including a look at the short and medium term consequences of deploying a wave of unsecured devices to your network.
1. IoT - a cybercriminal’s dream
Any device or sensor with an IP address connected to a corporate network is an entry point for hackers and other cybercriminals - the equivalent of an organisation leaving its front door wide open for thieves.
Managing endpoints within an organisation is already a challenge; a 2017 survey showed 63 percent of IT service providers have seen a 50 percent increase in the number of endpoints they’re managing, compared to the previous year.
IoT will usher in a raft of new network-connected devices that threaten to overwhelm the IT department charged with securing them – a thankless task considering the lack of basic safeguards in place on the devices.
Of particular concern is that many IoT devices are not designed to be secured or updated after deployment. This means that any vulnerabilities discovered post-deployment cannot be protected against in the device; and corrupted devices cannot be cleansed. In an environment with hundreds or thousands of insecure or corrupted devices, this can raise huge operational and security challenges.
2. IT or OT
IT professionals are more used to securing PCs, laptops and other devices, but they will now be expected to become experts in smart lighting, heating and air conditioning systems, not to mention security cameras and integrated facilities management systems.
A lack of experience in managing this Operational Technology (OT), rather than IT, should be a cause for concern. It is seen as operational rather than strategic, and deployment and management are often shifted well away from Board awareness and oversight.
And that’s barely touching the visible surface. Machine-to-machine (M2M) technology is already transforming and will continue to transform businesses.
Many AI applications depend on IoT - for example transportation and logistics are being changed by it. These developments can and will impact most organisations.
Nevertheless, the majority of organisations are deploying IoT technology not only with a lack of strategic direction, but with minimal regard to the risk profile or the tactical requirements needed to secure the devices against unforeseen consequences. These include not just security requirements, but also business continuity challenges.
3. Increase in DDoS attacks
DDoS (Distributed Denial of Service) attacks are on the rise. In the UK alone, 41 percent of organisations say they have experienced a DDoS attack.
IoT devices are a perfect vehicle for criminals to use to access a company’s network. In fact, 2016’s high-profile Mirai attack used IoT devices to mount wide-scale DDoS attacks that disrupted internet service for more than 900,000 Deutsche Telekom customers in Germany, and infected almost 2,400 TalkTalk routers in the UK.
4. …and ransomware attacks
Elsewhere, there has been an almost 2000 percent jump in ransomware detections since 2015. Ransomware became a public talking point in 2017 when WannaCry targeted more than 200,000 computers across 150 countries, with damages ranging from hundreds of millions to billions of dollars.
While most ransomware attacks currently infiltrate an organisation via email, IoT presents a new delivery system for both mass and targeted attacks. Consider the potentially life-threatening impact of ransomware on smart devices within critical applications - the ability of criminals to shut down critical business and logistics systems has already been repeatedly demonstrated. So perhaps it is unsurprising that a 2017 survey found that almost half of small businesses questioned would pay a ransom on IoT devices to reclaim their data.
5. Increasing intensity and sophistication of attacks
The sophistication of attacks targeting organisations is accelerating at an unprecedented rate, with criminals leveraging the significantly expanded and expanding attack surface created by IoT for new disruptive opportunities.
According to Fortinet’s latest Quarterly Threat Landscape report, three of the top twenty attacks identified in Q4 2017 were IoT botnets. But it says unlike previous attacks, which focused on exploiting a single vulnerability, new IoT botnets such as Reaper and Hajime target multiple vulnerabilities simultaneously, which is much harder to combat.
Wi-Fi cameras were targeted by criminals, with more than four times the number of exploit attempts detected compared with Q3 2017. The challenge is that none of these detections is associated with a known security threat, which Fortinet rightly describes as “one of the more troubling aspects of the myriad of vulnerable devices that make up the IoT.”
6. The effects of an attack
The aftermath of a cyberattack can be devastating for any company, leading to huge financial losses, compounded by regulatory fines for data breaches, and plummeting market share or job losses. At the very least, a company could suffer lasting reputational damage and a loss of customer loyalty.
On top of that, IoT devices have the potential to create organisational and infrastructure risks, and even pose a threat to human life, if they are attacked. We have already seen nation-state attack tools leak out and be reused in commercial criminal activity. While the core focus is on defending critical infrastructure – and that is still far behind the curve – weak business infrastructure is a much softer target.
7. Profit over security
It’s crazy to think that devices with the potential to enable so much damage to homes, businesses and even entire cities often lack basic security design, implementation and testing. In the main this is because device manufacturers are pushing through their products to get them to market as quickly as possible, to cash in on the current buzz around IoT.
F-Secure, in its Pinning Down the IoT report, says other factors include the small size of the chips being used for cost-saving reasons, and the fact that devices ship with the manufacturer’s default password settings – often four zeros or 1234 – which are well known to criminals.
Lawrence Munro, vice president SpiderLabs at Trustwave agrees IoT manufacturers are sidestepping security fundamentals as they rush to bring products to market: “We are seeing lack of familiarity with secure coding concepts resulting in vulnerabilities, some of them a decade old, incorporated into final designs,” he notes.
“If consumers aren’t demanding security, manufacturers will never prioritise it,” says the F-Secure report. “But given the extraordinary dependency society is likely to develop on billions of IoT devices, governments may have to step in to demand security requirements.”
8. Can you see the problem?
Another huge problem is that once a network is attacked, it’s much easier for subsequent attacks to occur.
Yet, recent data shows just half of IT decision makers feel confident they have full visibility and control of all devices with network access. The same percentage believe they have full visibility of the access level of all third parties, who frequently have access to networks, and 54 percent say they have full visibility and control of all employees.
This is a worrying lack of confidence in network visibility and should be a concern for organisations. Yet, the same figures show basic security measures like network segmentation are only being planned by 24 percent of businesses in 2018. Without network segmentation, malware entering a network will often be left to spread.
Elsewhere, less than half of organisations have formal patching policies and procedures in place, and only about a third patch their IoT devices within 24 hours after a fix becomes available.
But because updating IoT devices is, by nature, more challenging, many remain vulnerable even after patches are issued, so organisations need to properly document and test each IoT device on their network.
9. Turning a blind eye
Both consumers and manufacturers seem to be burying their heads in the sand when it comes to IoT security.
Despite security concerns often being cited as the number one barrier to greater IoT adoption, Trustwave research shows 61 percent of firms that have deployed some level of IoT technology have had to deal with a security incident related to IoT, and 55 percent believe an attack will occur sometime during the next two years. Only 28 percent of organisations surveyed consider their IoT security strategy ‘very important’ when compared to other cybersecurity priorities.
More worrying is that more than a third believe that IoT security is only ‘somewhat’ or ‘not’ important!
Some more troublesome stats - fewer than half of organisations consistently assess the IoT security risk posed by third-party partners, another 34 percent do so only periodically, and 19 percent don’t perform third-party IoT risk assessment at all.
10. Efforts to standardise
These security concerns can obviously paint the adoption of IoT in a negative light. But is there anything being done to mitigate these risks?
In the UK, the government’s five-year National Cyber Security Programme (NCSP) is looking to work with the IT industry to build security into IoT devices through its ‘Secure by Default’ initiative.
The group published a review earlier this month that addresses key risks related to consumer IoT and proposes a draft Code of Practice for IoT manufacturers and developers.
Recommendations include: ensuring that IoT devices do not contain default passwords; defining and implementing a vulnerability disclosure policy; ensuring software for devices is regularly updated; and a proposal for a voluntary labelling scheme.
While there seems to be some light at the end of the tunnel, it may not be enough: regulators are unlikely to force device manufacturers to introduce the necessary security measures and practices before thousands of businesses fall victim to attacks. Turning a blind eye to IoT security risks could leave your organisation permanently paralysed.
With data centres increasingly vital to modern business, the cost of a power failure can be enormous. Protecting a company’s investment requires the latest technology.
By François Danet, Li-ion Business Development Manager – Data Center, Saft.
For 75,000 passengers stranded around the world by a British Airways IT problem in May 2017, the cause of the disruption probably didn't matter much. They just wanted to reach their destination. But for the airline, it was vitally important to discover what lay behind the outage that resulted in the cancellation of 726 flights, a loss of $108m and untold damage to the brand’s reputation with customers and investors. The problem was traced to one of the airline’s data centers.
In August 2016, US carrier Delta Air Lines had a similar problem. Its power outage, which took three days to resolve, led to the cancellation of 2,300 flights and cost $150m. In September 2016, an outage at the Global Switch 2 (GS2) data center in London lasted for less than a second but still took one customer offline – GS2 didn’t say which industry it was in – for two days.
Whether for processing customer information, running e-commerce sites or providing employees with access to cloud computing, data centers are vital to modern businesses. Almost every function of the digital world is dependent on them.
The cost of downtime is estimated at $9,000 per minute and, as the GS2 example demonstrates, even a blink-and-you’ll-miss-it power problem can cause significant downtime. With the data-center market becoming ever more important to modern business and society, these risks will only be more acute in future.
Reliable backups are key
Data centers are power-hungry creatures. It’s estimated that worldwide they consume around 500 terawatt hours (TWh) per year – more than the UK’s annual electricity consumption – and account for around two percent of worldwide greenhouse gas emissions, roughly equivalent to the emissions of the airline industry. And much of that energy doesn’t go on powering the servers; it’s needed for cooling – both of the servers and of the power supply itself.
With all this in mind, it’s vital to have a reliable backup system in place to reduce the risk of outages. Power problems will happen, but a well-designed UPS will ensure seamless operation by drawing power from a battery until the backup generator is operating.
Power-supply technology and systems design may have moved on, but many data centers have not. Garcerán Rojas, chairman at PQC, a Spanish data-center engineering firm, argues that power outages in data centers are often caused by outdated design of backup systems.
One example of changing technology can be found in the kind of batteries used in the UPS. Most of the industry still uses lead-acid batteries, which are big and heavy and need a lot of cooling. Accommodating them affects everything from cooling costs to the amount of space the data center needs – and even whether the floor should be reinforced to support the weight.
However, the latest lithium-ion (Li-ion) batteries offer greater power density, so they can be up to three-times more compact and six-times lighter. They also last longer than lead acid – up to 20 years, compared with four to six years. And because they work in higher temperatures, they require less cooling. That lowers costs and frees up more space in the data center for servers.
Li-ion is more flexible, too. The smallest lead-acid option provides five minutes of backup power until the generator takes over. For many applications that’s more than necessary and thus a waste of money. Li-ion can cover smaller gaps.
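A back-of-the-envelope calculation (with entirely hypothetical figures, not Saft’s) shows why matching the bridge time to the actual generator start-up matters:

# Energy a UPS battery must hold = load x bridge time until the generator is up.
load_kw = 500                    # critical IT load (assumed)
for bridge_minutes in (5, 1):    # classic lead-acid sizing vs a shorter Li-ion gap
    energy_kwh = load_kw * bridge_minutes / 60
    print("%d min bridge at %d kW -> %.1f kWh of usable battery" % (bridge_minutes, load_kw, energy_kwh))
# 5 minutes needs roughly 41.7 kWh, 1 minute roughly 8.3 kWh: sizing for the
# shorter gap cuts the battery, and the space and cooling it needs, by around
# a factor of five.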
Batteries are getting smart
Another big benefit is that Li-ion batteries can be ‘smart’. It’s impossible to know when a lead-acid battery will fail. A data center might find out a failure has occurred only when backup power is needed, resulting in the kinds of problems that befell Delta and BA. Operators either have to accept that risk or invest in further redundancy.
Li-ion batteries, in contrast, are self-monitoring, so data-center operators are constantly aware of the battery’s health and don’t need to waste money replacing it too early. Saft’s Flex’ion Li-ion batteries are self-powered, meaning they can monitor their condition even without a power supply.
Of course, the battery is just one part of the UPS and the UPS is just one part of maintaining a reliable data center. However, as businesses rely on data centers more and more to support their digital operations, it’s vital that the underlying power system can be counted on in an emergency. The financial and reputational cost of failure is simply too high.