With the GDPR just over the horizon, and the TSB IT crash fresh in our minds, maybe it’s time for a refresh (hopefully not the sort of refresh that goes horribly wrong and causes chaos throughout an organisation!).
Anyhow, over the past few weeks, in my role as editor of various trade publications, I’ve received an intriguing selection of what I’ll describe as GDPR-related emails, all of them requesting my permission for the respective senders to continue sending me information after the 25th of May. I’ll hold my hand up at this point and confess to not having a complete understanding of the many nuances of the impending legislation, but I don’t recall reading anywhere that organisations require permission to contact me with something they have every reason to believe will be of interest to me. Think about it logically: the whole way of doing business would all but grind to a halt if the GDPR required that an organisation could only contact individuals who had already given it permission to do so. So, apparently, in order to deal with me, an organisation has to request permission to contact me (ignoring the absurdity that it has, of course, to contact me without my permission in order to ask whether it can contact me with my permission!), and once I have granted it, the organisation can then communicate with me.
I think that the GDPR is designed to ensure that, if someone contacts me without my permission (which is fine) and I ask them to leave me alone, then there is only a problem if they contact me again, having failed to remove me from one or more databases. Quite whether the threat of the GDPR will actually deter the irritating and often rude individuals who mail (yes, still mail), email or phone me persistently trying to sell me something remains to be seen. For now, at least, it seems that the GDPR has plenty of folks mightily confused as to what they can or can’t do in terms of contacting individuals.
Once this confusion has been overcome, I imagine that the major impact of the GDPR will be twofold. Firstly, the requirement to report a data breach within 72 hours could well mean that such events feature in the headlines on an almost daily basis (although there is some kind of fabulous get-out clause: a breach need not be revealed if it’s deemed to have no impact, which is a bit like the offside rule in football).
Secondly, individuals do have the right to be forgotten. How this dovetails with governments’ increasing need and demand to keep every scrap of information ever produced by and about every single citizen, just in case they pose a security threat, is not immediately obvious. For example: I’m a terrorist, I buy a bomb from bombmakers-r-us, and I then request that this organisation deletes every detail about me that it holds on file. Er, not very helpful for the law enforcers. Still, if I can be removed from the lists which I never asked to go on, but ensure that all manner of catalogues continue to arrive through the letterbox and in my email inbox, that’s a reason to celebrate.
Anyhow, I guess that all of the above ramblings and musings are designed to suggest that GDPR legislation fallout could become a very entertaining spectator sport over the next few months, as all manner of individuals and organisations (no doubt myself included) manage to completely misunderstand what is or isn’t required, and the legal profession gets (even) fatter as a result.
The worldwide public cloud services market is projected to grow 21.4 percent in 2018 to total $186.4 billion, up from $153.5 billion in 2017, according to Gartner, Inc.
The fastest-growing segment of the market is cloud system infrastructure services (infrastructure as a service or IaaS), which is forecast to grow 35.9 percent in 2018 to reach $40.8 billion (see Table 1).
Gartner expects the top 10 providers to account for nearly 70 percent of the IaaS market by 2021, up from 50 percent in 2016.
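The growth rates quoted in these forecasts are simple year-over-year percentage changes, and the headline figures can be sanity-checked from the revenue totals quoted above. A minimal sketch (illustrative arithmetic only, not Gartner's methodology; the helper name is made up):

```python
# Year-over-year growth implied by the revenue figures quoted above.
# Dollar amounts are taken from the press release text.
def yoy_growth(prior: float, current: float) -> float:
    """Year-over-year growth, expressed as a percentage."""
    return (current / prior - 1) * 100

# Worldwide public cloud revenue, $B: 153.5 (2017) -> 186.4 (2018)
print(round(yoy_growth(153.5, 186.4), 1))  # 21.4, matching the forecast
```

The same check works for the IaaS segment and the other figures that follow.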
"The increasing dominance of the hyperscale IaaS providers creates both enormous opportunities and challenges for end users and other market participants," said Sid Nag, research director at Gartner.
"While it enables efficiencies and cost benefits, organizations need to be cautious about IaaS providers potentially gaining unchecked influence over customers and the market. In response to multicloud adoption trends, organizations will increasingly demand a simpler way to move workloads, applications and data across cloud providers' IaaS offerings without penalties."
Table 1. Worldwide Public Cloud Service Revenue Forecast (Billions of U.S. Dollars). Segments covered: Cloud Business Process Services (BPaaS), Cloud Application Infrastructure Services (PaaS), Cloud Application Services (SaaS), Cloud Management and Security Services, and Cloud System Infrastructure Services (IaaS).
BPaaS = business process as a service; IaaS = infrastructure as a service; PaaS = platform as a service; SaaS = software as a service
Note: Totals may not add up due to rounding.
Source: Gartner (April 2018)
Software as a service (SaaS) remains the largest segment of the cloud market, with revenue expected to grow 22.2 percent to reach $73.6 billion in 2018. Gartner expects SaaS to reach 45 percent of total application software spending by 2021.
"In many areas, SaaS has become the preferred delivery model," said Mr. Nag. "Now SaaS users are increasingly demanding more purpose-built offerings engineered to deliver specific business outcomes."
Within the platform as a service (PaaS) category, the fastest-growing segment is database platform as a service (dbPaaS), expected to reach almost $10 billion by 2021. Hyperscale cloud providers are increasing the range of services they offer to include dbPaaS.
"Although these large vendors have different strengths, and customers generally feel comfortable that they will be able to meet their current and future needs, other dbPaaS offerings may be good choices for organizations looking to avoid lock-in," said Mr. Nag.
Although public cloud revenue is growing more strongly than initially forecast, Gartner still expects growth rates to stabilize from 2018 onward, reflecting the increasingly mainstream status and maturity that public cloud services will gain within a wider IT spending mix.
This forecast excludes cloud advertising, which was removed from Gartner's public cloud service forecast segments in 2017.
Worldwide IT spending is projected to total $3.7 trillion in 2018, an increase of 6.2 percent from 2017, according to the latest forecast by Gartner, Inc.
"Although global IT spending is forecast to grow 6.2 percent this year, the declining U.S. dollar has caused currency tailwinds, which are the main reason for this strong growth," said John-David Lovelock, research vice president at Gartner. "This is the highest annual growth rate that Gartner has forecast since 2007 and would be a sign of a new cycle of IT growth. However, spending on IT around the world is growing at expected levels and is in line with expected global economic growth. Through 2018 and 2019, the U.S. dollar is expected to trend stronger while enduring tremendous volatility due to the uncertain political environment, the North American Free Trade Agreement renegotiation and the potential for trade wars."
Enterprise software spending is forecast to experience the highest growth in 2018 with an 11.1 percent increase (see Table 1). Barring unexpected disruption, the software industry is expected to continue capitalizing on the evolution of digital business. Application software spending is expected to continue to rise through 2019, and infrastructure software will also continue to grow, bolstered by modernization initiatives.
Table 1. Worldwide IT Spending Forecast (Billions of U.S. Dollars), showing spending and growth (%) through 2019 by segment, including Data Center Systems.
Source: Gartner (April 2018)
Even with a strong end to 2017, worldwide spending on data center systems is forecast to grow 3.7 percent in 2018, down from 6.3 percent growth in 2017. The longer-term outlook continues to have challenges, particularly for the storage segment. The strength at the end of 2017 was primarily driven by the shortage of memory components, whose prices have increased at a greater rate than previously expected. Whereas those shortages were previously expected to ease into 2018, they are now expected to continue throughout the year, with supply not expected to ease until the end of the year.
Worldwide spending for devices — PCs, tablets and mobile phones — is forecast to grow in 2018, reaching $706 billion, an increase of 6.6 percent from 2017. "The device market continues to see dual dynamics. Some users are holding back from buying, and those that are buying are doing so, on average, at higher price points," said Mr. Lovelock. "As a result, end-user spending will increase faster than units through 2022. However, total end-user spending and unit shipments are expected to be lower compared with previous forecasts, as demand for ultramobile premium devices, ultramobile utility devices and basic phones is expected to be slow."
According to a new forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure products (server, enterprise storage, and Ethernet switches) for deployment in cloud environments is expected to total $52.3 billion in 2018 with year-over-year growth of 10.9%. Public cloud datacenters will account for a majority of this spending, 65.9%, growing at the fastest annual rate of 11.3%. Off-premises private cloud environments will represent 13.0% of cloud IT infrastructure spending, growing at 12.0% year over year. On-premises private clouds will account for 61.7% of spending on private cloud IT infrastructure and will grow 9.1% year over year in 2018.
Worldwide spending on traditional, non-cloud, IT infrastructure is expected to decline by 2.0% in 2018 but nevertheless will account for the majority, 54.7%, of total end user spending on IT infrastructure products across the three product segments, down from 57.8% in 2017. This represents a faster share loss than in the previous three years. The growing share of cloud environments in overall spending on IT infrastructure is common across all regions.
In cloud IT environments, spending in all technology segments, except for storage platforms, is forecast to grow at double digit rates in 2018. Ethernet switches and compute platforms will be the fastest growing at 20.9% and 12.4%, respectively, while spending on storage platforms will grow 6.0%. Investments in all three technologies will increase across all cloud deployment models – public cloud, private cloud off-premises, and private cloud on-premises.
Long-term, IDC expects spending on off-premises cloud IT infrastructure will grow at a five-year compound annual growth rate (CAGR) of 10.8%, reaching $55.7 billion in 2022. Public cloud datacenters will account for 83.6% of this amount, growing at a 10.6% CAGR, while spending on off-premises private cloud infrastructure will increase at a CAGR of 11.4%. Combined with on-premises private cloud, overall spending on cloud IT infrastructure will grow at a 10.9% CAGR and by 2022 will surpass spending on non-cloud IT infrastructure. Spending on on-premises private cloud IT infrastructure will grow at an 11.5% CAGR, while spending on non-cloud IT (on-premises and off-premises combined) will decline at a 2.7% CAGR during the same period.
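A compound annual growth rate compounds year on year rather than adding up, so a 10.8% CAGR over five years implies growth of about 67% in total. A quick sketch using the standard CAGR formula (the helper names are illustrative, not part of IDC's model):

```python
# Standard CAGR arithmetic, for illustration only.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, as a fraction."""
    return (end / start) ** (1 / years) - 1

def project(start: float, rate: float, years: int) -> float:
    """Value after compounding `rate` for `years` years."""
    return start * (1 + rate) ** years

# Backing out the base year implied by IDC's 2022 off-premises
# figure of $55.7B at a 10.8% five-year CAGR:
implied_base = 55.7 / (1 + 0.108) ** 5  # roughly $33.4B
```

Note that because growth compounds, the cloud and non-cloud trajectories (roughly +10.9% versus -2.7% per year) diverge faster than a straight-line comparison would suggest.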
"The growing expansion of digital transformation initiatives enables further adoption of cloud-based solutions around the globe. This will result in a continuous shift in the profile of IT infrastructure buyers. SaaS, PaaS, and IaaS offerings address a broad range of business and IT needs of enterprises, from 'lift-and-shift' to emerging workloads. As a result, service providers' demand for IT infrastructure for delivering these offerings is growing steadily, making them, as a group, a major buyer of compute, storage, and networking products," said Natalya Yezhkova, research director, Enterprise Storage.
IT spending from line-of-business (LOB) will reach $191 billion in Western Europe in 2018, a 5.7% increase over 2017, according to International Data Corporation's (IDC) latest update of the Worldwide Semiannual IT Spending Guide: Line of Business. IDC forecasts that Western European business-funded spending will grow at a 5.5% compound annual growth rate (CAGR) between 2016 and 2021. In comparison, IT-funded technology spending will grow at a slower pace, with a 2.1% five-year CAGR — meaning that business managers will invest more aggressively over the years. While IT departments will remain the biggest funding source in 2018, funding 52% of technology investments, LOBs will catch up by 2021 and will fund 50% of the overall spending in Western Europe ($222 billion).
In 2018, IT departments will continue to fund most of the software, hardware, and IT services, while LOBs will be the primary funding source for business services. Business-funded spending on software will grow fast, driven by stronger investments in cloud software, but IT organizations will remain the top funding source in 2018 in Western Europe. Looking at hardware, IT spending on network equipment, IaaS, and server/storage will continue to be funded more by IT departments. On the other hand, there are hardware solutions that business managers control more, such as mobile phones, PCs, tablets, and peripherals.
"Business managers prefer to choose the IT solutions they need daily, to share content or access information, and the devices they can use in mobility. IT departments do not necessarily understand what LOBs need, consequently shadow IT is a way through which tech-savvy employees adopt solutions independently and fill the gaps that IT managers are not able to address," said Andrea Minonne, research analyst, IDC Customer Insights and Analysis.
IDC expects business-funded IT spending in 2018 to be larger than IT-funded spending in discrete manufacturing, healthcare, securities and investment services, and telecommunications. By 2021, banking, insurance, and utilities will also see LOB investments move ahead of IT purchases. The industries with the fastest growth in LOB spending are professional services (6.2% CAGR), retail (6.0%), and process manufacturing (6.0%). LOB IT spending is forecast to grow faster than that of the IT organization in all 16 industries covered in the spending guide.
According to the International Data Corporation (IDC) Worldwide Quarterly Converged Systems Tracker, worldwide converged systems market revenue increased 9.1% year over year to $3.6 billion during the fourth quarter of 2017 (4Q17). Full-year sales surpassed $12.5 billion, representing a 9.4% increase over the previous year and the first time the market surpassed $12 billion in a calendar year.
"The number of organizations deploying converged systems continued to expand through 2017," said Eric Sheppard, research vice president, Enterprise Servers and Storage. "This drove the total market value past $12.5 billion for the year. While not all market segments increased during the year, those that did grow were able to provide considerable benefits related to the most core infrastructure challenges facing today's data centers."
Converged Systems Segments
IDC's converged systems market view offers three segments: certified reference systems & integrated infrastructure, integrated platforms, and hyperconverged systems. Certified reference systems & integrated infrastructure are pre-integrated, vendor-certified systems containing server hardware, disk storage systems, networking equipment, and basic element/systems management software. Integrated platforms are integrated systems sold with additional pre-integrated packaged software and customized system engineering, optimized to enable such functions as application development, databases, testing, and integration tools. Hyperconverged systems collapse core storage and compute functionality into a single, highly virtualized solution. A key characteristic that differentiates hyperconverged systems from other integrated systems is their scale-out architecture and their ability to provide all compute and storage functions through the same x86 server-based resources. Market values for all three segments include hardware and software but exclude services and support.
The certified reference systems & integrated infrastructure market generated $1.7 billion in revenue during the fourth quarter, which represents a 3.4% year-over-year decline and 47.1% of the total converged systems market value. Dell Inc. was the largest supplier in this market segment with $735.0 million in sales and a 42.9% share. Cisco/NetApp generated $565.6 million in sales, which was the second largest share at 33.0%. HPE generated $289.3 million in sales and captured 16.9% market share.
Top 3 Companies, Worldwide Certified Reference Systems & Integrated Infrastructure, Q4 2017 (Revenues are in $Millions), showing 4Q17 and 4Q16 market share and 4Q17/4Q16 revenue growth, led by Dell Inc.*
Source: IDC Worldwide Quarterly Converged Systems Tracker, April 3, 2018
* Note: Dell Inc. represents the combined revenues for Dell and EMC sales for all quarters shown.
Revenue from hyperconverged systems sales grew 69.4% year over year to $1.25 billion during the fourth quarter of 2017. This amounted to 34.3% of the total converged systems market. Full-year sales of hyperconverged systems surpassed $3.7 billion in 2017, up 64.3% from 2016.
IDC offers two ways to rank technology suppliers within the hyperconverged systems market: by the brand of the hyperconverged solution or by the owner of the software providing the core hyperconverged capabilities. Rankings based on a branded view of the market can be found in the second table of this press release and rankings based on the owner of the hyperconverged software can be found in the third table within this press release. Both tables include all the same software and hardware, summing to the same market size.
In the branded solution view, Dell Inc. was the largest supplier in this market segment with $346.8 million in revenue and a 27.8% share. Nutanix generated $243.0 million in revenue with the second largest share of 19.5%. HPE and Cisco were statistically tied** for the quarter, with $61.6 million and $56.3 million in revenue and 4.9% and 4.5% market share, respectively.
Top 3 Companies, Worldwide Hyperconverged Systems, Based on Brand of HCI Solution, Q4 2017 (Revenues are in $Millions), showing 4Q17 and 4Q16 market share and 4Q17/4Q16 revenue growth, led by Dell Inc.*
Source: IDC Worldwide Quarterly Converged Systems Tracker, April 3, 2018
* – Dell Inc. represents the combined revenues for Dell and EMC sales for all quarters shown.
** – IDC declares a statistical tie in the worldwide converged systems market when there is a difference of one percent or less in the revenue share of two or more vendors.
On the basis of HCI software, systems running VMware's hyperconverged software represented $405.1 million in fourth quarter vendor revenue, or 32.4% of the total market segment. Systems running Nutanix's hyperconverged software represented $368.4 million in fourth quarter vendor revenue, or 29.5% of the total market segment. Both values represent all software and hardware, regardless of how it was ultimately branded.
Top 4 Companies, Worldwide Hyperconverged Systems, Based on Owner of HCI Software, Q4 2017 (Revenues are in $Millions), showing 4Q17 and 4Q16 market share and 4Q17/4Q16 revenue growth, with Dell Inc.* ranked third.
Source: IDC Worldwide Quarterly Converged Systems Tracker, April 3, 2018
Survey data shows transformed companies are 22x more likely to get new products and services to market ahead of the competition.
Dell EMC has published the results of new research conducted by Enterprise Strategy Group (ESG) into the benefits of IT Transformation which validates that IT Transformation can result in bottom-line benefits that drive business differentiation, innovation and growth.
Today’s business landscape is rife with disruption, much of it driven by organisations using technology in new or innovative ways. In order to survive and thrive in today’s digital world, businesses are implementing new technologies, processes and skillsets to best address changing customer needs. A fundamental first step in this change is transforming IT, to help organisations bring products to market faster, remain competitive and drive innovation. According to ESG’s 2018 IT Transformation Maturity Study, commissioned by Dell EMC and Intel:
81 percent of survey respondents agree that if they do not embrace IT Transformation, their organisation will no longer be competitive in their markets, up from 71 percent in 2017.
88 percent of respondents say their organisation is under pressure to deliver new products and services at an increasing rate.
Transformed organisations are 22x as likely to be ahead of the competition when bringing new products and services to market.
Transformed organisations are 2.5x more likely to believe they are in a strong position to compete and succeed in their markets over the next few years.
Transformed companies are 18x more likely to make better and faster data-driven decisions than their competition and are 2x as likely to exceed their revenue goals.
“Data is the new competitive edge – yet it’s become highly distributed across the edge, the core data center and cloud. Organisations realise they have to move quickly to turn that data into business intelligence – requiring an end-to-end IT infrastructure that can manage, analyse, store and protect data everywhere it lives,” said Jeff Clarke, Vice Chairman, Products and Operations, Dell Technologies. “We’re in the business of better business outcomes, giving our customers the ability to make that end-to-end strategy a reality, driving disruptive innovation without the fear of being disrupted themselves.”
The ESG 2018 IT Transformation Maturity Study
The ESG 2018 IT Transformation Maturity Study follows the seminal study commissioned by Dell EMC, the ESG 2017 IT Transformation Maturity Study, and was designed to provide insight into the state of IT Transformation, the business benefits fully transformed companies experience, and the role critical technologies have in an IT Transformation. ESG employed a research-based, data-driven maturity model to identify different stages of IT Transformation progress and determine the degree to which global organisations have achieved those different stages, based on their responses to questions about their organisations’ adoption of modernised data center technologies, automated IT processes and transformed organisational dynamics.
“Companies today need to be agile to stay competitive and drive growth, and IT Transformation can be a major enabler of that,” said John McKnight, Vice President of Research, Enterprise Strategy Group. “It’s clear that IT Transformation is increasingly resonating with companies and that senior executives recognise how IT Transformation is pivotal to overall business strategy and competitiveness. While achieving transformation can be a major endeavour, our research shows ‘Transformed’ companies experience real business results, including being more likely to be ahead of the competition in bringing new products and services to market, making better, faster data-driven decisions than their competition, and exceeding their revenue goals.”
This year’s 4,000 participating organisations were segmented into the same IT Transformation maturity stages:
Stage 1 – Legacy (6 percent): Falls short on many – if not all – of the dimensions of IT Transformation in the ESG study.
Stage 2 – Emerging (45 percent): Showing progress in IT Transformation but having minimal deployment of modern data center technologies.
Stage 3 – Evolving (43 percent): Showing commitment to IT Transformation and having a moderate deployment of modern data center technologies and IT delivery methods.
Stage 4 – Transformed (6 percent): Furthest along in IT Transformation initiatives.
This year’s findings show organisations are progressing in IT maturity and generally believe transformation is a strategic imperative.
96 percent of respondents said they have Digital Transformation initiatives underway – either at the planning stage, at the beginning of implementation, in process, or mature.
Respondents whose organisations have achieved Transformed status are 16x more likely to have mature Digital Transformation projects underway versus Legacy companies (66 percent compared with 4 percent).
Transformed organisations were more than 2x as likely to have exceeded their revenue targets in the past year compared with Legacy organisations (94 percent compared to 44 percent).
84 percent of respondents with mature Digital Transformation initiatives underway said they were in a strong or very strong position to compete and succeed.
IT Transformation maturity can accelerate innovation, drive growth, increase IT efficiency and reduce cost. More specifically:
Transformed organisations are able to reallocate 17 percent more of their IT budget toward innovation. They complete 3x more IT projects ahead of schedule and are 10x more likely to deploy the majority of their applications ahead of schedule. Transformed organisations also report they complete 14 percent more IT projects under budget and spend 31 percent less on business-critical applications.
Making IT Transformation and Digital Transformation Real
Organisations like Texas-based Rio Grande Pacific understand IT Transformation benefits first-hand. The company has branched from a railroad holding company – moving and physically handling railcars – into a provider of technology services for other short line railroads and commuter operations. Rio Grande Pacific pursued IT Transformation to support its aggressive growth. By modernising its data center, the company has increased speed of services tenfold, experienced a 93 percent reduction in data center electricity use, significantly improved rack performance and provisioning time, and created a new business – the “RIOT” domain or Railway Internet of Things.
“As part of a 150-year-old industry, we recognise that the future of rail is tied to technology,” said Jason Brown, CIO, Rio Grande Pacific. “Railroads are in need of real-time information in order to make rapid decisions. Combining several systems into one single dashboard through our RIOT domain provides a holistic view to customers and helps keep the trains running on time. These new services, using the most modern technology, set Rio Grande Pacific apart from the competition and have led to strong growth.”
Bank Leumi, Israel's oldest and leading banking corporation, is also experiencing the benefits of IT Transformation, bringing to life its mobile-only bank, Pepper. The organisation set out to create a platform that provides customers with a better experience, engages them quicker and reaches a new generation of clients. In order to do this, the company needed a faster, more flexible infrastructure and began leveraging a hybrid cloud model and software-defined data center. This has allowed them to move code from development to production within hours, compared to weeks, establish new environments faster and do this at less cost. This has helped them to bring a new, innovative product to market.
“We are in the midst of an era of digital disruption, where customer demands and expectations are changing rapidly,” said Ilan Buganim, Chief Technology and Chief Data Officer, Bank Leumi. “We as a bank need to adapt ourselves and continue providing a superior customer experience. We saw the opportunity to do this with our new mobile-only bank, ‘Pepper.’ Moving to a hybrid cloud model and a software-defined data center environment provided the infrastructure needed for real-time banking, with the ability to run fast and to shortcut the time to deliver new functionalities - thus making this new customer experience possible.”
It is an exciting time to be a part of the global data center industry—the world's ever-increasing demand for data will continue to drive growth exponentially.
In 2017 alone, global M&A transaction values in the sector exceeded US$25 billion. A new report by global law firm White & Case examines how data centers have established themselves as a core member of the ‘alternatives’ real estate sector, alongside logistics, self-storage, student housing, healthcare, and hotels and leisure.
The report is authored by partner James Dodsworth, who leads the Firm’s real estate practice worldwide. James discusses how, for the first time, European governing bodies are beginning to direct and shape the future landscape of the data center industry, following Equinix’s acquisition of UK-listed operator TeleCity Group plc. The deal reportedly doubled Equinix’s European offering, with the addition of 40+ data centers to its European portfolio. Whilst data localization laws that have been implemented by governments worldwide present various challenges to the data center industry, they also create significant opportunities for data center providers that are able to supply the services that enable their multinational clients to satisfy the requirements of these laws. Industry commentators certainly continue to expect the sector to grow.
James Dodsworth says: “There are very few asset classes today that have the global reach, growth trajectory and consequent investor appeal that data centers currently enjoy. In this increasingly connected world, the ability to manipulate large scale data is critical to business efficacy, development and success.”
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, the University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey. The programme is nearly finalised (full details via the website link at the end of the article), with some top-class speakers and chairpersons lined up to deliver what is probably 2018’s best opportunity to get up to speed with what’s heading to a data centre near you in the very near future!
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
We’re delighted to announce that Adam Beaumont, Visiting Professor of Cybersecurity at the University of Leeds, and CEO of aql, will be delivering the Simon Campbell-Whyte Memorial Lecture. Has IT security ever been so topical? What a great opportunity to hear one of the industry’s leading cybersecurity experts give his thoughts on the issues surrounding cybersecurity in and around the data centre.
We’re equally delighted to reveal that key personnel from Equinix, including MD Russell Poole, will be delivering the Hybrid Data Centre keynote address at both the Manchester and Surrey events. If Adam knows about cybersecurity, it’s fair to say that Equinix are no strangers to the data centre ecosystem, where the hybrid approach is gaining traction in so many different ways.
Completing the keynote line-up will be John Laban, European Representative of the Open Compute Project Foundation.
Alongside the keynote presentations, the one-day DCT events will include:
A DATA strand that features two workshops: one on Digital Business, chaired by Prof Ian Bitterlin of Critical Facilities, and one on Digital Skills, chaired by Steve Bowes-Phipps of PTS Consulting.
Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed and, hopefully, come up with some helpful solutions.
The CENTRES track features two workshops: one on Energy, chaired by James Kirkwood of Ekkosense, and one on Hybrid DC, chaired by Mark Seymour of Future Facilities.
Energy supply and cost remains a major part of the data centre management piece, and this track will look at the technology innovations that are impacting on the supply and use of energy within the data centre. Fewer and fewer organisations have a pure-play in-house data centre real estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So, in-house and third party data centre facilities, combined with a mixture of centralised, regional and very local sites, makes for a very new and challenging data centre landscape. As for connectivity – feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast moving world of networks, telecoms and the like.
The TRANSFORMATION strand features workshops on Automation (AI/IoT), chaired by Vanessa Moffat of Agile Momentum, and The Connected World, together with a keynote on Open Compute from John Laban, the European representative of the Open Compute Project Foundation.
Automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day to day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70-minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists - specialists and protagonists - in the subject. The workshops ensure that delegates not only earn valuable CPD accreditation points but also have an open forum to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
Vote Now for DCS Awards 2018 – online voting closes 11 May.
With thousands of votes already cast for this year’s DCS Awards, the competition is hotting up. Online voting stays open until 17.30 on Friday 11 May so make sure you don’t miss out on the opportunity to express your opinion on the companies, products and individuals that you believe deserve recognition as being the best in their field.
Voted for by the readership of the Digitalisation World portfolio of titles, the Data Centre Solutions (DCS) Awards reward the products, projects and solutions as well as honour companies, teams and individuals operating in the data centre arena.
Winners of this year’s 21 categories will be announced at a gala ceremony taking place at London’s Grange St Paul’s Hotel on 24 May.
All voting takes place online and voting rules apply. Make sure you place your votes by 11 May, when voting closes, by visiting: http://www.dcsawards.com/voting.php
The full 2018 shortlist is below:
Data Centre Energy Efficiency Project of the Year
Romonet supporting Fujitsu UK & Ireland
Riello UPS supporting the Rosebery Group
EcoRacks Data Centre supported by Asperitas
London One Data Centre by Kao Data
New Design/Build Data Centre Project of the Year
Inzai 2 by Colt Data Centre Services
University of Exeter supported by Keysource
Data Hub, Biel supported by Schneider Electric
Kao Data London One supported by JCA Engineering
Data Centre Consolidation/Upgrade/Refresh Project of the Year
EcoRacks supported by Asperitas
Willis Towers Watson supported by Keysource
Generator Control Panel Replacement by CBRE DC Solutions
Consolidation and expansion by Africa Data Centres
New data halls in Corsham and Farnborough by CBRE, ARK and Corning Optical Communications
Data Centre Fire Protection by Bryland Fire Protection Ltd
Data Centre Power Product of the Year
Liebert® APM 30-600 kW by Vertiv
Delta 500kVA UPS by Eltek Power
Integrated Terminal Lug Temperature Sensors by Starline UE
Micro Data Center by Optimum Data Cooling
Data Centre PDU Product of the Year
Intelligent Power Distribution Unit (iPDU) Family by Excel Networking Solutions
SmartZone G5 Intelligent PDUs by Panduit Europe
High Density Outlet Technology (HDOT) by Server Technology, a brand of Legrand
Data Centre Cooling Product of the Year
Liebert® PCW High Chilled Water Delta T by Vertiv
En-10 DX by Optimum Data Cooling
1U immersion cooled server by Iceotope Technologies
Oasis Indirect Evaporative Cooler by Munters
Data Centre Facilities Automation and Management Product of the Year
Nlyte 9.0 Data Center Infrastructure Management (DCIM) Solution by Nlyte Software
Micro Data Center by Optimum Data Cooling
Diris Digiware Power Metering and Monitoring System by Socomec
Data Centre Safety, Security & Fire Suppression Product of the Year
303 ECO SSF cabinet by Dataracks
IG55 Extinguishing System by Bryland Fire Protection Ltd
Data Centre Cabling & Connectivity Product of the Year
4K HDMI Single Display KVM over IP Extender by ATEN Technology
EDGE™ Mesh Modules by Corning Optical Communications
LABACUS INNOVATOR SOFTWARE & Fox-in-a-Box by Silver Fox Ltd
Data Centre ICT Storage Product of the Year
Anti-Ransomware Data Protection by Asigra
GridBank's Enterprise Data Management Platform by Tarmin
Computational Storage Solutions by Scaleflux Computational Storage
JovianDSS by Open E
Cohesity DataPlatform by Cohesity
StorPool Storage by StorPool
Data Centre ICT Security Product of the Year
Automated Endpoint Security and Incident Response by Secdo
Cloud Protection Manager by N2W Software
SecuStack by SecuNet Security Networks
Data Centre ICT Management Product of the Year
Tarmin GridBank by Tarmin
VirtualWisdom 5.4 by Virtual Instruments
Ipswitch WhatsUp Gold® 2017 Plus by Ipswitch
HC3 platform by Scale Computing
EcoStruxure IT by Schneider Electric
ParkView by Park Place Technologies
Data Centre Cabinets/Racks Product of the Year
Environ CL Series by Excel Networking Solutions
Knürr DCD Rear Door Heat Exchanger by Vertiv Integrated Systems GmbH
303 ECO SSF cabinet by Dataracks
HyperPod Rack Ready System by Schneider Electric
Data Centre ICT Networking Product of the Year
PORTrockIT by Bridgeworks
Unity EdgeConnect by Silver Peak
Secure Cloud-Native Networking by Meta Networks
Data Centre Hosting/co-location Supplier of the Year
Colt Data Centre Services
Volta Data Centres
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Excellence in Data Centre Services Award
Park Place Technologies
4D Data Centres Ltd
Data Centre Energy Efficiency Initiative of the Year
EU Horizon 2020 EURECA Project
Data Centre Innovation of the Year
Cloud Protection Manager by N2W Software
ParkView by Park Place Technologies
Green Peak – Dashboard by Green Mountain
HyperPod Rack Ready System by Schneider Electric
Data Centre Individual of the Year
Anuraag Saxena, Ekkosense
Konkorija Trifonova, CBRE
Ole Sten Volland, Green Mountain
Dan Kwach, East Africa Data Centre
Although GDPR is probably the best-known example, a wave of regulation and compliance legislation is being enacted across the world, and particularly in Europe, as regulators get to grips with the modern data economy. This can mean conflicting requirements in some territories, or confusing messages for customers and organisations.
The trend in previous years towards a reduction in regulation seems to have ground to a halt. And while the tone and mood of the new rules such as GDPR is seen to be persuasive and “nudging” by those at the senior levels of policy-making, their actual implementation could well see a “big stick approach” by local and national law-makers.
GDPR itself already promises swingeing fines and there is every chance that prominent and perhaps not-so-prominent companies and organisations may be made examples of with some headline-grabbing penalties. Suppliers of Managed Services will have many new responsibilities and may well find themselves in the firing line as the legal implications take their courses.
This is the main point behind the latest speaker announcement for the European Managed Services & Hosting Summit 2018, to be held in Amsterdam on May 29, 2018. A full session will be devoted to the issue of working across the rising tide of compliance in Europe. With all indications that many MSPs are looking to expand by partnering or acquiring operations in other geographies, this will be an essential item for discussion at senior levels. Any senior figure in a managed services company will need to be familiar with both the processes and implications of the new levels and nature of compliance requirements in any territory they are working in, and beyond.
GDPR is not the only game in town, says Ieva Andersone, a senior associate from Sorainen, a major legal firm in the IT industry, based in the Baltics, a region with a high degree of interest in pan-European business relations and one of the fastest growing regions in IT generally and in managed services adoption. Parts of Europe, even parts of countries, will have their own local rules or GDPR interpretations, she will argue, which managed services companies will need to be aware of, and which may well apply to IT projects with connections outside their core territory. As an experienced, Cambridge-educated lawyer working in multiple cultures and markets in Europe, her presentation discusses the nature of the regulations, their intentions and direction and how they may affect suppliers of services, including managed services, in unexpected ways.
With plenty of discussion points on how to keep the MSP business on the right side of the law, and with guidance as to strategies to adopt, the annual Managed Services and Hosting Summit (MSHS) on May 29 in Amsterdam always aims to use experts to advise European MSPs on these major issues. The first keynote presentation, from Gartner, will address the key issue of how MSPs can differentiate themselves in an increasingly competitive market.
This MSHS event offers multiple ways to get answers: from plenary-style presentations from experts in the field to demonstrations; from more detailed technical pitches to wide-ranging round-table discussions with questions from the floor. There is no excuse not to come away from this key event with ideas for a strategy to keep the business out of trouble.
One of the most valuable parts of the day, previous attendees have said, is the ability to discuss issues with others in similar situations, and attendees are all hoping to learn from direct experience.
In summary, this is a management-level event, held in English, designed to help MSP and channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships, while keeping up with the latest compliance and legal requirements in multiple markets.
Registration is free-of-charge for qualifying delegates - i.e. director/senior management level representatives of Managed Service Providers, Systems Integrators, Solution VARs and channels. More details: http://www.mshsummit.com/amsterdam/register.php
In this month’s DCA journal we will be focusing on data centre security, both physical and cyber. I’d like to address the issue of cyber security first.
By Steve Hone, CEO & Founder The DCA
I spotted a billboard on the tube the other day that claimed you were 40% more likely to be a victim of cybercrime than to have your house burgled. This claim was backed up by the Office for National Statistics, which has seen a steady year-on-year rise in reported cybercrime, with more than 6m incidents being reported each year.
This is far more than previously predicted and enough to nearly double the headline crime rate; it equates to more than 40% of all crimes committed in England and Wales.
Data centres represent a very attractive target for criminals. Even if someone manages to breach the perimeter defences, the data halls should be protected by a host of biometric security systems, man-traps and other security protocols, meaning physical access to the servers is in no way guaranteed. However, this assumes that the criminal has a crowbar, a swag bag and a balaclava. What happens if the attacker is not planning on abseiling across the rooftops and dropping in through the air duct? What if he or she can break into your facility and steal your data or plant a virus or launch a DDoS attack, all from the comfort of an armchair, without you even knowing about it?
According to a Cyberthreat report, business-focused cyber-attacks – including ones specifically targeting data centres – have increased by 144% in the past four years, and data centres have become the number one target of cyber criminals, hacktivists and state-sponsored attackers. Although physical security should remain a top priority for data centre operators, equal consideration needs to be given to the increasing threat posed by cyber-attacks, with the same level of due care and attention.
Although I personally do not profess to be an expert when it comes to cyber security, the good news is, as the saying goes, that “I know someone who is”. In fact, the DCA has access to lots of members who could help, so if you would like to speak to a specialist, the Trade Association can facilitate this for you.
That leads us nicely on to data centre physical security.
The aim of physical data centre security is to keep out the people you don’t want in your building or accessing your data. Simply put, if you are not on the list, you can’t come in. Assuming their name is on the list, it is equally important, once they are inside, to continue to keep an eye on them. If you discover that someone, be it a customer, contractor or even a staff member, is guilty or suspected of committing a security breach, identify them as soon as possible: containment of the situation is paramount.
Through the Data Centre Trade Association, you have access to a wealth of specialists and experts, and I would especially like to thank Datum, Southco, Chatsworth and EMKA, who have all submitted articles in this month’s edition of the DCA journal.
When looking at physical security for a new or existing data centre, it’s sensible to first take a few steps back and perform a risk assessment of the actual data and equipment that the facility will hold. Fully understanding the risks and potential breaches that could occur is essential; as is establishing the likelihood of such a breach taking place and the impact it could have on your business (be that reputational or financial). This type of drains-up assessment should be your first port of call when defining your physical security requirements and determining how far you need to go.
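A drains-up assessment of this kind can be captured in something as simple as a likelihood-times-impact matrix. The minimal sketch below is purely illustrative: the asset names and the 1-5 scoring scales are my own assumptions, not any formal standard.

```python
# Illustrative risk assessment sketch: likelihood x impact scoring.
# Asset names and the 1-5 scales are hypothetical examples, not a standard.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on a simple 1-5 x 1-5 matrix (higher = more urgent)."""
    return likelihood * impact

assets = [
    # (asset, likelihood of breach 1-5, business impact 1-5)
    ("Customer database servers", 3, 5),
    ("Backup tapes in storage room", 2, 4),
    ("Spare network switches", 2, 1),
]

# Rank assets so physical security spend targets the highest risks first.
ranked = sorted(assets, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: score {risk_score(likelihood, impact)}")
```

Even a toy ranking like this makes the "how far do you need to go" question concrete: the spend on fencing, man-traps and cameras should follow the scores, not the other way round.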
I have often heard it said that security of any facility “needs to be like an onion”, made up of multiple layers of security which emanate out from whatever it is you are trying to protect.
When we are talking in terms of a physical data centre what typically makes up the layers of a data centre onion?
Keep a low profile: Especially in a populated area, you don’t want to be advertising to everyone that you are running a data centre.
Avoid windows: There shouldn’t be windows directly onto the data floor.
Fencing: Granted, not always possible in a city location, which is where the avoid windows advice kicks in (see above); however, if you are going to have fencing, make sure it’s not just a token gesture. There are plenty of guidelines when it comes to security fencing, and the Trade Association can point you in the right direction in the form of fellow members who can offer guidance as required.
Limit entry points: Access to the building needs to be controlled. Think not just of the main entrance, fire exits and loading bays, but also roof access points.
Fire Exits: When it comes to those fire exits, make sure they are exactly that: “exit only” (and install alarms and monitoring on them, as they are often frequented by smokers popping out for a crafty one, who then politely hold the door open for a stranger to wander in). I’m not saying this could happen, by the way; I know it does happen.
Hinges on the inside: Exterior hinges make it far too easy for someone to pop the pins out to gain access. Sounds basic, but this is a common mistake I often see with repurposed buildings.
Tailgating: Following someone through a door before it closes, known as ‘tailgating’, is one of the main ways that an unauthorized visitor will gain access to your facility. By implementing man-traps that only allow one person through at a time, you force visitors to be identified before allowing access.
Smile, you are on camera :0) You can never have enough cameras - CCTV cameras are a very effective deterrent for an opportunist, as is proximity flood lighting. All footage should be stored digitally and archived offsite; and don’t forget the new GDPR rules apply to that footage too, by the way.
Access control: You need granular control over which visitors can access certain parts of your facility. The easiest way to do this is through proximity access card readers, biometrics or retinal scanners on the doors.
Pre-approval and Personal Identification: Many data centres operate on a pre-approval system whereby you give the DC advance warning that someone will be attending site, and normally this person will need to show some form of photo ID (driving licence, ID card or passport). Cast-iron rule: “no ID = no entry”, irrespective of how much they protest.
Compound entry control: Access to the facility compound, be that pedestrian or vehicle via a parking lot, needs to be strictly controlled, either with gated or turnstile entry that can be opened remotely by reception/security once the person or driver has been identified. Ram raiders don’t just target retail stores; metal bollards or large boulders can just as effectively act as a protective exterior layer to prevent a vehicle itself being used as a 15th-century battering ram.
Processes and training: This might sound out of place in a list of essentials, but all the security layers in the world will be worthless unless you have the processes and procedures documented and have your staff vetted and trained to prevent security breaches from happening; and this needs to include any third-party contract staff you employ.
You can never test enough: It’s only by regular testing and auditing of your security systems that any gaps will be identified before someone else can exploit them.
At the end of the day, both cyber and physical security considerations come down to managing risk, so make sure you carry out a regular risk assessment, try to think of data centre security like an onion, and remember: not all burglars wear balaclavas.
Thank you again to all the contributors in this month’s edition. Next month’s journal theme is Energy Efficiency, and by then I will also be able to report on Data Centres North in Manchester. The deadline for article submissions is 15th May.
Simon Williamson, Business Development Manager, Electronic Access Solutions, EMEIA (Southco)
In today’s world, data is fast becoming the new global currency and as data volumes continue to grow at an exponential rate, the issue of data security continues to cause concern within the industry. While there is widespread awareness of the many digital attacks that compromise data, less is said about the physical threats to information stored in data centres.
Despite extensive measures in place to secure the perimeter of a data centre, often the biggest threat to security can come from within. It is not uncommon for individuals entering these facilities to cause accidental security breaches. In fact, IBM Research states that 45 percent of breaches occur as a result of unauthorized access, costing over $400B annually.
This issue is particularly prevalent for colocation data centre providers, who host data cabinets for multiple clients. Each server cabinet should be secured at the rack level with access only granted to authorized personnel. Traditionally, access to individual racks has been protected by key-based systems with manual access management. In some instances, data centre managers have turned to a more advanced coded key system, but even this approach provides little in the way of security—and no record of who has accessed the data centre cabinets.
Electronic Access Solutions Enhance Physical Security
To alleviate the problem of unauthorized access and concerns surrounding data security, traditional security systems are quickly being replaced by intelligent electronic access solutions. Above all, these solutions provide a comprehensive locking facility while offering fully embedded monitoring and tracking capabilities. They are a vital element of a fully integrated access-control system, bringing reliable access management to the individual rack. The system also enables the creation of individual access credentials for different parts of the rack, all while eliminating the need for cages, thereby saving costs.
Any physical-security upgrade in the data centre has its issues, of course. Uninstalling existing security measures in favour of new ones costs both time and money, which is why data centre owners are turning toward more-intelligent security systems such as electronic-locking swinghandles, which can be integrated into new and existing secure server-rack facilities. They employ existing lock panel cut outs, eliminating the need for drilling and cutting. This approach allows for lock standardisation in the data centre, saving considerable time (and therefore cost)—something that holds real value given the pressing demand for data centre services.
Physical access to the rack can be obtained using an RFID card, PIN code combination, BLUETOOTH®, Near Field Communication (NFC) or biometric identification. The addition of a manual override key lock allows emergency access to the server cabinet. Even in the event that security needs to be overridden, an electronic access solution can still track the audit trail, monitoring time and rack activity. Solutions such as this have been designed to lead protection efforts against physical security breaches in data centres all over the world.
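As a rough illustration of the principle described above, a lock controller can combine credential checks with an always-on audit trail, so that even a manual key override leaves a record. The sketch below is hypothetical; the function names and data structure are my own, not any vendor’s API.

```python
# Hypothetical sketch of a rack-lock controller decision: every attempt,
# including manual key overrides, is appended to the audit trail.
from datetime import datetime, timezone

audit_trail = []  # in practice this would be exported to a central log

def log_attempt(rack: str, user: str, method: str, granted: bool) -> None:
    audit_trail.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "rack": rack, "user": user, "method": method, "granted": granted,
    })

def request_access(rack: str, user: str, card_ok: bool, pin_ok: bool) -> bool:
    """Dual-factor check: both the RFID card and the PIN must verify."""
    granted = card_ok and pin_ok
    log_attempt(rack, user, "card+pin", granted)
    return granted

def manual_override(rack: str, user: str) -> bool:
    """Emergency key override: access is granted but still audited."""
    log_attempt(rack, user, "key-override", True)
    return True

request_access("rack-07", "j.smith", card_ok=True, pin_ok=False)  # denied
manual_override("rack-07", "engineer-on-call")                    # audited
```

The point of the sketch is the design choice: the logging call sits inside every access path, so there is no route to the cabinet, electronic or mechanical, that bypasses the audit trail.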
By enhancing security at the individual rack level, providers can restrict rack access to only those with authority, which is especially relevant in colocated data centres, where data cabinets are under threat from both accidental and malicious breaches. Installing the right electronic access solution can help to eliminate costly breaches in a short time-frame and maximize colocation security.
For more information about Southco’s Data Centre security solutions visit www.racklevelsecurity.com.
By Luca Rozzoni, European Business Development Manager, Chatsworth Products (CPI)
Whilst security has always been a key consideration for the data centre industry, the upcoming EU General Data Protection Regulation (GDPR) – a strict set of regulations set to protect data privacy – means that data protection and security policies have taken on a new level of priority.
Regulatory and Compliance Requirements
The GDPR requirements will come into force on 25th May 2018 and affect organisations worldwide. Whilst EU countries must comply, any organisation collecting or processing data for individuals within the EU should also be developing their compliance strategy. The UK Government has indicated that, even taking Brexit into account, it will implement an equivalent set of legislation and UK organisations must review their security practices in regards to the protection of personal data and consider their own routes to compliance.
So How Should Data Centres Prepare?
Whilst organisations are expected to use their own judgment in regards to making sure they have taken the ‘appropriate technical and organisational measures’ to ensure compliance, Regulation (EU) 2016/679 stresses the need for secure IT networks, and provides an example of “preventing unauthorised access to electronic communications networks and malicious code distribution and stopping ‘denial of service’ attacks and damage to computer and electronic communication systems.” Put simply, whilst access control may seem an obvious part of any security policy, data centres must be able to demonstrate that they have the appropriate access policies in place.
Cabinet-level security has always been an important part of data centres’ data protection and security policies. Strict regulatory compliance requirements, such as HIPAA in health care and PCI DSS in online retail, demand audit logs of every access attempt as part of physical access control to help ensure data privacy and security. Automatic logging of cabinet access is also important, given that a large portion of attacks within these industries (58 percent in financial services and 71 percent in health care, to be more precise) are carried out by insiders, advertently or inadvertently, according to a 2017 report by IBM X-Force.
This makes sole reliance on mechanical keys ineffective at best and, at worst, a potential source of privacy-related lawsuits.
Electronic access control (EAC) solutions are essential in addressing user access management issues within the data centre and can be an extremely cost-effective method of delivering intelligent security and dual-factor authentication to the cabinet.
Key features to look out for when selecting an EAC solution include:
Dual-factor authentication enables data security to be taken to the next level. One of the most secure forms of physical access verification is biometric authentication. However, many organisations have dismissed this in the past due to cost, as it typically requires additional readers to be installed to every cabinet or facility door.
A cost-effective and secure dual-factor authentication solution is a fingerprint-activated card that is able to work with existing EAC or other card-activated locks. A card that is compatible with readers for 125 kHz, HID iCLASS and MIFARE® proximity cards and can work with existing campus security systems eliminates the need for expensive deployments and means data centre employees only need to carry a single card.
Remote Management and Reporting
Using a simple, user-friendly web interface to remotely manage the networked EAC locks allows the user to remotely monitor, manage and authorise each cabinet access attempt. Crucially, using this type of intuitive interface provides an audit trail for regulatory compliance through log reports. The logging report can be easily exported and emailed to the administrator.
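A compliance export of the kind described above amounts to little more than serialising the access log into a portable format. A minimal sketch follows; the field names and example records are assumptions for illustration, not any vendor’s actual log format.

```python
# Minimal sketch: export an EAC access log to CSV for a compliance audit.
# Field names and records are assumed, not any vendor's actual format.
import csv
import io

access_log = [
    {"time": "2018-05-01T09:14:00Z", "rack": "A-03", "user": "j.smith", "granted": True},
    {"time": "2018-05-01T22:41:00Z", "rack": "A-03", "user": "unknown", "granted": False},
]

def export_csv(records) -> str:
    """Serialise log records into CSV text, ready to email to the administrator."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["time", "rack", "user", "granted"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

report = export_csv(access_log)
```

Because the audit trail is already structured data, producing the report an auditor asks for becomes a one-line export rather than a manual transcription exercise.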
Managing the networked EAC locks through the web interface also reduces the need for wiring the electronic access systems to expensive security panels which are usually managed through Building Management Systems.
Data centres can realise dramatic savings in networking costs and deployment times through the ability to network several locks through IP consolidation. It is now perfectly feasible to choose a solution that will allow up to 32 EAC controllers (32 cabinets) to be networked under only one IP address.
Combining EAC with Environmental Monitoring
Choosing an EAC solution which offers added benefits, such as environmental monitoring, can ensure a much faster return on any initial investment, especially when you consider the savings which can be made by utilising one IP port for an appliance that offers both EAC and environmental monitoring.
There are solutions available which can monitor and manage both temperature and humidity through the same web interface, issuing proactive notifications to help data centre managers ensure service reliability by taking action before issues turn into downtime.
The infrastructure can be badly affected by water, dust and other harmful particles so it is worth looking for a solution which also has the capability to monitor and detect smoke, water and even motion.
As outlined in Regulation (EU) 2016/679, ‘rapid technological developments and globalisation have brought new challenges for the protection of personal data’ and ‘the scale of the collection and sharing of personal data has increased significantly’.
As a result, customers’ needs and expectations regarding privacy of data have increased, as has the sophistication of the threats now posed. Data centres must look for more powerful and effective methods of delivering peace of mind to the customer as well as compliance to new and emerging regulations and electronic access control (EAC) solutions are a key weapon in their arsenal. Fortunately, delivering intelligent security and dual-factor authentication to the cabinet is no longer out of reach for organisations needing to meet strict budgets.
Dr. Nigar Jebraeili, Research Assistant at the University of East London (UEL)
Today we see a small number of progressive (some would say maverick) fully implemented software-defined data centres (SDDC) in operation. While these front runners lead the way, right behind them is a huge number of enterprises caught in a vortex that will force them to adopt these new ways sooner than expected. This can be seen as a classic IT market development pattern.
Today, roughly 70% of IT organisations are involved with some form of server virtualisation. This automatically puts the users of such techniques in the wide top section of a funnel, and there is only one way for them to go: down the defined path!
As data centres evolve to embody the new generation of modern IT technologies, they are expected to offer the benefits of fully integrated solutions, provide pre-sales and post-sales services, allow for continuous modification of standard products, enable easy sourcing and so on, as they offer packages of ultimate reliability and flexibility. However, security is a decisive factor that remains the centre of attention for both parties: the users as well as the providers.
New figures suggest that each year businesses lose $400 billion to hackers! Hence, security remains amongst the main concerns of IT organisations. Choosing how to construct a private, public or indeed a hybrid cloud is probably amongst the most critical strategic decisions for IT leaders to make nowadays, and to a great degree it determines an enterprise’s flexibility, reliability and competitiveness.
Today we hear about a relatively new trend in data centres: the Software-Defined Data Centre (SDDC). I say “relatively” new, as the concept builds on containerisation techniques that have been around since the early 1980s. The SDDC’s agile platform not only enables IT organisations to keep pace with rapid business growth by virtualising workloads, networking and storage, but can also offer a level of security that was previously hard to achieve.
The question arises: what are the implications of SDDC for overall security in data centres? Here I will touch briefly on a few benefits of SDDC that can have a positive impact on overall security.
As the nature of SDDC implies, generally there are three main aspects of security to be identified:
1. Workload-specific security aspects;
2. Network-specific security aspects;
3. Storage-specific security aspects.
A single configuration mistake in a traditional data centre can render it completely dysfunctional. A major advantage of SDDC network security, by contrast, is the unified controller that is in charge of the various aspects of network functionality, including the security functions; policies can therefore be consolidated across the entire SDDC infrastructure.
One of SDDC’s main characteristics is that it allows for a high degree of automation, which minimises human intervention and error. Traditional data centres, by contrast, are inherently error-prone and dependent on brittle physical characteristics, even when using centralised management applications. This is particularly the case where there are recurring configuration tasks and various rules distributed across the infrastructure that need manual configuration.
SDDCs, on the other hand, consistently enforce policy-based automation that facilitates not only configuration tasks but, more importantly, swift reconfiguration to meet regulatory compliance. This results in higher levels of security, the ability to sustain the rapid changes demanded by today’s business environment, and a reduced risk of out-of-date security policies.
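The policy-based automation described above can be sketched in a few lines: security requirements are declared once as data, and an automated pass flags any workload, network or storage resource that has drifted from policy. This is a minimal illustrative sketch; all names and policy fields here are assumptions, not any particular SDDC product’s API.

```python
# Hypothetical sketch of policy-based automation in an SDDC:
# policy is declared once as data and applied uniformly, so a
# regulation change is a single edit followed by an automated re-sync.

POLICY = {
    "workload": {"encryption_at_rest": True},
    "network":  {"default_action": "deny", "allowed_ports": [443]},
    "storage":  {"encryption_at_rest": True, "replication": 2},
}

def compliant(resource_kind: str, config: dict) -> bool:
    """Check one resource's configuration against the declared policy."""
    required = POLICY[resource_kind]
    return all(config.get(key) == value for key, value in required.items())

def enforce(inventory: dict) -> list:
    """Return the resources that drifted from policy and need reconfiguring."""
    return [name for name, (kind, cfg) in inventory.items()
            if not compliant(kind, cfg)]

# Illustrative inventory: one compliant resource, one that has drifted.
inventory = {
    "web-01": ("network", {"default_action": "deny", "allowed_ports": [443]}),
    "db-01":  ("storage", {"encryption_at_rest": False, "replication": 2}),
}
print(enforce(inventory))  # → ['db-01']
```

The point of the sketch is the shape, not the detail: because the controller holds one authoritative policy, “reconfiguration” is a data change plus a re-run, rather than manual edits scattered across devices.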
Another advantage of workload security in SDDC is the switch from the traditional legacy security boundary to the SDDC’s functional boundary. This improves the visibility available to security software, making it possible to monitor data and workload behaviour patterns continuously, blocking and confronting threats immediately. The result is stronger, more reliable security than traditional legacy boundaries such as outdated Data Loss Prevention (DLP) and Intrusion Prevention System (IPS) designs, which mostly focus on protecting borders rather than context and flow. That said, adding a robust internal layer of security requires consistent policy enforcement, which is another affirmation of the importance of automation in SDDCs.
Last but not least, in order to benefit from SDDC’s security advantages it is critically important to ensure that the security models applied to the virtualised environment use SDDC-aware security tools, adapted and designed to meet the requirements of these virtualised, centralised and fast-paced environments.
The nature of the threats we face changes almost as fast as we develop systems, and thus the agility with which we respond is key to our protection. With all the advantages of the new SDDC paradigm, one is still left to find ways of assessing vulnerabilities. The key becomes evaluating the quality of automation and the ease of manipulating central control to adapt to the new regulations we wish to impose on the system. Perhaps now is the time to embrace a new vocation that can elevate the industry, by taking the initiative to nurture and train in a systematic way if we are to make the transition with less risk.
Lexie Gower, Datum Datacentres
GDPR – burgeoning cloud storage – cyber-attacks – ransomware. We live in a digital world with ever increasing digital threats and stringent digital regulations because we are our data. Everything we do as individuals and as corporations creates data and leaves a data trail. This vital information is our strength and our Achilles heel. People responsible for managing, processing and storing data are the gatekeepers to more than just interesting facts, they have the keys to our lives.
A recent survey on behalf of the Information Commissioner’s Office found that only one fifth of the UK public (20%) have trust and confidence in companies and organisations storing their personal information – and in the meantime, the world continues to be a dangerous place, both on the ground and in the cybersphere.
Organisations of all kinds are highly exercised working out ways to ensure they do not fall foul of the DPA or the GDPR, and there is a strong focus on ensuring corporate reputations and customer confidence are not flattened by the negative consequences of a data leak. One drastic solution could be to wipe all the data… but as the organisation would no longer be a viable functioning entity, it is unlikely to be very popular!
As with all exercises, actual solutions are a mix of people, processes and tools, all supported by responsibility and due diligence. When everything is held in-house, all necessary precautions can be taken, but at a probably untenable cost. Opting for “everything in the cloud” offers the flexibility and power we may want at a more controlled cost, but on the flip side, the security risks are undoubtedly harder to guarantee which may be unacceptable for some data. Hybrid solutions combine the cost advantages and flexibility of the cloud with the ability to apply more rigid safeguards where the information is critical.
Cybersecurity is not just air
Cyber security is the body of technologies, processes and practices designed to protect networks, computers, programs and data from attack, damage or unauthorised access. In a computing context, the topic of security includes both cybersecurity and physical security as, regardless of the solution pursued, at some stage the data will be stored, processed and managed in a physical entity on the ground. Whilst many sparkling brains and enormous investments go into developing the tools and apps that help to shield networks and data from unwanted intruders, the solid physical base needs more than a padlock on the door. Too many data centres treat security as a box-ticking exercise, whereas real confidence in physical security can only be justified where multiple layers of people, processes, tools and diligence are invested, implemented, and accredited.
Whilst all organisations are obligated to safeguard their data, those that hold particularly sensitive information are further compelled to ensure they do not slip up, perhaps because the threat to them is greater, or perhaps because of additional regulations or requirements. Whatever the driver, a data centre business that can attract and retain such organisations has to pay considerably more than lip service to the notion of security. Take Datum as a case in point. Built in a secure List-X park, with full perimeter security and CCTV, permanently guarded security gates and highly controlled access, the data centre itself provides further 24x7x365 manned security, CCTV inside and out, building entry controls and biometrically regulated data hall access. For those clients with even greater concerns, dedicated locked cages, and even cages within cages, are provided within the data hall. The overall business model and approach to security, and to service, has ensured that major national and international clients have audited and tested the security before, during and after moving their kit into the data centre.
Accreditations speak volumes
For other organisations who are seeking secure solutions, the tip is to select a data centre that is built and run in the way that you would run the data centre if it was yours. Taking a provider’s word for it is like entrusting your prized Bugatti to a teenager who says they know how to drive. Always ask someone who has let them borrow their prized possession first, and don’t let the keys out of your hand until you have seen both a driving licence and an Advanced Motoring certificate. Data centres can promise whatever you want to hear but client recommendations, third party accreditations and a tyre-kicking trip to site are vital.
By Andy Billingham, Managing Director – EMKA (UK) Ltd
In the area of industrial security, new demands drive a continuous development process in tandem with new materials and production technologies. These demands in turn affect the choice of materials and the design concept. In this respect the trend is toward increasing sophistication – it is no longer acceptable to open a control or data cabinet with a screwdriver if you don’t have the key! So where once a wing knob latch was sufficient, it is now important to consider the need for keylocks – perhaps to IP65 or even IP69 – and the option of vibration-resistant compression locks, which prevent nuisance door opening as well as providing more complete gasket pull-down and consistently higher IP sealing.
Compression lock technology tended to be confined to a limited range of applications until, a few years ago, it came out of patent protection; now a much wider market is finding it beneficial. EMKA has its own version of this technology and can bring its special expertise in design and manufacture to the provision of many compression latch variants.
The question is simply: is a compression facility beneficial, and where? Certainly the anti-vibration performance of these locks is excellent at preventing the opening of panels on trucks, railway rolling stock, gensets, and air conditioning, heating and ventilating systems. Increasingly, the answer is “yes”.
Compression locks/latches are valuable in environments where health and safety are critical, in that people must be protected from the equipment they are operating and sometimes even from themselves. This is often a high priority where parallel developments in hygiene regulations have led to more use of stainless steel and designs without cavities that would collect debris, and which are therefore more easily cleaned – a move which has also led to increased use of high degrees of IP sealing to resist frequent high-pressure washes. Here too, developments in manufacturing technologies have enabled stainless steel and engineering plastics to be produced more cheaply, more accurately and with smoother, more robust designs.
The ubiquitous ¼ turn latch lock changes incrementally as customers demand smooth, cavity-free designs suited to food processing plants and high sealing to withstand regular pressure washing.
Plastics are no longer in the dark ages and new generations of reinforced engineering grades enable tolerances to be reduced, leading to closer fitting, more robust assemblies which slide more easily with better operator feel and better sealing. These are now often the first port of call for corrosion resistant installations and so can frequently replace expensive metals.
Parallel developments continue elsewhere in enclosure hardware – it is amazing how usage has changed and how products have changed to meet those needs. For similar reasons – enhanced environmental requirements, cost and user friendliness – swinghandles are now produced with “O” rings and PUR seals giving excellent sealing for all applications. Glass reinforced polyamide was introduced as the industry developed slim, strong handle designs alongside stainless steel variants in AISI 304 or 316.
These reinforced machine-grade plastics are extremely capable – such that robust anti-vandal designs are possible in these and in zinc die-cast, sometimes in combination with other components – often complemented by low-profile escutcheons and inset handles for sealing and anti-tamper purposes.
We now have a variety of advanced mechanical solutions such as interchangeable lock cylinders which can be removed and replaced at any point in the installation process. Innovation with regard to mainstream control and equipment cabinets or enclosures is exemplified in the 1325 swinghandle design which takes modular flexibility to a new level in this market. Designed specifically for electrical/electronic cabinets the stylish 1325 enables lock selection even after installation with a complete range of inserts to match common industry requirements.
A significant feature here is this ability to swap the lock mechanism at any stage thus enabling flexibility at production and installation stages – even post installation. Along with a precision rod control system, the complete installation provides a quiet and robust operation resulting from optimal use of modern engineering plastics and manufacturing techniques.
On the question of sealing we can now source pre-cut, pre-assembled and vulcanised gasket, installation-ready without messy cutting and gluing which has significant positive implications for sealing levels. EMC gaskets are mainstream, while a major demand has been identified for fire protection and high temperature gaskets in EPDM and silicone.
With increasing use of technologically driven solutions in all fields of industry the need for basic mechanical security is expanding rapidly and the new biometric/digital/card based systems are finding their way “down” to levels where once a cheaper ¼ turn lock “would do”, but is no longer considered appropriate.
Simple electronically verified swinghandle based protection has been developed, along with networked systems which can be remotely monitored and authorised. The Agent E stand-alone wireless system is one approach for single or small numbers of cabinets – ideal for industrial controls.
In most high security locations security problems begin with the fact that keys and key cards can become separated from their authorised users. Any key or key card that is forgotten, lost, stolen, or otherwise separated from an authorised user represents a potential, undetected security breach, while the greater the number of keys and key cards in a given environment, the greater the possibility of unauthorised access to physical systems or data assets.
At the high tech end data has become extremely valuable so we have seen the need for 100% bullet-proof systems of monitoring, alarm and control leading to biometric technologies being applied at the cabinet door linked often to the internet via encrypted channels.
In these applications there is a vital need for accurate and reliable access logs and for deterrence of unauthorised ingress, vandalism and theft of data – and this is where the BioLock, using state-of-the-art fingerprint recognition in conjunction with PIN codes and RFID access cards, provides extremely high three-level security protection. This may be applied on an individual cabinet or on a designated block of cabinets with, for example, a group controller supplemented with separate cabinet release protocols. Multiple releases of separate panels on individual cabinets are catered for by means of linked ELock slave units. It is ideally suited to utilities, government and financial institutions.
In the world of high technology there are three key specs that distinguish the best products from the mediocre: Smarter, Faster and Simpler.
By Alastair Hartrup, Global CEO of Network Critical.
Over the last decade we have seen advancement from 1/10Gbps to 25Gbps to 40/100Gbps. These rapid advancements, while great for users, are challenging for network architects and managers who are tasked with monitoring, managing and protecting the networks and the information they move.
There are many specialised tools designed to monitor network data flows. Some of these tools are built for analysis and optimisation of network traffic. Some tools are designed to block malicious attacks and deter theft of confidential information. In either case, the job of these tools becomes exponentially more difficult as network speed increases. It is much more complex to open packets, read, analyse and take action when the data is moving at one hundred billion bits per second than when the data is moving at ten billion bits per second. The dramatic increase in network link speed has led to a corresponding increase in complexity and price of the tools being used to monitor and protect networks.
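The arithmetic behind this complexity is worth making concrete. A rough sketch, assuming minimum-size Ethernet frames (real traffic mixes vary): the per-packet time budget available to an inline tool shrinks linearly with link speed.

```python
# Per-packet time budget at different link speeds, assuming
# minimum-size Ethernet frames: 64 bytes of frame plus roughly
# 20 bytes of preamble and inter-frame gap on the wire.
FRAME_BITS = (64 + 20) * 8  # 672 bits per minimum-size packet

def per_packet_budget_ns(link_bps: float) -> float:
    """Nanoseconds available to process each packet at line rate."""
    packets_per_sec = link_bps / FRAME_BITS
    return 1e9 / packets_per_sec

for speed in (10e9, 100e9):
    print(f"{speed / 1e9:.0f}G: {per_packet_budget_ns(speed):.1f} ns per packet")
# 10G  → about 67 ns per packet
# 100G → under 7 ns per packet
```

Going from 10Gbps to 100Gbps cuts the inspection window by a factor of ten, which is why tool complexity and price escalate so sharply with link speed.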
Just as advances in materials, processes, design and software have helped make the IC industry more efficient, the same is happening in the ancillary products used to provide visibility to the tools that monitor and protect networks. As network tool speeds and prices escalate, products like the SmartNA-100 are providing new ways to deliver visibility to high-speed links using Smarter, lower cost, lower speed tools.
Newer software that can enhance chip capabilities is a must for those looking for Faster-acting ancillary products: devices that not only have the most advanced chip technology but also intuitive software that works in tandem with those chips. By utilising this powerful combination, the latest network data monitors provide visibility to 100G links with incredible flexibility, utilising both new and legacy management tools.
Remembering the sage advice of teachers and mentors: when faced with a large, complex problem or project, it is best to break it up into smaller, more manageable pieces. This advice works just as well for packet brokers, which increase the number of lower-speed ports available to tools. When a sophisticated load-balancing feature is deployed, a high-speed link can be accessed at up to 100Gbps and its traffic distributed across the multiple ports where the tools are connected. By splitting the high-speed traffic among as many ports as possible, it becomes far more manageable to monitor and a Simpler experience overall.
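The load-balancing idea above can be sketched in a few lines. The usual approach is to hash each flow’s 5-tuple so that every packet of a given flow lands on the same lower-speed tool port (preserving flow affinity for the analysis tools), while different flows spread evenly across the available ports. This is an illustrative sketch only; real packet brokers implement this in hardware.

```python
import hashlib

def tool_port(src_ip: str, dst_ip: str, src_port: int,
              dst_port: int, proto: str, n_ports: int) -> int:
    """Map a flow's 5-tuple to one of n_ports tool ports.

    Hashing the whole 5-tuple keeps all packets of a flow on one
    port, so each tool sees complete conversations.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_ports

# Every packet of this flow maps to the same port out of 8.
p1 = tool_port("10.0.0.1", "10.0.0.2", 40000, 443, "tcp", 8)
p2 = tool_port("10.0.0.1", "10.0.0.2", 40000, 443, "tcp", 8)
assert p1 == p2  # flow affinity preserved
```

With eight 10G/25G tool ports behind a 100G link, each tool only has to keep up with a fraction of the aggregate rate, which is the “Simpler” experience the text describes.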
When looking for the best traffic monitoring technology for enterprises, these three specs are the ones cybersecurity experts want to look for. When you have a product that is Smarter, Faster and Simpler to use than all the others out there you will find yourself with an easy to set-up device that is easy to use, cost-effective, efficient and can even work with older devices without sacrificing performance. Such devices may sound like a technological impossibility, but they do exist and have only been getting better over time. So, improve your network security by using the Smarter, Faster and Simpler solution.
While a few years ago “cloud” was a bit of a buzz word, cloud adoption has since become mainstream. These days, most organisations - ranging from large to small - use it to a certain extent. Yet still misconceptions remain about the technology. In particular, on how best to secure it.
By Chris Hill, Director, EMEA Public Cloud, Barracuda Networks.
We wanted to lift the lid on these misconceptions, so in February 2018 we conducted some global research. As part of this, we asked 164 respondents in EMEA about their experiences and attitudes when it comes to security in the cloud. Here are some of the standout findings:
People still believe on-premises security is better than cloud
Over half (57%) report their on-premises security as superior to cloud.
However, using security tools specifically designed for the public cloud can actually make a business more secure than it was when operating purely on-premises.

What was promising was that the shared security model was largely well known by respondents, with 71% expecting cloud security to be a responsibility that’s shared with cloud vendors. Just 19% think cloud vendors are solely responsible.
An overwhelming 82% have concerns about deploying firewalls in the cloud, with 41% naming ‘pricing and licensing not appropriate for the cloud’ and 39% citing ‘no centralised management creating a significant overhead’ as their top two concerns. Other concerns included next-generation firewalls simply not being practical for cloud environments and a lack of integration with native security tools from cloud vendors.

Interestingly, organisations seem to find value in cloud-specific security features, with 95% saying cloud-specific firewall capabilities would help them. 71% cite ‘integration with cloud management, monitoring, and automation capabilities’ as the most beneficial quality, and 59% cite being ‘easy to deploy and configure by cloud developers’ as the second most beneficial capability.
Traditional security remains a bottleneck for DevOps

Just over half of respondents (58%) have adopted DevOps, DevSecOps, or CI/CD (continuous integration and continuous deployment) methodologies. This figure was slightly higher in EMEA than in the US (53%), with APAC storming ahead at 63%.
Of the organisations that have adopted these methodologies, 95% have faced challenges integrating security into those practices. The top challenge reported was ‘limitations with existing security solutions’. Security processes not being changed was also voted a high scorer.

So what does it all mean?
We’re continuing to see questions and concerns around how organisations should approach security alongside their cloud deployments, especially from larger companies. There still seems to be a lack of understanding of cloud security, and a misplaced belief that on-premises security is a lot stronger.

One thing is for sure: as the move to cloud increases in pace, organisations used to operating under traditional data centre architecture will need a new way of thinking about security.
Alessandro Bruschini from Aruba S.p.A. takes a look at the impact green data centres have on the business, IT and renewables sectors.
It will come as no surprise that data centres consume enormous amounts of energy. The average data centre will have a connection to the power grid, uninterruptible power supplies and a backup diesel generator to cope with all manner of power demands.
By 2025, it is estimated that data centres will consume one fifth of all the electricity in the world. Unsurprisingly, this level of power consumption is worrying in terms of sustainability and environmental impact.
However, in recent years there has been a concerted shift by data centre and cloud service providers towards ‘green strategies’, whereby they work to reduce the power consumption of their data centres and commit to making heavy use of renewable energy.
One example is Aruba’s Italy-based global cloud campus in Milan that runs entirely on renewable power from suppliers with Guarantee of Origin certification for their green energy, as well as generating its own power from an on-site hydroelectric power plant and solar panels. A geothermal system taps into groundwater for eco-friendly cooling.
Going green has obvious benefits for the environment and for a company’s corporate social responsibilities. But it also has a bigger impact on the IT and renewable energy sectors as a whole.
A 100 per cent green data centre is effectively self-sufficient, with advanced campuses making their own energy from hydroelectric, wind, solar energy, or a combination of all three, to ensure their power is completely renewable.
At the same time advancements in techniques such as fresh air cooling can replace the need for air conditioning or water cooling to keep server rooms at optimal temperatures, reducing both energy consumption and water use for a data centre.
Not only does this massively reduce the carbon footprint of green data centres compared to ones reliant on fossil fuel power, it also significantly reduces the day-to-day running costs of a campus.
As such, data centre providers can offer server rackspace, outsourced IT or cloud services at more competitive prices, as the energy overheads are drastically reduced.
This could have a knock-on for businesses using such space and services, as their IT costs could be reduced allowing them to bolster their bottom line or offer their own customer-facing services at aggressive prices without damaging profits. At the end of the chain, users of cloud-based collaboration or storage services end up with a better deal, all while reducing the environmental impact of data centres on whole.
Essentially, going green makes solid business and environmental sense on a granular scale. But as it becomes a more widely adopted way of running data centres, its impact can be felt further afield.
Beyond consolidation and the virtualisation of compute resources, as well as the continued growth in cloud adoption as a means to make IT operations increasingly efficient, more advanced green initiatives are seeing the rise of alternative data centre and IT architectures. These have an emphasis on exploring clever cooling techniques, shifting advanced workloads to the cloud, and tapping into scalable blade servers.
As this advances, more investment is likely to flow into green data centre development and the technologies that surround them.
Long-term investments in green technology have already seen the advent of modular data centres for scaling capacity up or down on demand, while minimising the environmental impact of the building. Some of the latest green data centres not only have heavy investments in establishing their own energy supplies, but also make use of energy recycling techniques; for example, heat from servers can be used to dehumidify the air used for natural cooling.
With the rise of green data centres, demand for renewable energy increases. Logic dictates that greater demand leads to greater investment, which could see the development of more advanced green power production: highly efficient photovoltaic tubes, for example, or greater use of geothermal energy.
Thanks to the trickle-down nature of technology, renewable energy production could bloom in popularity with major suppliers, leading to the delivery of cheaper green power, benefiting all and spearheading a true shift away from fossil fuel energy.
Regardless of how you look at it, green data centres have a large part to play in the future of IT and, arguably, the health of the world’s environment.
Voracious appetites for data storage and compute consumption are ever-growing, so more data centres will inevitably be built. If left reliant on fossil fuel, this growth could plunge nations into an energy crisis that seeps from the digital realm into the physical world; but with green data centres front and centre, the benefits to businesses, the environment and humanity could be truly significant.
Energy bills account for up to 60% of a data centre’s overall operating costs. With demand for capacity getting bigger and bigger, improving operational efficiency throughout server rooms doesn’t just make environmental sense, it’s quickly becoming an economic imperative to stop electricity bills spiralling out of control.
By Chris Cutler, Corporate Account Manager, Riello UPS.
According to research from the Global e-Sustainability Initiative (GeSI), datacentres already consume over 3% of the world’s total electricity and generate 2% of our planet’s CO2 emissions. For context, that’s the equivalent of the entire global aviation industry.
And those requirements are certain to come under increasing pressure by the relentless rise of the ‘Internet of Things’ and Industry 4.0. Interconnectivity is quickly becoming the rule, not the exception, with independent research body Software.org predicting there’ll be more than 50 billion connected devices by 2020.
All these extra smart machines and gadgets will place huge demands on data centres – those terabytes of additional information must be safely stored somewhere. But with a creaking National Grid struggling from decades-long lack of investment, it’s not simply a case of increasing electrical capacity to meet these growing needs. The data industry is tasked with doing more with less, which is why improving energy efficiency will have an increasingly important part to play.
Datacentres consume electricity in two principal ways. First, there is the energy needed to power all the ICT equipment and servers. Then there are the vast amounts of air conditioning necessary to keep those machines operating safely.
Significant progress has undoubtedly been made in cooling technologies. These advances are necessary too, what with air conditioning accounting for up to half a data centre’s total power use, depending on the size and climate. But those efficiency gains on their own won’t be enough.
The Move To Energy Efficient Modular UPS
Fortunately there are similar savings to be made through another indispensable element of a data centre’s infrastructure – its uninterruptible power supply (UPS) system.
Until recent years, data centre UPSs were typically large, static systems only capable of optimal efficiency when carrying heavy loads of 80-90%. There was a tendency to oversize capacity during initial installation to provide the necessary redundancy, meaning that many power protection systems wasted masses of energy by continuously running at low, inefficient loads.
Just as cooling equipment has developed, so too has UPS technology. Modular systems – which replace sizable standalone units with compact individual rack-mount style power modules paralleled together to provide capacity and redundancy – deliver performance efficiency, scalability, and ‘smart’ interconnectivity far beyond the capabilities of their predecessors.
The modular approach ensures capacity corresponds closely to the data centre’s load requirements, removing the risk of oversizing and reducing day-to-day power consumption, cutting both energy bills and the site’s carbon footprint. It also gives facilities managers the flexibility to add extra power modules in whenever the need arises, minimising the initial investment while offering the in-built scalability to “pay as you grow”.
Transformerless modular UPS units generate far less heat than static, transformer-based versions and need significantly less air conditioning too. They are also smaller and lighter, so have a significantly reduced footprint, and are easier to maintain because each individual module is ‘hot swappable’ and can be replaced as and when required without the whole system having to go offline.
Another benefit of the move to modular is that the units easily integrate with Energy Management Systems (EMS) or Data Centre Infrastructure Management (DCIM) software, transforming them into networks of ‘smart’ UPSs that constantly collect, process, and exchange performance data including operating temperatures, UPS output, and mains power voltage.
This information is used in real-time to help constantly optimise the system’s performance, as well as highlighting other areas where additional efficiency savings can be made. In hyperscale datacentres where UPSs can be spread across several sites in different cities or even countries, some in unmanned facilities, this connectivity, combined with the ability to remotely monitor performance, enables loads to be optimised, minimising the amount of energy wasted.
Modular UPS In Action: £335,000 A Year Electricity Savings
A prime example of the savings achieved through upgrading to modular UPS units can be found in one of our most recent projects. We teamed up with electrical contractors The Rosebery Group to completely overhaul the power protection systems at two datacentres belonging to one of the UK’s biggest consumer goods suppliers.
The existing system was originally installed in 2007 and consisted of large, static 400 kVA and 800 kVA units that were operating incredibly inefficiently on low loads ranging from 12-25%. Overall UPS efficiency averaged just 92% and was as low as 89% in the main switchroom, meaning vast amounts of energy were being wasted.
And because the units were so large and generated such vast amounts of heat, air conditioning costs were considerable: 414 kW of power was needed just for cooling, leading to annual bills of more than £315,000.
We replaced this dated and inefficient system with our transformerless modular Multi Power units. Configured to more closely match the power requirements of the data centres, UPS efficiency increased from 92% to 96% across all load levels.
The project’s overall cost and carbon savings have been substantial. The total outlay for running the UPS and cooling across both sites has been cut by a staggering £335,000 a year. Air conditioning requirements alone have been slashed by nearly 72%, cutting the power needed for cooling by 297.3 kW.
The client benefits from overall annual energy savings totalling approximately 1.25 million kWh, enough to power 316 typical UK homes for a year. Meanwhile, carbon emissions across the two sites have decreased from 2,147 kg to 603.5 kg, a huge reduction of 71.89%.
All these environmental and economic improvements have been delivered in less than half the space previously needed, as the overhaul has resulted in a 59% reduction in footprint per m2. Perfect proof that more can indeed be done with less – greater power density and better efficiency, using less electricity and space.
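As a rough sanity check on how an efficiency gain of this order translates into energy savings, the sketch below models UPS losses at 92% versus 96% efficiency for an assumed IT load. The load figure is purely illustrative, not the project’s actual profile, and the real savings also include the cooling knock-on effect described above:

```python
def ups_loss_kw(load_kw, efficiency):
    """Power drawn from the mains minus power delivered to the load."""
    return load_kw / efficiency - load_kw

# Illustrative figures only -- not the project's actual load profile.
it_load_kw = 800.0   # assumed combined IT load across both sites
hours = 8760         # hours in a year

old_loss = ups_loss_kw(it_load_kw, 0.92)   # ~69.6 kW lost as heat
new_loss = ups_loss_kw(it_load_kw, 0.96)   # ~33.3 kW lost as heat
saved_kwh = (old_loss - new_loss) * hours

print(f"Old UPS losses: {old_loss:.1f} kW, new: {new_loss:.1f} kW")
print(f"Annual energy saved (before cooling knock-on): {saved_kwh:,.0f} kWh")
```

Note that every kilowatt of UPS loss also has to be removed by the air conditioning, which is why cutting UPS losses compounds into much larger total savings than the raw efficiency delta suggests.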
While this particular project obviously focused on a mega-sized datacentre, the lessons are clear for sites of all sizes and configurations. The initial expense of upgrading a power protection system to a modular UPS will be paid back handsomely through markedly improved energy efficiency, with measurably positive impacts on a facility’s corporate social responsibility obligations and, more importantly, on its bottom line and day-to-day running costs.
With more than a decade’s experience in the critical power protection industry and a proven track-record in the datacentre sector, Chris Cutler is Corporate Account Manager for Riello UPS. He has particular expertise regarding large-scale 3 phase UPS installations and the topic of UPS energy efficiency.
Riello UPS Ltd is a leader in the manufacture of uninterruptible power supplies (UPS) and standby power systems from 400VA to 6MVA. The company is part of the Riello Elettronica group which has support offices in 80 countries.
Riello UPS products combine engineering excellence with high-quality performance and energy efficiency, to enable reliable power for a sustainable world. The product range includes 22 solutions for powering the smallest desktop PCs to the latest supercomputers used in advanced data centre operations.
The UK branch of Riello UPS is located in North Wales, operating from large, purpose-built premises comprising office and training facilities as well as a fully-stocked warehouse. This enables an end-to-end service of comprehensive technical support and fast product dispatch. For further information visit www.riello-ups.co.uk
Multi-cloud ambiguity can make enterprise security a daunting prospect. Technological misconceptions are rife, and inadvertently prompt the wrong cloud implementation decisions across the world. To avoid the proverbial data protection pitfalls and boardroom bust-ups, it is important to cut through the noise, explode the myths and unlock true cloud return on investment.
Many business leaders think the cloud is still in its infancy and that it is too strategically risky to shift applications to public cloud environments. Nor can businesses rely solely on the native application security offered by cloud providers, as those providers are not responsible for application-level controls. F5’s recent 2018 State of Application Delivery Report (SOAD) noted that EMEA businesses believe applying consistent security policies across all company applications is the “most challenging or frustrating” aspect of managing multi-cloud environments (42%), while 39% said the biggest challenge is protecting applications from existing and emerging threats.
Nevertheless, adopting a multi-cloud route does not have to mean compromising security. With advanced security solutions, businesses can safely move their applications to whichever cloud model works best for their strategy, without geographic or infrastructural constraints. According to SOAD 2018, 54% of EMEA businesses determine which cloud is best for each application on a case-by-case basis. This is fuelling an uplift in multi-cloud environments, with 75% of respondents claiming to use multiple cloud providers. Consumer demands and industry competitiveness continue to make the cloud an unavoidable option; the right deployment strategy makes it viable and safe.
The threat landscape is more sophisticated than ever due to volumetric attacks, malicious bots, and other tools targeting apps and sensitive data. Many traditional practices are no longer effective because they are too labour-intensive and time-consuming to protect what really matters. This is where automation comes in: streamlining and standardising IT processes, as well as removing human error. It also helps IT staff focus on other priorities, such as analytics and problem solving. According to SOAD 2018, over half (57%) of respondents indicated they are employing IT automation and orchestration to drive digital transformation efforts.
Myth-take 3: multi-cloud is only good for start-ups
While ‘born in the cloud’ organisations are disrupting markets, many long-established firms are investing heavily in the cloud to improve performance, achieve greater efficiencies and add value to the business. Consider Electrolux, which was recently recognised in EMEA’s Top 50 Cloud Climbers, a report on the top businesses in the region leveraging the cloud to fuel innovation. By connecting its factories to the cloud, it has achieved new levels of smart manufacturing, including real-time assembly-line monitoring and a newfound ability to adapt plans on the fly. HSBC is another recognised Cloud Climber, standing out for using cloud-driven machine learning and data analytics to analyse huge datasets and combat money laundering. Other marquee names in the report include Spotify (for continually evolving and optimising its massive music streaming service), Mercedes F1 (cloud-fuelled performance analytics) and Airbus (cloud and machine learning to store and process several hundred terabytes of satellite imagery annually). The cloud can change the game for everyone. Too much apprehension is already becoming a major developmental handicap.
Myth-take 4: cloud security is not an executive decision
Leaving cloud security to the IT department alone is not a good idea. It is essential to bridge the gap between the executive boardroom and those responsible for security decisions. A recent F5-commissioned study among global CISOs by the Ponemon Institute, found that 58% of surveyed businesses had IT security as a standalone function, meaning most lack an IT security strategy spanning the entire enterprise. Only 22% said security is integrated with other business teams and 45% had security functions without clearly defined lines of responsibility. CISOs must be given greater voice in the boardroom and security-focused education initiatives must be structured to benefit the entire workforce. This will ensure app and data protection strategies are optimal, compliant and successfully meet customer needs.
Embracing modern application architectures, cloud models, and a wide diversity of devices means organisations of all sizes can better capitalise on the digital economy. Encouragingly, awareness is growing across industries of the need to expand protection efforts beyond the traditional network perimeter. A case in point is the rise in Web Application Firewall (WAF) deployments; SOAD 2018 reports that 61% of organisations now use the technology to protect their applications.
Myth-take 6: the end consumer never sees the benefit
IT optimisation is based on having a cohesive, enterprise-wide strategy that enables new and existing services to be managed with greater control. Protecting end-users from identity fraud, loss of data and vital funds should always be a top priority. Consumers turn to the business they trust. Those with dubious data security track records are becoming obsolete.
Manage out the myths
More businesses than ever before are adjusting their security strategies to focus on securing the application and to find ways to innovate faster within the multi-cloud environment. Now is the time to mitigate the myths and manage out the misconceptions. By embracing modern application architectures and sustainable multi-cloud models, EMEA organisations will capitalise on the digital economy and drive greater profitability. Don’t procrastinate and confirm the myths - act today and become a legend.
The shift towards hybrid data center environments, consisting of a mix between off-premises services, public cloud and colocation, and privately owned, distributed IT facilities, is challenging traditional approaches to physical infrastructure management. A recent study by 451 Research brought some interesting insights to light.
The study targeted hybrid IT environments within large enterprises from across the globe, and was conducted through intensive interviews with C-suite, data center and IT executives. Sponsored by Schneider Electric, the study drew participants from companies generating over $500 million in revenue across the US, UK and Asia Pacific. The complete report provides an in-depth analysis of the topic, together with additional observations about trends that are emerging across multiple verticals and industries.
The interviews highlight how the widespread adoption of cloud services has significantly impacted the way companies are meeting their data center infrastructure requirements. These complexities will be compounded by an anticipated groundswell of new distributed IT driven by the Internet of Things (IoT) and emerging edge computing workloads.
Edge computing deployments present unique challenges that differ from those of traditional data centers. They are often remote and lack local IT staff support. They require a different strategy: their lifecycle is longer, and they must be easy to manage, secure and deploy while also being resilient.
To realize the full value of a hybrid approach, managing a combination of data center environments has become one of the most complex issues for modern enterprise leaders. The study also revealed several common themes.
According to the new 451 Research study, operators of enterprise data centers face a rapidly evolving technology landscape and a cloud-powered wave of disruption that is changing business models, connectivity and workload management. Driving this change is the growing availability and adoption of off-premises services, such as public cloud, colocation data center offerings and Datacenter-Management-as-a-Service (DMaaS) like Schneider Electric’s EcoStruxure IT architecture.
DMaaS enables optimization of the IT layer by simplifying, monitoring, and servicing data center physical infrastructure from the edge to the enterprise. Utilizing cloud-based software, it promises real-time operational visibility, alarming and shortened resolution times without all of the costs associated with deploying an on-premises DCIM system.
DMaaS is positioned in the broader context of IoT technologies and platforms, with its messaging aligned to resonate with a broader audience beyond the traditional data center manager. The ability to benchmark performance and set key performance indicators (KPIs) based on data center metrics will interest organizations in the midst of making decisions regarding data center and workload placement in an increasingly hybrid and distributed IT landscape.
Piloted in the U.S., DMaaS has already been used for benchmarking IT environments with more than 500 customers, 1,000 data centers, 60,000 devices and 2 million sensors. Customer feedback on the results of their implementations affirms the growing need for a cloud-based data center management solution.
451 Research concludes: “By 2019, organizations anticipate that just under half (46%) of enterprise workloads will run in on-premises IT environments, with the remainder off-premises, according to 450 enterprise respondents in our 2017 global study. Clearly, hybrid IT environments have become the norm.”
Vendor agnostic platform
Early adopters of DMaaS are already seeing results in their businesses and with their customers. Daniel Harman, Building Automation Systems Engineer at Peak10 + ViaWest, Inc., explained why the cloud-based approach was the best data center management solution for his business. “ViaWest is trusted to deliver hybrid IT infrastructure solutions spanning colocation, interconnection, cloud, managed solutions and professional services to more than 4,200 customers. We chose a vendor agnostic DMaaS solution to provide one platform to monitor all the different devices in our data centers...”
Case Study – Bainbridge Island School District
Bainbridge Island School District chose DMaaS to help ensure continued availability of its innovative digital learning environment. With limited resources to manage its distributed IT and data center, the district relies on the solution for one-tap visibility of all device data, smart alarms and data-driven insights, plus 24/7 digital monitoring and troubleshooting.
Network supervisor Alan Silcott says that DMaaS solutions “give me that peace of mind to know that if there is an incident, kids can continue to learn and the classrooms can continue to operate until the school day is through.”
He goes on to say that DMaaS allows him “to check the status of all my data closets from my phone, at any time, in any location. It helps to know exactly where the problem is as opposed to trying to decipher it from a flood of emails.”
“We have 11 buildings, 9 different schools, with a data center and 35 different data closets. Technology is in every aspect of the schools. If our network were to go down for a day, it would cause serious disruption to our learning.
“If we get even just a power flicker, all of our UPSs send notification emails. It’s hard to sort through all those messages and make sure that everything comes back online safely.” Alan Silcott says that now when there’s an issue, the solution provider, “has our backs and contacts us immediately.”
Six Real-World Approaches to Managing Hybrid IT Environments – a report by 451 Research – can be downloaded from:
Small and medium sized datacentres are back in the spotlight as edge computing becomes increasingly popular to process latency-critical data on the periphery of the network. Although many of the principles for controlling temperature in industrial-size server farms also apply to decentralised datacentres, the latter pose particular challenges in terms of space and scalability. Matthew Philo, Product Manager CRAC at FläktGroup, discusses the key considerations when specifying a cooling solution for this environment.
Edge computing plays an important role in the quick transfer of data. Without edge data centres, many of the services we rely on each and every day would not run effectively.
However, these premises are often based in comparatively old, redundant commercial spaces, where it can be difficult to manage the climate and ensure that the servers have the conditions they need to operate.
But apart from sizing a cooling system to match the heat output and optimising energy efficiency, what other factors should specifiers consider when choosing a solution for this most tricky of environments?
Performance and reliability
IT downtime for any business can be devastating, so the reliability of edge computing is of critical importance. Losses created by downtime can have a catastrophic effect on business, both in financial and reputational terms.
As we look to the future we can only expect an increase in the widespread need for latency-sensitive data. Driverless cars, for example, will need constant access to a server in order to function correctly, something that will be vitally important for ensuring the safety of passengers and other road users.
Any cooling system recommended for this environment, then, must deliver the utmost reliability, ensuring a constant, suitable temperature throughout the data centre 24 hours a day, alleviating the risk of downtime and allowing access to the stored data whenever it is needed.
In fact, to provide redundancy, one or two additional units are often specified to provide back-up should any unit fail. The use of electronically commutated (EC) fans in both indoor and outdoor units, together with speed controlled refrigeration compressors, means that energy efficiency is much greater when operating at less than design loadings.
For this reason it is customary to keep any back-up systems online, which not only provides ‘hot’ standby, but also ensures extra units earn their keep by reducing energy consumption to a minimum.
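The energy logic behind keeping N+1 units online rests on the fan affinity laws: fan power scales roughly with the cube of speed, so several EC fans sharing the load at reduced speed can consume less than fewer fans running flat out. A minimal sketch, with purely illustrative ratings:

```python
def fan_power_kw(rated_kw, speed_fraction):
    """Fan affinity law: power scales roughly with the cube of speed."""
    return rated_kw * speed_fraction ** 3

rated_kw = 5.0        # illustrative rated fan power per cooling unit
airflow_needed = 3.0  # total demand, in full-speed 'unit-equivalents'

# Option A: 3 units (N) at full speed, the +1 unit sitting as cold standby
power_a = 3 * fan_power_kw(rated_kw, airflow_needed / 3)

# Option B: all 4 units (N+1) online, each running at 75% speed
power_b = 4 * fan_power_kw(rated_kw, airflow_needed / 4)

print(f"3 units at full speed: {power_a:.2f} kW")   # 15.00 kW
print(f"4 units at 75% speed: {power_b:.2f} kW")    # ~8.44 kW
```

Under these assumptions the ‘hot’ N+1 arrangement draws roughly 44% less fan power than running N units flat out, which is why keeping the standby unit spinning both improves resilience and cuts consumption.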
When deciding which cooling system would be best to use in an edge data centre application, scalability and flexibility rank high on the list of priorities. The modular nature of these sites is usually dictated by the need to increase capacity in stages as further subscribers are signed up.
Often, edge data centres will be set up to be dynamic in order to meet changing requirements. Should the need increase for edge computing micro-centres in the north of England, for example, the company operating the data centre will simply shift stock IT hardware to the relevant sites in order to process data in a suitable location, while alleviating oversupply of equipment to quieter locations. When servers can be moved from site to site this quickly, it is important to have a cooling solution which can also be fully operational within short lead times.
All of this means that cooling solutions with large amounts of fixed site infrastructure are largely inappropriate for edge data centres. Instead, those that can be scaled up or down in a modular way should be considered, as well as units that are easily transportable so that they can be moved to key sites when necessary.
This way, the cooling equipment can be quickly deployed wherever it is needed, no matter how many servers the data centre owner has on site. For example, each of our Multi-DENCO units requires only a single set of small-bore refrigeration pipework between the indoor and outdoor equipment, similar in size to a domestic central heating system. This can be installed or modified very quickly and at low cost to achieve the required flexibility.
Fitting it all in
Lack of space is often an issue in edge data centres, so any cooling system must be carefully designed to ensure it can be moved to, and will fit into, the smallest of areas. Enough room also needs to be left for maintenance without sacrificing footprint for server racks and IT equipment. Often, using the smallest number of highest-capacity units is not the best solution.
The space challenge is not confined to the interior; sufficient outdoor areas should be allocated for external units of a cooling system. If outdoor space is limited, manufacturers such as FläktGroup can offer compact solutions – for example, a hybrid unit which combines both the condenser and freecooling drycooler for heat rejection. Although slightly less efficient than having separate systems, this is often chosen when external space is at a premium, as it takes up no more room than a system without freecooling.
Setting up an edge centre comes with a plethora of challenges, where the stakes are high and there is little room for error. But by carefully considering the kind of cooling solution suitable for this environment, specification engineers can minimise the potential for downtime and provide the flexibility their customers need in this fast-moving industry.
For more information on FläktGroup, please visit www.flaktgroup.com
Businesses around the world are moving to the cloud to drive innovation and manage their data more efficiently, but the majority aren’t moving to a single cloud offering. Instead, they’re deploying applications and managing data across several, often disparate, IT environments – be it on-premises, private cloud or public cloud. More than 85 percent of enterprises are expected to adopt this ‘multi-cloud’ architecture by the end of this year, according to IDC.
By Jonathan Wisler, European Leader, IBM Cloud Infrastructure.
This multi-cloud model enables companies to benefit from the unique characteristics of each environment, but also presents challenges. So how do organisations choose the best model, tie different environments together, and ensure effective data collaboration across the business?
Security still a major influence on cloud adoption
Recent research from IBM and Vanson Bourne has found that security is an ongoing concern when it comes to cloud adoption. The vast majority (95%) of surveyed IT and business decision makers reported that their organisation faced obstacles when implementing cloud technology, with over half (57%) reporting security in the cloud to be the most common obstacle.
While many organisations want the benefits that cloud can bring, security often remains a stumbling block. This is why some are looking at solutions that provide all the benefits of the cloud, but with data staying behind the organisation’s firewall.
Delivering on the promise of cloud, in the data centre
Depending on the sensitivity of the data being processed, organisations have varying needs when it comes to data isolation and this means they cannot always run everything in the public cloud. They want the functionality of cloud, but in their own data centre.
This is why they are looking to solutions such as IBM Cloud Private, which can be installed on a wide range of enterprise systems to create a private cloud with an architecture and capabilities consistent with the public cloud. It offers capabilities including rapid deployment, standardization and scalability plus ease of use and elasticity, but also provides greater control, increased performance, predictable costs, tighter security and flexible management options. By automating many of these processes you dramatically decrease human error. This type of platform is designed to make cloud capabilities more accessible and easier for businesses of all sizes, providing a flexible but protected environment for running innovative and demanding applications.
This combination of flexibility and control is particularly important to companies in industries such as finance, with high standards for security, compliance and reliability, leading to strong early adoption of this type of platform.
The question now is not ‘should we move to the cloud,’ but ‘how do we manage multiple cloud environments?’
For those organisations implementing a mix of public and private cloud solutions, orchestrating the different environments is crucial, ensuring end users have seamless and secure access to the data they need. So how can organisations harmonise and manage these multiple cloud environments, ensuring they are secure and in compliance with the latest regulation?
In order to harness data from a particular cloud and orchestrate its migration and accessibility within a multi-cloud architecture, secure, fast, interoperable data transport is essential. Data migration between clouds is being simplified with new tools such as IBM’s Application Transformation Advisor, which scans traditional applications and provides insights and guidance for modernising them on the cloud, and Cloud Automation Manager, which helps businesses deploy and run these modernised applications on-premises, or on the cloud of their choice. They can also control shadow IT by providing “self-service” IT models for end users that are compliant with the company business and security protocols. A new end-to-end development experience integrated into IBM Cloud Private also helps create, deploy and manage apps across different clouds.
Orchestration tools are also being developed to automate the provisioning of cloud services using policy-based tools, enabling configuration and deployment all from a single, self-service interface. This type of technology, such as IBM’s Cloud Integration platform, can provide a single control point to a portfolio of technologies (including API management, app integration, high-speed file transfer and secure gateway), connecting data and apps across different clouds, with the ability to be deployed on-premises or in private and public clouds. Whether managing communications between apps or transferring massive data files to the cloud, clients can access and combine different capabilities.
There is no question that cloud technologies are at the centre of digital transformation and unlocking the value of many sources of data. The challenge now is how we harmonise, automate and standardise multiple cloud environments in a secure and compliant way. Whether organisations are looking to implement a multi, hybrid, private or public approach – with new migration, orchestration and integration tools, the promise of cloud can now be unlocked for all.
Soon, data centres will need far more bandwidth than current infrastructure can provide. If specified and employed wisely, High Density can play an important role in realising this. And by designing high density infrastructure with the right kind of flexibility and reliability in mind (potential) bottlenecks and future limitations can be avoided.
By Andreas Rüsseler, CMO, R&M.
Over the next three years, consolidation, automation, and efficiency enhancements will change the industrial sector to an unprecedented extent. Data Centres will need to focus on higher performance network architectures and infrastructure consolidation to keep up with fast-growing demand.
Video (and HD video in particular) is a key driver, forming over 90% of today’s internet data traffic. The uptake of mobile computing and smartphones with increasingly high-resolution cameras will keep data traffic growing. Cloud applications are another key driver of bandwidth demand, with cloud DCs already responsible for 70% of IT work around the globe.
The Internet of Things is also on the rise, with anywhere between nine and 25 billion internet-connected sensors, actuators and other devices constantly transmitting data, from wearables and thermostats to surveillance cameras, vehicles and robots. Several studies suggest there will be up to 30 billion things online in 2020. These changes will, in turn, change the way in which data centres are designed and operated. To accommodate the near future’s data demand, data centres need to enhance their infrastructures and provide networks that are permanently available.
Infrastructure in and outside the data centre will need to be more reliable, flexible, scalable, energy-efficient and offer more processing power. What’s more, the rapid growth in the volume of data being stored and managed in data centres means the largest possible port density needs to be realised in the smallest possible space.
Some important considerations
A higher density infrastructure normally also requires more energy and produces more heat, and racks can become far heavier. Consolidating POP servers in a single rack unit leaves more space for switches and routers. Software Defined Network (SDN) architectures can be planned more sensibly. High density makes it possible to free up space for additional racks and switches and minimize the meet-me-room area.
Today’s ‘standard’ high-density fibre solutions for data centres offer up to 72 LC duplex ports per rack unit, but this can be very difficult to manage. Increased density can result in unmanageable cabling, severely hindering fault finding, cable tracking and Moves, Adds and Changes.
It’s worth researching and investing in solutions that have been designed with the next generation of High Density in mind. This should include dedicated racks, patch panel and connectors as well as an integrated hardware and software system that automatically detects when cords are inserted or removed. The entire infrastructure should be represented in a consistent, up to date database, offering precise, real-time information on the current state. Systems that trace and monitor all changes to a physical network, including switches, servers and patch panels, and offer functions for mapping, managing, analysing and planning cabling and cabinets, improve operational efficiency and facilitate passive infrastructure management.
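A minimal sketch of the record-keeping idea behind such automated infrastructure management systems: every detected cord event updates a single authoritative state plus an audit trail, so the documented network cannot drift from the physical one. Class, field and port names here are illustrative only, not any vendor’s actual data model:

```python
import datetime

class PatchPanelState:
    """Illustrative model of an automated infrastructure-management
    database: detected cord insertions/removals keep one consistent
    record of port state, with a full history for fault finding."""

    def __init__(self):
        self.ports = {}      # port id -> connected cord id
        self.audit_log = []  # (timestamp, port, event, cord) history

    def _record(self, port, event, cord):
        ts = datetime.datetime.now(datetime.timezone.utc)
        self.audit_log.append((ts, port, event, cord))

    def cord_inserted(self, port, cord_id):
        """Called when the hardware senses a cord being patched in."""
        self.ports[port] = cord_id
        self._record(port, "inserted", cord_id)

    def cord_removed(self, port):
        """Called when the hardware senses a cord being pulled."""
        cord = self.ports.pop(port, None)
        self._record(port, "removed", cord)

panel = PatchPanelState()
panel.cord_inserted("rack3/panel1/port07", "cord-0042")
panel.cord_removed("rack3/panel1/port07")
print(len(panel.audit_log))  # both events recorded
```

The value of this pattern at high density is that Moves, Adds and Changes are captured automatically at the moment they happen, rather than relying on a technician updating a spreadsheet after the fact.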
Providing you think ahead when specifying and implementing a High-Density solution, it can often bring a lower cost per port than existing platforms, and provide a flexible upgrade path to accommodate needs for many years to come. Adhering to structured cabling standards such as TIA-942 also becomes easier, as do MACs, maintenance, increasing port density and improving organization of ports and cabling.
Data-hungry technology solutions are expanding at amazing speeds; however, the structured cabling backbone can’t simply be replaced every few years. Making smart technology choices today can accommodate the changing role and requirements of data centres now and in the future.
How businesses can avoid the most common errors in their journey.
By Toan Nguyen, Director Business Development & Cloud Platform at e-shelter.
Amidst growing demands for more versatile data storage and IT-Services, the success of public and private cloud platforms has paved the way for the next evolution of on-demand computing power in the form of the hybrid cloud. Theoretically it is the ultimate solution to modern infrastructure needs, allowing organisations to leverage the capabilities of public cloud platform providers, without offloading the entirety of their data to a public cloud data centre.
This balance provides a great deal of flexibility for businesses. It enables them to take advantage of the efficiency and innovation benefits that the cloud offers, while maintaining control over more sensitive data that may have stricter security or compliance requirements.
However, the decision to embrace hybrid cloud is one that should not be taken lightly. The journey is typically complex, with many overwhelmingly important decisions to be made in a seemingly short space of time. With there being so many options for partners and solutions it is normal for those unfamiliar with hybrid cloud to feel confused and apprehensive about what the best strategy is and how to start their route amidst so many possibilities.
Time and time again, we see businesses fall foul of the same highly-technical errors, which can generally be broken down into two major areas; the journey to the cloud and data security. Thankfully, the most common mistakes are also the easiest to correct.
The same-old pitfalls
When it comes to the cloud journey, one of the greatest mistakes we see is that companies often rush into a hybrid cloud solution with outdated infrastructure and are unprepared for the demands of a more modernised process. Usually, the decision to adopt a hybrid infrastructure is spurred by business objectives and not from an IT standpoint. For this reason, many businesses enter their own journeys without the right tools to support a major modernisation and cloud adoption strategy.
What we mean by this is that too often companies become hooked on the idea of hybrid cloud, without putting the processes in place to upgrade existing legacy IT systems and tools that have been optimised for traditional processes, rather than cloud-based infrastructures.
Another crucial error is that businesses so often fail to appreciate the scale of implementing the hybrid cloud and start off thinking they can do everything by themselves. By biting off more than they can chew, organisations can quickly find themselves and their existing infrastructures overwhelmed by the task at hand, and unable to leverage the full range of benefits that the cloud offers.
And even if these initial mistakes are avoided, data security all too often comes into play. When it comes to cloud computing, we must never lose sight of the fact that at its core is data, and protecting that data should never be overlooked. However, businesses are often turning to hybrid cloud to create a more robust and agile system without giving data security and compliance the attention they require. Far too often, businesses are complacent about data security and fail to ensure that they are properly protected. With GDPR about to take effect across the EU and businesses being forced to analyse their data security strategies more than ever before, complacency simply cannot be tolerated.
To avoid making these mistakes, organisations first have to make sure that their business goals align with their IT aspirations. Before all else, they should clearly set out how the hybrid cloud will help to achieve key business objectives, and then work out how to provide their IT departments with the right tools and know-how to facilitate this advancement. Secondly, don't be afraid to take things slowly. By jumping in at the deep end and taking on every aspect of the transition at once, businesses can easily become overwhelmed by the scale of transformation and risk slipping up along the way.
Therefore, the best way to support a cloud set-up is to find the starting position that best suits a business’s specific needs. Give this the most thought, as it will have a major impact on the rest of the journey.
Once a clear strategy has been identified, the most effective way to eliminate mistakes and save precious time and money is to turn to established third-party data centre providers that offer the required expertise. These facilities are designed to let businesses experiment with cloud strategies and, by exploring different solutions through a third party in external data centres, identify which methods are most suitable before making a major investment.
Finally, our advice to businesses with regard to data security is to have a complete view of the landscape. Don't concentrate your efforts on one area; instead, take a holistic view and put the relevant underlying infrastructure and systems in place to recognise if somebody is trying to steal your data. This will provide a much more unified defence against cyber threats, which will prove invaluable in the face of GDPR and sophisticated cyber criminals.
By following these steps, businesses will find themselves far more prepared for the new world of data storage and able to make the most of all the innovations that the cloud has to offer.
As trained technology professionals, we've become used to the idea that technology changes faster every year, and the need to 'minimise negative business disruptions' is even more critical today, when more and more business transactions rely on effective and efficient IT services. Many years ago, I learned how difficult it was to be ready and able to support hardware, software and, most importantly, customers in the face of this rapid change.
By Gabriel Lopez, Program Manager – Global Service Quality, DellEMC.
To put this into context, in just a few decades, we've gone from mainframe to distributed systems to cloud computing. Now, according to Gartner, the cloud market grew close to 20 percent in 2017. With digital transformation at the top of every executive's mind, it's likely that this trend will only accelerate. So much so that, by 2020, Gartner estimates the overall market will reach $411 billion, and IaaS $72 billion: rises of 87 percent and 185 percent respectively from 2016.
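As a rough sanity check on those figures, the quoted growth rates can be worked backwards to the implied 2016 baselines (a hedged sketch: the baselines below are back-calculated from the percentages in the text, not Gartner's own published 2016 numbers):

```python
# Back-calculate the implied 2016 market sizes from the quoted
# 2020 forecasts and growth percentages (illustrative only).

def implied_baseline(forecast_bn: float, growth_pct: float) -> float:
    """Return the value that grows by growth_pct to reach forecast_bn."""
    return forecast_bn / (1 + growth_pct / 100)

overall_2016 = implied_baseline(411, 87)   # roughly $220bn overall market
iaas_2016 = implied_baseline(72, 185)      # roughly $25bn IaaS

print(f"Implied 2016 overall market: ${overall_2016:.1f}bn")
print(f"Implied 2016 IaaS market:    ${iaas_2016:.1f}bn")
```

Those implied baselines are broadly in line with the scale of the market in 2016, which suggests the quoted percentages are internally consistent.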
When we consider this rapid growth, and the compute power, storage and networking capacity required to handle today's daily business transactions, the numbers are really quite astonishing. Yet most businesses demand faster response times, more compute power, more storage, increased bandwidth and throughput, and much faster provisioning just to meet the most basic daily needs.
Better, Faster and Cheaper
Better, faster, cheaper is and has always been the name of the game; no surprises there. Some organisations, though, can’t quite seem to focus on all three of these attributes at the same time. In my experience they tend to focus on just two, faster and cheaper, and disregard the better. But, can they really afford to just deliver two out of three?
I am surprised that many organisations pay little to no attention to their IT operations’ maturity level. Organisations large and small, new and not so new, are sometimes so entrenched in delivering the faster and cheaper that they forget that the better can significantly contribute to achieving the performance and cost efficiencies that we all seem to be chasing after and dreaming about.
Cloud computing is certainly not a new concept. The availability of today's amazing compute power, paired with fantastic virtualisation solutions, represents a key contributing factor to achieving faster and cheaper IT services. This is most evident with an efficient orchestration layer that automates provisioning, giving the end customer a powerful and complete IT catalogue at their fingertips to meet their needs faster and cheaper than ever before.
But is it really fast and cheap?
But what happens when companies decide to invest in new cloud computing technologies to make their IT run faster and cheaper, but lack the backbone and processes to deliver better IT services? Even worse, what happens when they invest in cloud computing to migrate business critical applications, such as SAP, to a cloud environment without having the ‘better’ factor in place and actually end up doing damage to their business?
In these cases, the new solution fails to deliver any of the three desired attributes: it is not faster, due to recurring service disruptions; not cheaper, due to the lack of service availability; and certainly not better, because it ends up hurting the business.
The better factor is, in my opinion, critical for success in deploying new technologies such as cloud computing, and something that I know the folks at Virtustream really subscribe to. This better factor I'm referring to is also known as maturity. The maturity level of your IT operations is a key factor in provisioning a fast and reliable IT service at the right cost, enabling your organisation to meet, and sometimes exceed, business demands.
Finding the better
Whether you run your business on a private, public or hybrid cloud, increasing the maturity level of your IT organisation, and engaging IT service providers with a proven record of effective operational maturity, like Virtustream, is critical to achieving faster, cheaper and better IT services.
Reaching appropriate 'Operational Maturity Levels' – by yourself and in conjunction with an IT partner like Virtustream that focuses on hosting mission-critical applications in the cloud – will save you money in the short and long term, and you will enjoy the benefits of a proactive support organisation.
What can you afford?
So, can you really afford not to focus on the better factor and do you still believe you’ll be able to deliver IT services faster and cheaper without it? Can you afford to not reach an appropriate maturity level in your IT Operations? Can you afford to hire vendors who are lacking operational maturity?
If you think you can, my advice to you would be to take a closer look at your bottom line, particularly around hidden costs such as project delays, loss of business, and loss of potential business. It should become apparent quickly that fast and cheap will not deliver over the long term without the presence of better. It makes the most sense to partner with a vendor like Virtustream who can prove to you how mature their operations are and who will willingly discuss their operational best practices. Include certain maturity level requirements (for whatever level is appropriate for your organisation) in your future RFPs before hiring new vendors and make operational maturity a prerequisite. This will not only improve your overall services but will also contribute to increasing the maturity level of your own operations at a much faster pace.
With data centres increasingly vital to modern business, the cost of a power failure can be enormous. Protecting a company’s investment requires the latest technology. By Saft.
For 75,000 passengers stranded around the world by a British Airways IT problem in May 2017, the cause of the disruption probably didn't matter much. They just wanted to reach their destination. But for the airline, it was vitally important to discover what lay behind the outage that resulted in the cancellation of 726 flights, a loss of $108m and untold damage to the brand’s reputation with customers and investors. The problem was traced to one of the airline’s data centers.
In August 2016, US carrier Delta Airlines had a similar problem. Its power outage, which took three days to resolve, led to the cancellation of 2,300 flights and cost $150m. In September 2016, an outage at the Global Switch 2 (GS2) data center in London lasted for less than a second but still took one customer offline – GS2 didn’t say what industry they were in – for two days.
Whether for processing customer information, running e-commerce sites or providing employees with access to cloud computing, data centers are vital to modern businesses. Almost every function of the digital world is dependent on them.
The cost of downtime is estimated at $9,000 per minute and, as the GS2 example demonstrates, even a blink-and-you’ll-miss-it power problem can cause significant downtime. With the data-center market becoming ever more important to modern business and society, these risks will only be more acute in future.
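To make the per-minute figure concrete, a back-of-the-envelope sketch (the $9,000/minute rate is the industry estimate quoted above, and the durations are taken from the incidents described, not measured costs for those specific businesses):

```python
# Rough cost of downtime at the commonly cited $9,000/minute rate.

COST_PER_MINUTE = 9_000  # USD, industry estimate quoted in the text

def downtime_cost(hours: float) -> int:
    """Estimated cost in USD of an outage lasting `hours`."""
    return round(hours * 60 * COST_PER_MINUTE)

print(downtime_cost(1))   # a single hour offline: $540,000
print(downtime_cost(48))  # a two-day outage like the GS2 customer's: $25,920,000
```

Even on this crude model, the two-day knock-on effect of a sub-second power blip runs into tens of millions of dollars, which is why the reliability of the backup chain matters so much.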
Reliable backups are key
Data centers are power-hungry creatures. It’s estimated that worldwide they consume around 500 terawatt hours (TWh) per year – slightly more than the UK’s annual energy consumption and around two percent of worldwide greenhouse gas emissions, roughly equivalent to the emissions of the airline industry. And most of that energy doesn’t go on powering the servers; instead it’s needed for cooling – both of the servers and the power supply itself.
With all this in mind, it’s vital to have a reliable backup system in place to reduce the risk of outages. Power problems will happen, but a well-designed UPS will ensure seamless operation by drawing power from a battery until the backup generator is operating.
Power-supply technology and systems design may have moved on, but many data centers have not. Garcerán Rojas, chairman at PQC, a Spanish data-center engineering firm, argues that power outages in data centers are often caused by outdated design of backup systems.
One example of changing technology can be found in the kind of batteries used in the UPS. Most of the industry still uses lead-acid batteries, which are big and heavy and need a lot of cooling. Accommodating them affects everything from cooling costs to the amount of space the data center needs – and even whether the floor should be reinforced to support the weight.
However, the latest lithium-ion (Li-ion) batteries offer greater power density, so they can be up to three times more compact and six times lighter. They also last longer than lead-acid – up to 20 years, compared with four to six years. And because they work at higher temperatures, they require less cooling. That lowers costs and frees up more space in the data center for servers.
Li-ion is more flexible, too. The smallest lead-acid option provides five minutes of backup power until the generator takes over. For many applications that’s more than necessary and thus a waste of money. Li-ion can cover smaller gaps.
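A simple sizing sketch illustrates why a shorter bridge time matters (the load and bridge durations below are hypothetical; real UPS sizing must also account for inverter efficiency, depth of discharge and temperature derating):

```python
# Battery energy needed to bridge a load until the backup generator starts.
# All figures are hypothetical, for illustration only.

def bridge_energy_kwh(load_kw: float, bridge_minutes: float) -> float:
    """Energy (kWh) required to carry `load_kw` for `bridge_minutes`."""
    return load_kw * bridge_minutes / 60

LOAD_KW = 500  # hypothetical critical load

five_min = bridge_energy_kwh(LOAD_KW, 5)  # the smallest lead-acid option
one_min = bridge_energy_kwh(LOAD_KW, 1)   # a shorter Li-ion bridge

print(round(five_min, 1), "vs", round(one_min, 1), "kWh")
```

If the generator reliably takes over within a minute, sizing the battery for a five-minute bridge stores roughly five times more energy than the application needs, which is the waste the text refers to.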
Batteries are getting smart
Another big benefit is that Li-ion batteries can be ‘smart’. It’s impossible to know when a lead-acid battery will fail. A data center might find out a failure has occurred only when backup power is needed, resulting in the kinds of problems that befell Delta and BA. Operators either have to accept that risk or invest in further redundancy.
Li-ion batteries, in contrast, are self-monitoring, so data-center operators are constantly aware of the battery’s health and don’t need to waste money replacing it too early. Saft’s Flex’ion Li-ion batteries are self-powered, meaning they can monitor their condition even without a power supply.
Of course, the battery is just one part of the UPS and the UPS is just one part of maintaining a reliable data center. However, as businesses rely on data centers more and more to support their digital operations, it’s vital that the underlying power system can be counted on in an emergency. The financial and reputational cost of failure is simply too high.