Alexa, please will you run my data centre as efficiently as possible, bearing in mind the constantly fluctuating workload, the regulatory environment, the current energy price, the outside temperature, the new range of Managed Services being launched today, not to mention the cost of property rental, the geographical location of all the company’s employees who will need to be accessing various applications and datasets…?
As I’ve written before, persons under a certain age might find the above, somewhat exaggerated Digital Age scenario exciting and/or believable. The older generation might quake at the mere suggestion that robots can take over the workplace to the extent that the human workers, if there are any left, will be doing the menial stuff.
Long term, it seems that the only decisions to be made are just how much automation is possible and, just as importantly, how much is acceptable. The finance sector is already well on the way towards digitalisation, and one can foresee a future where we all do our own banking, there's only virtual money, and the banks employ only robots and computers – except, maybe, for the board of directors.
And how many other industries will, ultimately, lend themselves to such a business model? Put it another way, how many industries, or specific jobs, cannot be completely automated? Given the appropriate level of artificial intelligence, it’s more than possible that a robot judge will preside over robot prosecutors and defence lawyers, who have the whole history of legal precedent stored in their memories upon which to call, as they seek to make their case for their human and, no doubt, robot clients.
In terms of transport, the automated future has all but arrived – it just needs a bit of tweaking before it goes mainstream. Shopping is halfway to being a totally virtual experience – and given that the online experience, making full use of virtual reality, will eventually replace the need to touch, pick up, smell etc. anything that we might want to purchase, one can see a time when the high streets will be empty.
Eventually, we’ll all be living in a virtual world, where a ‘real life’ experience is the exception, not the norm.
For now, here’s wishing you a happy festive season, and enjoy some very real food, drink and relaxation before 2018 looms large!
Continued strong demand driving investor appetite in Europe.
The data centre sector is seeing record investment levels as investors seek exposure to the record market growth in Europe. This investment is driven by take-up of colocation power hitting a Q3 YTD record of 86MW across the four major markets of London, Frankfurt, Amsterdam and Paris, according to global real estate advisor, CBRE.
There is also a substantial amount of new supply across the major markets as developers look to capture demand in a sector where speed-to-market is still key. The four markets are on course to see 20% of all market supply brought online in a single year. This 20% new supply, projected at 195MW for the full-year, equates to a capital spend of over £1.2 billion.
London has been centre-stage for European activity in 2017: its 41MW of take-up in the Q1-Q3 period represents 48% of the European total and has dampened any concerns over the short-term impact of Brexit on the market. London was also home to two key investment transactions in Q3, as Singapore’s ST Telemedia acquired full control of VIRTUS and Iron Mountain acquired two data centres from Credit Suisse, one of which is in London.

CBRE projects that the heightened market activity seen so far in 2017 will continue into Q4 in three forms:
A continuation of strong demand, including significant moves into the market from Chinese cloud and telecoms companies; further new supply, with CBRE projecting that 80MW will come online in Q4, including several brand-new entrants such as datacenter.com, KAO and maincubes; and ongoing investment activity, with at least one major European investment closed out by the year-end.
Mitul Patel, Head of EMEA Data Centre Research at CBRE commented: “2017 has been a remarkable year for colocation in Europe and, with 2018 set to follow suit, any thoughts that 2016 might have been a one-off have been allayed. We have entered a ‘new norm’ for the key hubs in Europe, where market activity is double what we have been accustomed to in the pre-2016 years.
“Given this ongoing market activity, it is no surprise to see so many investors wanting a piece of the action in Europe. As demand for data centre capacity continues to entice investors, the pool of available companies and assets diminishes. Consequently, those looking to deploy capital in Europe will need to act decisively, leading to more M&A investment in the coming year, beginning in Q4.
“Furthermore, the low cost of capital available to large data centre developers, and a shift from private equity to longer-term institutional and infrastructure investors, will mean that both investment volumes and prices paid will remain at historically high levels.”
The need for high-speed data transmission and increased data traffic in cloud computing has enabled the convergence of complementary metal-oxide-semiconductor (CMOS) technology, three-dimensional (3D) integration technology and fibre-optic communication technology to create photonic integrated circuits.
In the near future, by leveraging CMOS technology, the silicon medium has the potential to be fabricated and manufactured on a much larger scale. Some of the most disruptive innovations in silicon photonics are high-speed Ethernet switches, interconnects, photodetectors and transceivers, which enable high-bandwidth communications at lower cost through a low form factor, low power consumption and the increased performance of integration into a single device.
Frost & Sullivan’s new analysis of “Innovations in Silicon Photonics” finds that the North American region has seen significant growth in silicon photonic research and development (R&D) due to the location of hyper-scale data centre facilities, while Asia-Pacific has witnessed investments to improve methods for large-scale manufacturing of silicon photonic components and circuits. The study analyses the current status of the silicon photonics industry, including factors that influence development and adoption. Innovation hotspots, key developers, growth opportunities, patents, funding trends, and applications enabled by silicon photonics are also discussed.
“Currently, innovations in silicon photonics are driven by the convergence of optical and electronic capabilities on a single chip. The innovations are highly application-specific, focusing on high-speed optical communications,” said Frost & Sullivan TechVision Research Analyst Naveen Kannan. “Further research and investments are looking towards developing next-generation, high-speed quantum computing. Researchers have transformed high-speed computing by achieving quantum entanglement using two quantum bits in a silicon chip. This will enable high-speed database search, molecular simulation, and drug designing.”
Wide-scale adoption is expected in various industries, such as data centres, cloud computing, biomedical and automotive. Building low-power interconnects that use light to transfer data rapidly is the main application area within data centres. In the biomedical industry, silicon photonics will enable the creation of highly sensitive biosensors for diagnostic applications.
“Photonic integrated circuits require the designing of photonic components simultaneously with electrical and electronic components. This can be challenging,” noted Naveen Kannan. “Players can overcome this challenge by offering services in terms of developing innovative photonic integrated circuit design, product prototyping, and testing methodology as per customer requirements.”
There is no better way to understand trends in any industry than by going directly to those leading the way in that space. With this in mind, Schneider Electric commissioned a new 451 Research study designed to shed light on hybrid IT environments at large enterprises from across the globe. This research was conducted through intensive, in-depth interviews with C-suite, data centre and IT executives and offers peer lessons and perspective into innovative deployment of technologies others in the industry can use to evaluate and manage their own hybrid landscapes.
Through these interviews a few key points came to light:
As cloud services are deployed, there are ripple effects across the organization. There is a significant shift in business models, while greater demand is placed on connectivity and workload management. To realize the full value of a hybrid approach, the management of a combination of data centre environments has become one of the most complex issues for modern enterprise leaders, forcing them to re-consider strategy and common practice.
The study also reveals that while the experiences, strategies and innovative technologies used varied greatly, there were clear common themes:
In one use case, a U.K.-based retailer found that when determining the best venue, security, performance and latency requirements needed to also be factored into the total cost analysis, along with data transit and application license costs. These vectors combined helped to determine the right mix of colocation and public cloud computing infrastructure to support connectivity needs while also controlling costs.
As one study participant noted, “If I had my dream, we would have the visibility to predict failures before they occur.”
“The biggest challenge of all is cost control, and management and financial applicability of capitalization,” noted one U.S. retailer.
The North Denmark Region data center has been Tier IV Certified as a Constructed Facility by Uptime Institute, the globally recognized data center authority. This certification is testament to the organization's focus on building a highly resilient data center that ensures the availability of critical systems, benefiting patients by supporting stable hospital operation in a secure environment.
It is the first hospital data center in the world to receive this certification of the highest standard – Tier IV – for both design and construction. It joins a group of 99 data centers in the world with a Tier IV design certification, and is one of only 42 subsequently awarded the Tier Certification of Constructed Facility. As the global data center authority, Uptime Institute has certified over 1,200 data centers across 85 countries.
"IT is a vital part of the operation of a modern health system, so that patients can be treated in a timely and proper manner and have their health data safely stored. We are therefore immensely proud that we have become part of the exclusive club of only 42 companies in the world that achieve this certification,” said Klaus Larsen, CIO, Regional Nordjylland.
The North Denmark Region data center houses more than 450 systems, including 911 ambulance services, patient healthcare records, decision support systems and other critical systems supporting over 15,000 employees. With a regional population of 600,000 to support, any downtime in the data center can have major consequences for both hospital operation and the other healthcare operations in the region. Therefore, availability and stability of the data center are of the highest priority.
“North Denmark Region is a forward-looking organization, focused on protecting the public by improving the provision of healthcare in the region using technology, which is underpinned by their investment in a highly resilient and robust data center,” said Phil Collerton, Managing Director, EMEA, Uptime Institute. “Regional Nordjylland has shown foresight by designing and building the first Uptime Institute Certified Tier IV data center in the region. Their focus on achieving the Tier IV Certification for both the design and constructed facility means that the data center is ready and able to support the health service for future generations.”
Unique solutions delivered by in-house employees
The planning, design and development of the data center was completed in partnership between the employees of the North Denmark Region IT department and the technical department at Aalborg University Hospital. This means that the data center was fully designed by North Denmark Region's own employees – testament to the fact that their knowledge and competencies are among the best in Denmark.
The partnership has resulted in innovative solutions that ensure data center resilience and application availability. For instance, the North Denmark Region data center is now the only Tier IV data center in the world that uses mechanical systems and variables such as weight, temperatures and pressure to automate outage responses when issues are detected. In addition, North Denmark Region is one of a select few data centers in the world to have been certified while operational.
“We are part of an elite group in the world that has become Uptime Institute Tier IV certified, while the systems are in operation, and while building and testing”, says Michael Lundsgaard Sørensen, head of IT management, operations and support.
Eric Maddison, Senior Consultant at Uptime Institute, has participated in over 30 Tier Certifications of Constructed Facilities (TCCF), and regards North Denmark Region as one of the best-prepared organizations he has worked with.
“It is a great achievement to upgrade an existing data center to the demanding standards that an Uptime Institute Tier IV certified data center requires,” said Eric Maddison. “It is the fastest execution of a Tier IV certification I have personally witnessed. It includes an interesting use of industrially inspired control systems based on Programmable Logic Controllers (PLCs), as opposed to a Building Management System (BMS).”
New facility supports teaching and research activities and meets strict sustainability criteria.
Italtel – a leading telecommunications company in IT system integration, managed services, Network Functions Virtualization (NFV) and all-IP solutions – has designed and deployed a new green data center at the University of Pisa, one of Italy’s oldest universities.
Working within a Raggruppamento Temporaneo di Imprese (RTI, a temporary group of companies) with West Systems and Webkorner, Italtel built the data center within the Parco Regionale Migliarino San Rossore Massaciuccoli. This is linked to the university by a fiberoptic ring, guaranteeing a high level of connectivity and reliability.
The data center has been built with best-of-breed technologies, including racks and air conditioning technologies with a guaranteed level of quality and efficiency, bringing a Power Usage Effectiveness (PUE) value of 1.17. This is even lower than the requirement of 1.3 needed to certify a data center as environmentally friendly and brings additional benefits such as reduced running costs.
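As a quick illustration of the arithmetic behind these figures: PUE is total facility power divided by IT equipment power, so a lower value means less overhead. The sketch below uses hypothetical load figures of my own; only the 1.17 and 1.3 values come from the article.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical example: a 1,000 kW IT load in a facility drawing 1,170 kW overall
value = pue(1170.0, 1000.0)
print(round(value, 2))   # 1.17
print(value <= 1.3)      # True: meets the 1.3 "green" threshold cited above
```

A PUE of 1.17 means only 17% of the facility's power is spent on cooling, power distribution and other overhead rather than on the IT equipment itself.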
“Because of their requirements for continuous power and air conditioning, data centers are traditionally associated with very high levels of energy consumption,” said Fiorenzo Piergiacomi, Head of Public Sector Account Unit at Italtel. “By choosing to create a green data center, the University of Pisa has reduced its environmental impact while also optimizing the available space and cutting ongoing maintenance costs.”
One of the key criteria for the University of Pisa when selecting its new technology infrastructure was that it meets its strict sustainability goals. Data centers are huge consumers of energy and having one that meets PUE targets while still reducing costs provided an ideal solution for the University.
The project was implemented by Italtel in just eight months, despite the infrastructure challenges posed by converting part of an existing building into a reliable and technologically-advanced data hub.
With AWS adding 53,000 SKUs in the last two weeks, analysts predict the rise of cloud dealers with simple, fixed-price offerings aimed at untangling this complexity.
At AWS re:Invent, 451 Research is revealing how quickly enterprises are moving to hybrid and multi-cloud environments; the growth of the cloud market to $53.3 billion in 2021 from $28.1 billion this year; and the impact of cloud service providers’ ever-expanding portfolio of offerings.
451 Research’s most recent Voice of the Enterprise: Cloud Transformation survey finds that cloud is now mainstream with 90% of organizations surveyed using some type of cloud service. Moreover, analysts expect 60% of workloads to be running in some form of hosted cloud service by 2019, up from 45% today. This represents a pivot from DIY owned and operated to cloud or hosted third-party IT services.
451 Research finds that the future of IT is multi-cloud and hybrid with 69% of respondents planning to have some type of multi-cloud environment by 2019.
The growth in multi- and hybrid cloud will make optimizing and analyzing cloud expenditure increasingly difficult. 451 Research’s Digital Economics Unit has analyzed the scope of AWS offerings and reveals that there are already over 320,000 SKUs in the cloud provider’s portfolio. This complexity is likely to increase over time – in the first two weeks of November 2017, for example, AWS added more than 53,000 new SKUs.
“Cloud buyers have access to more capabilities than ever before, but the result is greater complexity. It is a nightmare for enterprises to calculate the cost of computing using a single cloud provider, let alone comparing providers or planning a multi-cloud strategy,” said Dr. Owen Rogers, Research Director at 451 Research. “The cloud was supposed to be a simple utility like electricity, but new innovations and new pricing models, such as AWS Reserved Instances, mean the IT landscape is more complex than ever.”
Flexibility has become the new pricing battleground over the past three months, with Google, Microsoft and Oracle all announcing new pricing models targeted at AWS. Analysts believe there will be a market opportunity for cloud dealers that can resolve this complexity, giving users simple and low-cost prices – similar to how consumer energy suppliers abstract away the complexity of global energy markets.
451 Research’s quarterly Cloud Price Index continues to track the global cost of public and private clouds from over 50 cloud providers.
Cloud market growth
The latest data from 451 Research’s Market Monitor finds that the cloud computing as a service market is expected to grow 27% to $28.1 billion in 2017 compared to 2016. With a five-year CAGR of 19%, cloud computing as a service will reach $53.3 billion in 2021.
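As a rough sanity check on these projections: the stated 19% CAGR is a five-year rate measured from the 2016 base, while growing the 2017 figure of $28.1 billion to $53.3 billion by 2021 implies a slightly lower annual rate. A minimal sketch using the article's figures (the function is my own, not 451 Research's methodology):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1.0 / years) - 1.0

# Market size figures from the article, in $bn
implied = cagr(28.1, 53.3, 2021 - 2017)
print(f"{implied:.1%}")   # roughly 17% per year from the 2017 base
```

The gap between ~17% and the headline 19% comes from the choice of base year: 28.1/1.27 gives a 2016 base of roughly $22 billion, and five years of 19% growth from there lands near the $53.3 billion figure.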
The report examines revenue generated by 451 global cloud service providers across infrastructure as a service (IaaS) and platform as a service (PaaS), as well as infrastructure software as a service (ISaaS), which includes IT management as a service and SaaS storage (online backup/recovery and cloud archiving).
The report predicts that IaaS will account for 57% of cloud computing as a service revenue in 2017.
451 Research analysts forecast that ISaaS will see the fastest growth through 2021 with a 21% CAGR, while Integration PaaS will be the fastest growth sector within the PaaS marketplace with a five-year CAGR of 27%.
70% of organizations surveyed have a Digital Transformation strategy, but only 10% have a full deployment plan.

HCL Technologies has released the findings of an independent research study of senior business and technology decision-makers regarding digital transformation at large global enterprises. The global CXO survey highlights a wide gap between strategy and execution in organizations’ digital transformation initiatives. These findings come at a time when digital transformation has emerged as a defining strategy for modern global enterprises.
Will it be possible, at some time in the not-too-distant future, for enterprises, colocation companies and cloud service providers to dispense with all the heavy infrastructural gear of the mission-critical datacentre and operate with lightweight distributed IT? How feasible is it to rely on emerging technologies that dynamically replicate or shift workloads and traffic whenever a failure looms?
By Andy Lawrence, Executive Director at Uptime Institute Research.
This prospect has been tantalising many in IT for a decade or more, and for a few, it's a reality – of sorts. Big cloud service providers, in particular, have long boasted they can rapidly switch traffic between sites when problems arise, and so they build datacentres that are optimised for cost, not availability. In their environments, they say, developers needn't care about failures. That is all taken care of.
Meanwhile, some operators and enterprises have replaced their expensive and mostly dormant DR sites with subscriptions to cloud services; others have replicated their loads across a distributed fabric of sites that are always active, and which can fail without great consequence.
All of this points to a potentially big and disruptive change in the areas of physical infrastructure, datacentres, business continuity and risk management. But as ever, the hype can screen the reality: research conducted by 451 Research and Uptime Institute suggests that although distributed resiliency is likely to be an increasingly used, and even dominant, architecture in the years ahead, it is also proving to be complex and demanding.
The engineering diligence that is so key at the mission-critical infrastructure layer doesn't map easily onto the web of services that are the foundation of modern architectures.
Our research identified four levels of resiliency.
These are not necessarily alternatives – prudent CIOs will likely find themselves using some or all of these approaches. Our study found that each of these approaches introduces new layers of complexity, as well as the promise of higher availability and greater efficiency.
Critically, the report suggests that for new cloud-native applications, it is trivial to take full advantage of distributed resiliency capabilities – not because resiliency is trivial, but because the cloud provider has made the investment in redundancy, replication, load management and distributed data management.
But for most existing applications, including many that are cloud-optimised rather than cloud-native, it is much more complicated. Some applications need rewriting, some will never transfer across, and several factors such as cost, compliance, transparency, skills, latency and interdependencies add complexity to the decision. For these reasons, complicated hybrid architectures will prevail for many years.
There are more than 20 technologies that may be involved in building truly or partially distributed resiliency architectures.
The report concludes that the use of distributed resiliency and a complex, hybrid web of datacentres, distributed applications, and outsourcing services and partners will be problematic for executives seeking good visibility and governance of risk. Outsourcing can mean that cloud providers have power without responsibility, while CIOs have responsibility without power. This is creating a need for better governance, transparency, auditing and accountability.
Collaboration is the buzzword in business today and high on the topical agenda. The market for better business communications has shifted dramatically in recent years, thanks to advancements in technology. Collaboration tools and platforms have greatly improved, allowing us to be more productive and enjoy a more flexible way of working.
Research and advisory company Gartner predicts that by 2025, “the changing nature of work and people’s digitally intermediated perceptions of connected spaces mean IT leaders must reconceptualise the 2025 workplace as a smart, adaptable environment that conforms to workers’ contexts and evolved job requirements”.
The benefits of heightened collaboration within a company are endless, especially when it comes to enhancing the customer experience. Collaborative features such as instant messaging, video conferencing, screen sharing, Voice over IP, and shared cloud document management allow a business to communicate more effectively between locations, departments (virtual or otherwise), and people.
But effective collaboration can also flag major security concerns, especially as many data breaches within organisations are caused internally. And with the General Data Protection Regulation (GDPR) set to come into effect in May 2018, it’s imperative for businesses to ensure that their collaborative systems are robust. The more we collaborate, the more we share and so data security is a business-critical priority.
As employees and organisations are seeking ways to collaborate more and more, how can you fully harness the power of collaboration in the cloud while being fully protected?
1. Invest in the right software
When implementing new technology, security remains a key priority. New collaborative tools feature secure and safe document handling, with security layers to keep your business safe and the ability to manage access permissions for safe sharing and handling of documents from remote or out-of-office locations. This allows an organisation to take advantage of new collaborative technology without the risk of sensitive data or important information being wrongly shared.
The laptop your best sales exec misplaced on the tube after a busy day of client meetings? A remote wipe and user-access reset will save you from worrying about who has access to that data now. It won’t be cheap, but it will be worth it. Select collaboration tools carefully – ones that promise optimal security and are user friendly.
2. Protect your information
To protect company information, data can be encrypted so that only authorised parties can access it. In addition, data can be assigned usage rights and given embedded classifications that follow it no matter where it goes. This ensures that shared corporate information never reaches the wrong hands and is constantly protected. Data security starts with 256-bit file encryption, which is required for all files stored on the vendor's servers and in transit; it acts as a protective shell around a file so that a hacker cannot view the contents if they intercept it at any point.
3. Define access
To give a business complete control, a company can define specific access controls using an access management system, spanning from the main datacentre to the cloud. Through this, access can be granted at different levels of the company, or specific permissions can be set for important documents. For example, some team members may be able to edit but not share documents, while others may be able to do both. This feature gives complete control over what happens to important data.
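The edit-versus-share distinction described above can be sketched as a simple per-document permission model. This is an illustrative assumption about how such access controls might be structured, not any specific product's API:

```python
from enum import Flag, auto

class Permission(Flag):
    """Document permissions that can be combined per user."""
    NONE = 0
    VIEW = auto()
    EDIT = auto()
    SHARE = auto()

# Hypothetical per-document grants: alice may edit but not share; bob may do both
grants = {
    ("q4-forecast.xlsx", "alice"): Permission.VIEW | Permission.EDIT,
    ("q4-forecast.xlsx", "bob"): Permission.VIEW | Permission.EDIT | Permission.SHARE,
}

def allowed(doc: str, user: str, action: Permission) -> bool:
    """Check whether a user's grant on a document includes the requested action."""
    return action in grants.get((doc, user), Permission.NONE)

print(allowed("q4-forecast.xlsx", "alice", Permission.EDIT))   # True
print(allowed("q4-forecast.xlsx", "alice", Permission.SHARE))  # False
```

Unknown users default to no access, so the model is deny-by-default – the property that gives the business the "complete control" the article describes.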
4. Direct management, on the fly
A business can secure its applications and data in one simple click. With collaborative security controls, security features can be added to documents no matter where they are, and take effect instantly. This means that if something needs to be changed quickly, a business is fully capable of doing so.
5. Gain visibility
A business can use data protection to track shared data, and see who accesses important information, and where it is sent to. This means that access can be given or taken away easily, with complete visibility for the business.
6. Understand the benefits of using an MCSP
Working with the right MCSP (Managed Cloud Service Provider) will help optimise use of collaborative communications and increase business efficiency, allowing you to focus on your customers. The right MCSP will provide dedicated servers in a highly secure data centre, ensure that you stay on top of legislation with fully protected and robust technologies and help manage client and team activity in a secure, shared and collaborative space.
7. Educate your team
Understanding data security and ensuring software maintains the highest level of data protection is imperative. It is therefore essential that all active users in the organisation are regularly trained in how to keep data safe while enjoying the full benefits of collaboration, and that they understand that safe and secure practices, such as password changes and social media processes, must be adhered to.
Through continual learning, development and education, organisations can find the right balance between meaningful, technology-enabled collaboration in the workforce and the business requirement to maintain IT security and protect data, privacy and business-sensitive information.
By Ian Bitterlin, Consulting Engineer & Visiting Professor, Leeds University
The hype-cycle is a curve that looks like the outline of a dromedary camel’s back and neck, with time as the horizontal and hype as the vertical axis. A new product idea starts just behind the base of the hump and the hype surrounding it quickly rises to the peak of the hump. At that stage reality starts to impact on the hype – maybe the claims for greatness are seen through as just a fad, or the cost of adoption looks too high – and the hype starts to reduce, quickly sliding down the hump to reach a trough at the nape of the camel’s neck.
At this stage in a product’s life only two things can happen: it either fails to regain the hype, stops being talked about and disappears forever (sometimes to return when enough of the potential market has forgotten about its first incarnation), or the hype recovers and the product succeeds – this time in a steady adoption stage following the hype-cycle curve ever upward.
I think that in 2017 we have seen two products in the data centre arena that are at two different positions on the hype-cycle and deserve comment.
First, we have Data Centre Infrastructure Management, DCIM. Touted as a ‘single pane of glass into the data centre environment’, it can be little more than the combination of a BMS, an EMS and an asset manager: a single screen with pretty pictures, dials and whistles, capable of storing and processing vast quantities of data and eventually issuing charts and information that enable the user to make decisions. In terms of the hype-cycle, it looks like DCIM is well over the initial hump and is rapidly approaching the critical trough – if not already there.
Promising everything at first – including the time saved by not having to manage assets on spreadsheets, and central alarm and event monitoring – the peak of the hype was reached quickly, pushed along by the many vendors who had invested great big chunks of cash in software development. We ended up with two distinct DNA sources: OEMs that came from BMS and added an IT hardware asset manager, and the inverse, IT asset managers that had BMS functionality added. The vendors paid the market reporters to review their products and we had a whole string of ‘Best DCIM in the World’ gongs issued. But very few customers stepped up in relation to the investment made by the OEMs – so the hype machine was fed more hype.

When reality stepped in, the product hype took a sharp downturn. That reality included the high implementation cost, intangible benefits (DCIM doesn’t ‘manage’, it only reports), an incalculable return on investment, and the growing realisation that, in general, you still needed to install a functional BMS with a degree of system control. DCIM didn’t appear to replace anything and quickly became seen as a ‘very-nice-to-have’ but a real luxury.

It also (still) has one commercial problem which doesn’t aid its adoption – no one has worked out who to sell it to, and when. In the project timeline of a data centre, all of the things that a BMS controls (power, cooling etc.) are procured up-front via main and specialist contractors, who don’t want to talk about DCIM because it is seen as part of the ICT system and infrastructure. By the time the data centre is finished, the IT system integrators don’t want to talk about DCIM because it isn’t in the budget and is seen as part of the M&E infrastructure. The target market, and the ideal time to hit it, have been new territory for newly converted BMS vendors to get their heads and sales budgets around. So, will DCIM pull out of the hype-trough and climb the general adoption path?
Possibly, but not likely unless the costs are substantially lower than today, OR a return on investment is proven in terms of cash, not reports. The cost may have to be reduced by more than 50%, probably more like 60%, which will lead to standardised products and plenty of vendor/market consolidation. A meaningful RoI will be even tougher to create than the market price reduction will be for the vendors to swallow.
There is one chance that might save DCIM, or reincarnate it as IDCIM (Intelligent DCI Manager), where the data centre is actually controlled based on the ICT load that is flowing in and out – but that will take another chunk of investors’ cash.
The second product on the hype cycle is the Lithium-ion battery cell. It hasn’t quite reached the top of the hump, but the hype is everywhere. So, what will happen at the peak? Well, so far, the hype is based on product attributes that are either not useful, are distorted versions of reality, or are even hidden from view by a smoke screen.
So, will Lithium-ion replace VRLA? I doubt it very much. The price difference doesn’t justify either the risk of explosion or the lack of proven reliability. Yes, Lithium-ion will find huge applicability in cars, consumer goods and probably small UPS (<10kVA) but, I think, it will slide down the large-UPS hype cycle and not recover.
Steve Hone CEO Data Centre Alliance and Guy Willner, CEO of IXcellerate talk Personal Data Protection, Information Laws and GDPR
Steve Hone: 1. Data privacy is believed to be one of the hottest topics in every industry across the globe. Looking at international news, we can see that the US and Europe have different ideas about data and its privacy. Why do we need data privacy, and how can it be realised and localised?
Guy Willner: The answer is simple: each country, each government is responsible to its citizens. It must put some regulation or control in place so that the personal information of its citizens doesn’t go to the wrong people. If that information is sent out of the country, the government no longer has control over what happens to it, so there is an element of obligation on each jurisdiction to take responsibility for the security of its citizens and their information.
2. How do data privacy and sovereignty influence international business across the world?
Until recently we thought that globalisation of the Internet meant providing a global service from one particular location. What we see now with Russia’s information law, GDPR in Europe and the Australian and German regulations is that information providers have to establish platforms within each country they serve, out of the two hundred or so countries around the world. There are already up to 25 countries with their own local data regulations, and that number will continue to increase. It is something that the larger global information providers, and anyone starting a business on a global level, now need to include in their technical and legal compliance strategy.
3. The media is talking about General Data Protection Regulation (GDPR), which is coming into force in May 2018. This is expected to create a new set of regulations, which are aimed to protect EU citizens from privacy and data breaches. What do you think about it?
The European Union with its 28 member states is beginning to flex its muscles and to have some influence over what happens to this information, so this is a good example. GDPR penalties are many times more expensive than the equivalent in Russia: in the European Union, I believe a breach can cost you 4% of the global turnover of the company or 20 million euros. So it’s a large stake that the European Union is putting in place, and no doubt it will be a boon for the IT industry as companies work out solutions for implementing it.
4. Having experience with IXcellerate and Equinix, what role do Data Centres play in the world currently?
Regarding the information law or general data sovereignty, the data centre is the foundation on which that is built, because the data has to be stored physically somewhere. When you are looking at an optimised strategy, you are not going to put that information under the desk in the office – you need to put it in a highly secured, well-managed facility. Hence, the data centre is a natural home for this type of information, and if we look at the Russian market, there are about five or six credible data centre companies, and they are all accepting many new customers who are coming in to localise their data in Russia. It is a big part of the data centre industry now.
5. How do you (IXcellerate) guarantee security?
At IXcellerate we look after our customers’ racks; we do not provide the servers, hardware, switching, etc. – we look after our clients’ own servers, hardware and switching. As a result, the most important aspect for us is physical security, which means we have five levels of protection before you get real access to a server rack. We also have a lot of preventive and defensive measures covering the monitoring of movements on or near the sites, identity checks, and so on.
With five levels of physical security, we can offer assurance to larger customers.
We operate to standards like PCI DSS, the credit card industry’s security standard, against which we are assessed and checked each year by an international certifying body. The second thing is that security is not just about preventing people from getting to your information; it is about ensuring a continuous flow of information within the data centre. IXcellerate has multiple networks, so if one system fails we can always move to the next. We have 37 carriers connected to our Moscow data centre, meaning a maximum level of connectivity in terms of network and information flow.
6. Why should International Companies choose IXcellerate in Moscow?
IXcellerate is a continuation of data centre experience that started in 1998. There is a lot of experience and expertise that has been built up over the years, culminating in the creation of IXcellerate. The team is experienced and has been dealing with banks, stock exchanges, global Internet players, global media companies and global retail companies for many years. We understand how to treat these companies and what their priorities are, and how we can help and assist them in their growth. We are the “GO TO” data centre company in Russia for international companies. With a multilingual team, we communicate with our international customers in a variety of languages including Japanese, Chinese, German and French, as well as Russian and English.
7. How do you ensure that businesses feel comfortable, being your customers? What’s your approach to customer satisfaction?
Annually we survey our entire customer base to ensure we understand their views. This was completed again in 2017, and we were pleased to record a very high satisfaction level (more than 90%).
We were also named finalists in the DCS Excellence in Service Award in 2017.
Our customers tell us that each IXcellerate employee provides excellent assistance and that the duty shift response time is within ten minutes, several times quicker than the market average. As I’ve said, we’ve been dealing with very sophisticated customers for many years; we have a lot of respect for our clients, and we like to keep in continuous touch with them so we are aware of any issues they might have.
Within the Moscow One campus we have specific areas devoted to the onsite customer experience. Customers are able to listen to music, drink coffee and relax in the open air – we maintain a high level of customer satisfaction by listening and responding to customer needs.
8. Can you please let the readers know your top three to-do items for international business?
Regarding data sovereignty, the top three items are:
1. Evaluate where you are today.
2. Evaluate what new regulations are coming into those specific countries.
3. Formulate a specific plan to work towards satisfying those requirements, because the requirements are not going to disappear.

IXcellerate operates the leading carrier-neutral data centres in Moscow, Russia. IXcellerate offers pure-play colocation designed to meet the standards of financial institutions, multinational corporations, international carriers and major content providers. The business aim is to deliver a top ‘European’ service level to all customers in Russia. The Moscow One campus, designed and operated to Tier 3 standards, sits at a prime location in Moscow where power is available in abundance, both now and in the future. The facility offers unique hosting solutions and tailor-made colocation, with appropriate security and low-latency links to and from other global markets.
 The data centre features a multi-level fire safety system:
• Level 1: Keeping the data centre tidy.
• Level 2: Regular inspection of the data centre by maintenance staff (four times a day, following specially assigned routes).
• Level 3: VESDA LaserPLUS, a smoke detection system monitoring the air in the server rooms and providing early warning. It detects fire hazards at a very early stage, before visible smoke or flames emerge.
• Level 4: An automatic fire notification system and fire alarm system.
• Level 5: In case of fire, response by data centre staff using basic tools (handheld fire extinguishers) and HI-FOG fire hoses.
• Level 6: Automatic fire suppression with HI-FOG.
Steve Hone CEO, The DCA
As 2017 ends there is a great deal to reflect on and look forward to in the year ahead.
Brexit has dominated the news in 2017, and the uncertainty surrounding it has continued to cause both business and investment challenges. Having said that, overall the data centre sector continues to grow and mature as it attempts to keep pace with the insatiable demand for digital services.
This fast pace of growth is not without its challenges and growing pains, which was self-evident following several rather high-profile data centre related outages which, combined, grounded more than 1,200 flights and stranded over 100,000 passengers. Although we all hope that valuable lessons were learnt to prevent a repeat performance, it has equally served to remind us all just what a mission-critical role data centres play in supporting the digital services that we would all struggle to live without.
On that note, I wanted to take the opportunity to congratulate Simon Allen on a fantastic job launching The Data Centre Incident Reporting Network. DCIRN is designed to help prevent what I would term ‘Groundhog Day’: we seem to be guilty of making the same mistakes time and time again rather than learning from each other, and after all, prevention is always better than cure. This is a great initiative which has the full support of the DCA Trade Association, and we look forward to working more closely with DCIRN in the year ahead.
2017 also saw the data centre trade association support a record number of events. In addition to being a global event partner for Data Centre World, the DCA has helped to organise, promote and support 23 conferences and workshops across Europe and the Far East. I am equally pleased to confirm that 2017 saw the DCA form a Strategic Partnership with Westminster Forum Projects, providing members with direct access to senior-level select conferences designed to inform and guide policy within both the UK Government and the EU on key topics which have an impact on our sector.
To be outstanding you first need to stand out from the crowd, and I am pleased to report that, thanks to the collaborative support of all its media partners, the DCA continues to provide a platform where trusted knowledge and insight can be both shared and gained. It is important to note, however, that none of this would be possible without the continued support of the members, so thank you to each member who has submitted thought-leadership content over the past year, and to our very own Amanda McFarlane (DCA Marketing Manager), who personally reviews all your submissions before release.
Research and development continued to be a strong focus for the DCA throughout 2017, with representation on all relevant international standards development groups: the EU Code of Conduct, EN 50600 and EMAS. As an EU-endorsed Horizon 2020 partner, the DCA Trade Association continued to support the EURECA project, which has been extended by the EU Commission until 2018, and the H2020-MSCA-RISE project, a collaboration between the EU and China which runs for another two years.
Although I will leave it to my esteemed colleagues to offer their predictions for what they think 2018 has in store for us all, I wanted to share some of the plans we have for next year: hopefully you will have a new members’ portal to play with in Q1, plus new member support services, more regional workshops and new tailored executive briefings designed to provide both insight and influence on key issues of focus and importance.
Depending on when you read this issue I hope you manage(d) to all have a well-earned rest over the festive season and I look forward to working with you all in the year ahead.
John Booth is the Managing Director of Carbon3IT Ltd and chairs DCA Energy Efficiency Steering Committee
Data centre energy efficiency seems to be back on many operators’ agendas – not that it really went away, but it is good to see it back.
We’ve always known that there are energy efficiency savings to be made in this space, but this time we’re going to look at it from a UK PLC perspective.
Over the past few weeks I’ve been pondering the size of the opportunity and the steps operators can take to realise energy efficiency savings, and I was suddenly struck that no one has any real data on how many data centres there actually are in the UK or what their corresponding energy consumption is, so I’ve been conducting some research in order to establish a baseline.
Calculating the actual energy consumption of data centres in the UK is a very difficult task, for a host of reasons; however, some information has fallen into my hands that may prove useful. I can’t divulge the source at the present time, and it could be a wild over- or under-estimate, but it is being used as source material for an EU project and thus the results and final report will probably be used for official purposes.
The report states that in the UK there are probably around 11,500 “enterprise” data centres, 450 colocation and 25 Managed Service Provider (MSP) data centres.
The last two numbers are in my opinion probably wrong, and this isn’t gut feel: it is based upon information published by techUK in their ‘Climate Change Agreement (CCA) for Data Centres, Target Period 2: Report on findings’, released in September 2017. That document covers 129 facilities from 57 target units, representing 42 companies. However, we must assume that there are some colocation data centres that are not in the CCA, perhaps because they only have one facility or because the administration requirements outweigh any financial benefits they might receive.
The total energy used by the organisations within the CCA for the second target period was 2.573 TWh per year, averaged over the two-year target period. This is an increase of 0.4 TWh over the previous target period, which reflects the growth in the number of participants in the scheme and in their activities.
Let’s look at the CCA PUE: the PUE in the base year was 1.95 and in the target period it was 1.80 – an improvement, but STILL far above where I would expect it to be.
The first big problem is that we have no real clue how many data centres there are in the UK. By ‘data centres’ I mean the EUCOC definition: ‘the term “data centres” includes all buildings, facilities and rooms which contain enterprise servers, server communication equipment, cooling equipment and power equipment, and provide some form of data service (e.g. large scale mission critical facilities all the way down to small server rooms located in office buildings)’. There is no national licensing requirement or regulatory regime for data centres, and reviews of government websites don’t give us much to work on, so I’m afraid it’s estimates – good estimates, but estimates nevertheless. We could use the 11,500 enterprise data centre figure mentioned in the report above, but as the report was produced by a data centre research company, we have to assume under-reporting. Why? Because the questions asked were probably targeted on the words ‘data centres’, and many users do not consider their server, machine or IT rooms to be data centres, so they go unreported.
My approach is to analyse businesses instead: in the digital world, almost every business has a digital footprint.
In 2016 there were 5.5 million businesses in the UK, but it’s safe to assume that not all of them have their own server room: 5.3 million are microbusinesses employing fewer than 10 people, some of these will be using cloud services, and others are so small that the business operates with one computer.
There are 33,000 businesses employing 50-249 people and 7,000 employing over 250, so around 40,000 businesses should have some sort of IT estate and thus require some sort of central computer room, server room or data centre. The configuration of these facilities will depend on the growth patterns, financing options and the level of understanding within the business of the criticality of the data they contain.
For the purposes of this article I’m going to leave it there. Yes, I considered the possibility that some businesses may have branch networks, but in my experience these are likely to be mini hubs – possibly one or two servers and a small network switch located in the manager’s office – so they are not worthy of further analysis and in any case do not meet our EUCOC criteria.
So, approximately 40,000 small computer rooms, server rooms or data centres may fall within our definition. Let’s add another 40,000 for non-business IT, including Government (all 26 central government departments and their local offices), local authorities, the blue-light services (NHS, Police, Ambulance and Fire) and finally universities (147), schools (24,372) and other educational establishments. So, 80,000 in all (approximately).
We still have no idea how big or small these facilities are, so I’m going to assume that most of these 80,000 have 2-5 cabinets or fewer, containing 50 items of server/storage/network and transmission equipment each. The average server draws between 500 and 1,000 watts, according to Ehow.com. If the average draw is 850 watts, multiplying by 24 hours gives 20.4 kWh daily; multiply that by 365 days for 7,446 kWh per year.
Networking and storage items draw far less – about 250 watts each on average. Let’s assume that half the IT estate is servers and half networking/storage.
With 50 items per site (25 servers at 7,446 kWh each plus 25 networking/storage items at 2,190 kWh each), the total IT energy consumption comes to 240,900 kWh per year.
We have to assume that these facilities are cooled… badly, so let us revert to the old rule of thumb that 1 watt of IT needs 1 watt of cooling, giving another 240,900 kWh of cooling energy consumption and a grand total of 481,800 kWh – for one, yes, just one, site. At commercial electricity rates this will cost anywhere between 8p and 15p per kWh, so let’s take an average electricity cost of 12p (£0.12) per kWh.
And then, if we multiply by our 80,000 locations, we get to:
• Energy cost per kWh: 12p (£0.12)
• Average energy consumption for one data centre: 481,800 kWh per annum
• Average energy cost for one data centre: £57,816 per annum
• Total number of data centres: 80,000
• Total UK PLC data centre energy consumption: 38.54 TWh per annum
Yes, you are reading those numbers right: running these 80,000 server rooms is costing the UK anywhere between £4 billion and £7 billion per annum (depending on the actual tariff), and using 38.54 TWh, which is 11.37% of the electricity generated in the UK in 2016.
If we add the CCA figure for colocation sites, we get a total of 41.11 TWh, representing 12.13% of the electricity generated in the UK.
That is a lot of juice!
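The back-of-envelope arithmetic above can be checked end to end in a few lines. This is a sketch using only the article’s own assumptions (850 W servers, 250 W networking/storage items, 50 items per site, 1 W of cooling per 1 W of IT, 12p per kWh, 80,000 sites):

```python
# Reproduces the article's UK server-room energy estimate from its own inputs.
HOURS_PER_YEAR = 24 * 365  # 8,760

server_kwh = 0.850 * HOURS_PER_YEAR    # one server at 850 W -> 7,446 kWh/year
network_kwh = 0.250 * HOURS_PER_YEAR   # one storage/network item at 250 W -> 2,190 kWh/year

# 50 items per site: half servers, half networking/storage
it_kwh = 25 * server_kwh + 25 * network_kwh   # 240,900 kWh/year of IT load
site_kwh = it_kwh * 2                         # 1:1 cooling -> 481,800 kWh/year per site

site_cost_gbp = site_kwh * 0.12               # 12p per kWh -> ~£57,816/year per site

SITES = 80_000
uk_twh = site_kwh * SITES / 1e9               # ~38.54 TWh across the UK
uk_with_colo_twh = uk_twh + 2.573             # add the CCA colocation figure -> ~41.1 TWh

print(f"Per site: {site_kwh:,.0f} kWh/yr, ~£{site_cost_gbp:,.0f}/yr")
print(f"UK total: {uk_twh:.2f} TWh; with CCA colocation: {uk_with_colo_twh:.2f} TWh")
```

Every figure in the article’s table and totals falls out of these few multiplications, which makes it easy to test the sensitivity of the estimate to any one assumption.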
So, I’ll go through how we can grab this pot of energy efficiency gold!
First, a little more clarification. I attended DCD Zettastructure in early November and spoke to a lot of people about the figures above. Most of them understood my criteria and the rationale; some said that there was NO way there were 80,000 small server rooms in the UK and that, as electricity use has been declining, my figures didn’t stack up. Wow – cue more research.
We know from the techUK report that there are 129 facilities reporting under the CCA, so I cross-referenced those with other data and found 256 sites listed; many of these will duplicate the techUK report, but there is still a significant variance.
I’m going to stand by the overall figure of 80,000 server rooms and the amount of energy consumed, as I believe this number is itself probably under-reported by approximately 10-20,000 sites. These additional sites are not conventional ‘server rooms’ and include facilities such as rail signalling control rooms, telco Points of Presence (PoPs) and mobile phone base stations (approximately 23,000).
How much energy can be saved?
From studies undertaken over the last five years or so, the minimum saving you can expect will be somewhere in the region of 15-25%, achieved by applying some of the basic best practices (more on these later). Some organisations may be able to achieve up to 70%, but that would be a very radical approach requiring a full cultural and strategic change across the entire organisation. From our figures above, you could reduce your server room energy bills by around £10,000-£25,000. This may not sound very much, but multiply it by 80,000 sites and you get a UK PLC saving of many millions. If consumption were to reduce by this amount, it might also mean that one or two fewer power stations need to be built; seeing that we (UK taxpayers) are underwriting their construction, this could reduce energy bills even more!
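As a rough cross-check on that savings band, here is a sketch using the per-site estimate of 481,800 kWh per year and the stated 8p-15p tariff range; the exact numbers will vary site by site, and deeper (up to 70%) savings at some sites would push the upper end well beyond this band:

```python
# Rough per-site savings range from applying basic best practices (15-25%),
# using the article's per-site estimate of 481,800 kWh/year.
SITE_KWH = 481_800
TARIFF_GBP = (0.08, 0.15)   # commercial tariff band, £ per kWh
SAVING_PCT = (0.15, 0.25)   # basic best-practice savings band

low = SITE_KWH * TARIFF_GBP[0] * SAVING_PCT[0]    # cheapest tariff, smallest saving
high = SITE_KWH * TARIFF_GBP[1] * SAVING_PCT[1]   # dearest tariff, largest saving

print(f"Per site: ~£{low:,.0f} to ~£{high:,.0f} per year")
print(f"UK-wide (80,000 sites): ~£{low * 80_000 / 1e9:.1f}bn to ~£{high * 80_000 / 1e9:.1f}bn")
```

Even at the conservative end of this band, the aggregate UK saving runs to hundreds of millions of pounds per year.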
Firstly, the best option is to adopt the EU Code of Conduct for Data Centres (Energy Efficiency), whose definition of ‘data centres’ ‘includes all buildings, facilities and rooms which contain enterprise servers, server communication equipment, cooling equipment and power equipment, and provide some form of data service (e.g. large scale mission critical facilities all the way down to small server rooms located in office buildings)’.
The EUCOC has 153 best practices that cover Management, IT equipment, Cooling, Power Systems, Design, and finally Monitoring and Measurement. The best practice guide is updated annually and is free to download from https://ec.europa.eu/jrc/en/energy-efficiency/code-conduct/datacentres.
It really is all you need to know about how to optimise your facility.
Secondly, using EUCOC Section 9, ‘Monitoring and measurement’, get some basic data: how much energy does your IT use, and how much energy is used by cooling, UPS and so on? I can almost guarantee that most small server rooms will not have even the most basic measuring equipment; in that case, buy some clamp meters and take readings over a month. There is plenty of information online on what and where to measure.
We’re looking for two pieces of data. The first is the actual amount of energy – and thus the cost – being consumed by the IT estate. For too long organisations have had zero visibility of this figure; some basic measurement will provide the actual cost, and then we can start to consider our energy-saving options.
The second figure is the PUE, or Power Usage Effectiveness. This is the total amount of energy consumed by the entire facility (everything used for the room: cooling, UPS, lighting, the portion used by IT maintenance personnel, security/fire systems and so on) divided by the IT load.
PUE is an ‘improvement’ metric: you calculate your baseline PUE, then re-calculate once you have taken some improvement actions; it should come down, with the result of lower energy bills. Once the initial PUE is calculated – and it’s likely to be somewhere in the region of 2-3 – you can begin to think about how to reduce energy.
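As an illustration, here is the PUE calculation applied to the hypothetical server room estimated earlier in this article (assumed figures, not real readings):

```python
# PUE = total facility energy / IT equipment energy.
# Figures below are the article's earlier per-site assumptions, not measurements.
it_load_kwh = 240_900        # annual IT consumption (servers, storage, network)
facility_kwh = 481_800       # IT plus cooling, UPS, lighting, etc.

pue = facility_kwh / it_load_kwh
print(f"PUE = {pue:.2f}")    # 2.00 here: squarely in the 2-3 band expected above
```

Re-measure after each improvement action: the ratio falling towards 1.0 means a larger share of the bill is going to useful IT work rather than overhead.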
The EUCOC explains the intention of each best practice, but how to implement it may be a problem for some managers. Continue if you have some basic knowledge; failing that, undertake some training. All of the major global data centre training providers have at least one course on energy efficiency, and the knowledge can be applied to any facility.
The EURECA project (for the public sector only) will be running its last face-to-face training course in the New Year; please visit the www.dceureca.eu website for event details and some online training courses.
If you’re still not sure, or you are in the private sector, you could contact your organisation’s IT provider; if they don’t know, contact me directly for some guidance.
For the last nine years, everything we know about data centre server energy consumption has been estimated – indeed, my figures above are estimates, though in this case probably nearer the actual figures than most. I am prepared to assist with and organise a full UK PLC survey of all data centres, server rooms, machine rooms, PoPs, mobile phone towers, transportation signalling systems and so on, if someone is prepared to pay for it. This will be a massive task and should not be underestimated: we will need to create a survey, contact 80,000-plus organisations, collate the results and analyse the data. However, the information that can be garnered will be worth its weight in energy efficiency gold.
Author: Dr. Terri Simpkin - Higher and Further Education Principal, CNet Training
Barely a week goes by without publications, industry events and think tanks declaring that skills gaps, capability shortages or talent wars are ravaging the Science, Technology, Engineering and Mathematics (STEM) industries.
Indeed, the rhetoric surrounding difficulties in finding skilled labour across a range of sectors has been decades in the making. It seems that while the issue is well known, resolutions are thin on the ground and what initiatives are in place are failing to keep pace with the rampant demand for people and their skills.
Indeed, attend any industry trade show or conference and any number of good, often-repeated ideas are put forward with well-meaning enthusiasm or disheartened frustration. More often than not, ideas from the floor suggest educating more people, better-targeted university and vocational curricula, and ‘getting into schools’. Sadly, the issue of skills and labour shortages in STEM is much more complex than these suggestions can cope with individually. If it weren’t, we’d have the situation in hand by now.
While there are any number of published reports on skills shortages across industries, the general consensus is that industry should stop suggesting that the skills gap is a future matter. It’s here now, and has been for at least a decade. A 2016 Manpower report suggests that globally 40% of all employers report experiencing skills shortages – the same proportion as in 2006.
So, what is going on, and why is managing our industries out of the skills crisis so difficult? Well, let’s look at the data centre sector as a case study for why simple resolutions won’t cut it.
It’s not just about skills.
If it were just a matter of educating more people we’d have the situation sorted within a few years. Sadly, structural factors such as an aging workforce, a global market that makes it relatively easy for people to move across borders and traditional workplace structures are hampering efforts to get people into the data centre sector.
Retirement rates, a lack of succession planning and labour turnover contribute to a challenging landscape in which the sector is unable to educate enough people to fill existing vacancies as well as the vacancies generated by growth, innovation and market complexity. The long ‘speed to competence’ of traditional learning, such as undergraduate university degrees, means that capability is often obsolete before a student has even crossed the graduation stage to accept their certificate.
So too, gaps between the expectations of hiring managers and recruitment staff may well be turning away people who could do the job but fall foul of outdated competency frameworks, recruitment wish lists (i.e. skills demands that are unrealistic or unwarranted) and slow recruitment and selection cycles.
‘Why wouldn’t people want to work in one of the world’s most dynamic and growing sectors?’
When they don’t know it exists.
The data centre sector has an image problem, insofar as it has a blurry, barely perceptible image among job seekers. It’s generally accepted that most people in the sector, particularly those who have been working in it for some time, fell into it by some mysterious stroke of good fortune.
Given that many of the industries that the data centre sector has traditionally drawn from (for example, IT, engineering, facilities management, communications) have been lamenting difficulty in finding skilled people for decades, it’s little wonder that the problem has spilled over. However, the sector is still competing for people who have a clearer picture of prospects and expectations in say, traditional mechanical engineering in industries such as manufacturing, than in the data centre sector.
Not only are we competing for skills, we’re competing, with a poorly articulated employer brand (reputation), for talented people who already have a good understanding of where other jobs exist. Waiting for people to fall into a data centre by accident is not a good recruitment strategy; the sector needs a sector-wide, globally driven branding and awareness-raising campaign in a race to secure talent where our competitors are already two decades in front.
‘We just need to get into schools’
Most sectors have a schools strategy. As with the employer brand problem, the data centre sector is miles behind the curve on this one. In a crowded landscape of well-developed, cohesive and supported schools strategies – including manufacturing, retail, health and the emergency services – the data centre sector is barely scratching the surface.
That said, some more practised organisations work to a well-constructed schools agenda and do it very well; the STEM Ambassador programme, for example. Sending well-intentioned data centre employees, managers or leaders into the local school is a great start, influencing pupils while they are still young.
STEM Ambassadors are volunteers supported by STEM.org.uk, the UK’s largest provider of education and careers support in STEM. They are charged with working with schools and colleges with the aim of achieving greater visibility of the sector.
CNet Training, for example, has a number of staff signed up as STEM Ambassadors, who are given paid time off during the year to volunteer their time promoting careers in the data centre sector. Managing Director Andrew Stevens suggests it’s time the sector took a collective stance on an awareness-raising campaign and invested time and energy with those who already have the capability to generate interest in the sector. Working together with a consistent message will make for a far stronger campaign than going it alone.
A well-articulated sector strategy that covers all levels of school engagement is required. Dr Terri Simpkin, Higher and Further Education Principal at CNet Training, suggests that widening interest in the sector has to start broad and early. “Waiting until secondary school is far too late,” she says. “Building an identifiable and attractive brand starts with getting all children interested in STEM early and then tailoring a data centre specific message later in their school career. Children from non-traditional STEM backgrounds, such as those from lower socio-economic backgrounds and girls, must be engaged early, as children start ruling out career options as young as 6 and 7 years of age. If we lose them then, it’s unlikely we’ll see them by the time they’re making choices at 14. Addressing a STEM skills shortage is a long-term activity.”
It’s a multi-faceted, wicked problem
It’s been 20 years since Steven Hankin of McKinsey and Co first coined the term ‘the war for talent’. Since then, demographic, organisational and workforce-related factors have combined to deliver a set of challenges that requires a smart, quick and consolidated response. It’s time the sector took a strategic and multifaceted approach to filling the vacancies it must fill to meet growth forecasts and to the organisational challenges expected in the foreseeable future.
Author: Giordano Albertazzi, president for Vertiv in Europe, Middle East & Africa
As with much of the technology sector, the data centre industry has undergone significant change in 2017. Rising data volumes, fuelled by connected devices and the Internet of Things (IoT), have continued to push infrastructures to their limit, and in many cases, to the brink of collapse.
The past 12 months will likely have been an eye-opener for businesses about their readiness for the years ahead. With data expected to continue to grow at a phenomenal rate, companies should be evaluating, planning and investing in updated infrastructures to keep up with the growing demand from consumers, but bearing in mind one priority: consumers will expect a consistent, immediate and uninterrupted service throughout this change.
Looking into 2018, we at Vertiv have identified five predictions to help put businesses on the front foot and start working in the right direction to adapt to these changes:
1. Emergence of the Gen 4 Data Centre: Data centres of all shapes and sizes are increasingly relying on the edge. The once promised Gen 4 data centre – a modular, prefabricated and containerised design – will emerge in 2018 as a seamless and holistic approach to integrating edge and core. The Gen 4 data centre will elevate these new architectures beyond simple distributed networks.
This is happening with innovative architectures delivering near real-time capacity in scalable, economical modules that use optimised thermal solutions, high-density power supplies, lithium-ion batteries, and advanced power distribution units. The combined use of integrated monitoring systems will then allow hundreds or even thousands of distributed IT nodes to operate in concert, ultimately allowing organisations to add network-connected IT capacity when and where they need it.
2. Cloud Providers Go Colo: Cloud adoption is happening so fast that in many cases cloud providers can’t keep up with capacity demands. In reality, some would rather not try. They would prefer to focus on service delivery and other priorities over new data centre builds, and will turn to colocation providers to meet their capacity demands.
With their focus on efficiency and scalability, colos can meet demand quickly while driving costs downward. The proliferation of colocation facilities also allows cloud providers to choose colo partners in locations that match end-user demand, where they can operate as edge facilities. Colos are responding by provisioning portions of their data centres for cloud services or providing entire build-to-suit facilities.
3. Reconfiguring the Data Centre’s Middle Class: It’s no secret that the greatest areas of growth in the data centre market are in hyperscale facilities – typically cloud or colocation providers – and at the edge of the network. With the growth in colo and cloud resources, traditional data centre operators now have the opportunity to reimagine and reconfigure their facilities and resources that remain critical to local operations.
Organisations with multiple data centres will continue to consolidate their internal IT resources, likely transitioning what they can to the cloud or colos and downsizing to smaller facilities that leverage rapid-deployment configurations able to scale quickly. These new facilities will be smaller, but also more efficient and secure, with high availability – consistent with the mission-critical nature of the data these organisations seek to protect.
In parts of the world where cloud and colo adoption is slower, hybrid cloud architectures are the expected next step, marrying more secure owned IT resources with a private or public cloud in the interest of lowering costs and managing risk.
4. High-Density (Finally) Arrives: The data centre community has been predicting a spike in rack power densities for a decade, but those increases have been incremental at best. We are seeing this change in 2018. While densities under 10 kW per rack remain the norm, mainstream deployments at 15 kW are no longer uncommon, with some facilities inching toward 25 kW.
Why now? The introduction and widespread adoption of hyper-converged computing systems is the chief driver. Colos, of course, put a premium on space in their facilities, and high rack densities can mean higher revenues. And the energy-saving advances in server and chip technologies can only delay the inevitability of high density for so long. There are reasons to believe, however, that a mainstream move toward higher densities may look more like a slow march than a sprint. Significantly higher densities can fundamentally change a data centre’s form factor – from the power infrastructure to the way organisations cool higher-density environments. High density is coming, so prepare for it later in 2018 and beyond.
5. The World Reacts to the Edge: As more and more businesses shift computing to the edge of their networks, critical evaluation of the facilities housing these edge resources and the security and ownership of the data contained there is needed. This includes the physical and mechanical design, construction and security of edge facilities as well as complicated questions related to data ownership. Governments and regulatory bodies around the world increasingly will be challenged to consider and act on these issues.
Moving data around the world to the cloud or a core facility and back for analysis is too slow and cumbersome, so more and more data clusters and analytical capabilities sit on the edge – an edge that resides in different cities, states or countries than the home business. Who owns that data, and what are they allowed to do with it? Debate is ongoing, but 2018 will see those discussions advance toward action and answers.
Director Cloud & Software, UK&I at Ingram Micro, Apay Obang-Oyway, discusses the many appealing facets of cloud infrastructure.
Gartner’s report of an $11 billion increase in spending on cloud system infrastructure in the past year alone suggests the growth in cloud technology shows no sign of dwindling. This progression is often attributed to the alluring possibility of outsourcing the responsibility for infrastructure to a third party. However, cloud’s meteoric growth owes itself to a range of additional attributes which, when combined, make cloud a powerful enabler for greater business innovation and agility.
According to research from Vanson Bourne, commissioned by Ingram Micro Cloud in 2017, which surveyed 250 UK-based cloud end users from a variety of organisations across a range of key sectors, over two-thirds (68 per cent) of cloud resellers feel it is necessary for their organisation to offer 24/7 continuous customer support on top of the cloud platform, with 60 per cent seeing support during the implementation process as crucial. The desire for competitive advantage, combined with the ubiquity of information on the value of advanced technologies, means that end users increasingly request business expertise from third parties.
Economics will always play a key role in influencing a business when making decisions, regardless of the department within the company. Therefore, businesses will be searching for ways to save money without compromising on quality. By optimising how infrastructure and IT management responsibilities are handled by internal teams, businesses can significantly reduce their capital expenditure, which frequently places pressure on an organisation’s bottom line. Consumption-based pricing models further reward companies financially by enabling them to strip out unnecessary spending, freeing up valuable funds to be invested elsewhere within the company.
While the financial benefit is obviously a core driver for a business, cloud is also an opportunity for positive business transformation. The introduction of cloud drives business innovation across the organisation, creating greater value and an enhanced customer experience. Upkeep of IT infrastructure is notorious as a drain on time and resources. If infrastructure is moved to the cloud, IT staff will find themselves with more time to innovate and to allocate their resources towards business value creation activities. The time saved gives IT departments the valuable room they need to concentrate on engaging the business on its strategic needs and deploying the solutions that enable systems and processes to leapfrog the competition. This move will shift the perception of the IT department from a cost centre whose role is to keep everything running to an innovation hub that provides real value to the business.
Vanson Bourne’s research also states that scalability is a strong factor for end user organisations, with 68 per cent of respondents stating it was something they looked for in a cloud solution. This figure provides a clear indication that those using cloud want a solution that provides the ability to innovate.
Regarding innovation, cloud experts will be looking to the future to observe how cloud will assist the adoption and realisation of value from new technologies. Artificial intelligence (AI) is the new technology that seems to be on everyone’s mind today; despite being in its early stages, AI has already begun to show huge promise. The value of these technologies lies in embedding them within existing industries and processes as an enabler of greater value in less time and at lower cost.
Cloud has the ability to unite the organisation to enable faster and stronger collaboration across all departments. The exciting opportunity arising from augmenting newer technologies, such as AI, IoT and other cognitive services can deliver fantastic business outcomes from improved processes, greater employee engagement and an enhanced customer experience. Cloud-based estates help reduce the management burden of stretching legacy systems.
DSW has found a way to differentiate itself from the competition, solving a consumer’s pain of needing a specific pair of shoes for an event. Volvo’s “Care By Volvo” plan offers drivers, many of whom are already bucking the car-ownership trends of the past, trouble-free access to a car.
These are good examples of the radical new types of revenue models that solve pain in the customer experience, enabled by innovative use of technology and the Internet of Things. Sounds great, right? They are brilliant ideas, it’s true. But new business models such as these create numerous issues.
Whether it is rental shoes or cars-by-subscription, logistics take on a whole new flow. It used to be fairly easy with the single direction of the product - from supplier to warehouse to store. Omni-channel made this much more complex due to different fulfilment points, but shoe rental or car subscription make it even more complex again - with products moving all over the place.
Technology requirements become massively important – all of a sudden you have to be able to track a given item through the supply chain, along a specific route that will be very different every time, and handle the reverse logistics element too.
New style business models like these mean that business processes become so much more complex than before and this puts pressure on the efficiency of the new - and perhaps existing - operations. To fulfil the needs you will need to consider using IoT to track the product through the supply chain until it gets to the customer. You may need a customer-facing app where the customer can browse the rental product catalogue for a request - but also to increase the length of rental or change the return location if required. And all this needs to plug into the finance, operational and logistical systems that already exist.
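The multi-directional flow described above can be sketched as a simple state machine. This is a minimal illustration only – the states, names and transitions are hypothetical and reflect no vendor’s actual system:

```python
# Hypothetical sketch of multi-directional rental logistics tracking.
# States and transitions are illustrative, not any real company's model.

VALID_TRANSITIONS = {
    "with_supplier": {"warehouse"},
    "warehouse": {"with_customer"},
    "with_customer": {"return_in_transit", "with_customer"},  # rental can be extended in place
    "return_in_transit": {"inspection"},
    "inspection": {"warehouse", "retired"},  # reverse logistics: restock or retire
}

class RentalItem:
    def __init__(self, sku, item_id):
        self.sku = sku
        self.item_id = item_id
        self.state = "with_supplier"
        self.history = [("with_supplier", None)]

    def move(self, new_state, location=None):
        """Record a movement, rejecting transitions the flow does not allow."""
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append((new_state, location))

shoes = RentalItem("heels-38", "SN-001")
shoes.move("warehouse", "Leeds DC")
shoes.move("with_customer", "customer address")
shoes.move("return_in_transit", "drop-off point")  # return location can differ from dispatch
shoes.move("inspection", "Leeds DC")
shoes.move("warehouse", "Leeds DC")                # restocked for the next rental
```

In practice each `move` would be driven by an IoT scan or tracker event and would also need to notify the finance, operational and logistics systems the article mentions.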
This kind of approach is an example of a digital business – one that uses technology to enable a new business model focused on the customer and customer experience. Innovation of this kind is rarely supported by commercial off-the-shelf application software.
To borrow the words of Massimo Pezzini of Gartner: “To make customer experience, IoT, ecosystems, intelligence and IT systems work together, a digital business technology platform must effectively interconnect all these sub-platforms at scale.”
A Digital Business Platform will enable you to rent those shoes in time for the ball. And will help your favourite car company to let you use their car on subscription.
DCS talks to all things cooling with Alan Beresford, MD, EcoCooling, covering the company’s product range, involvement in the Horizon 2020 EU data centre Research Project, a recent launch and expansion plans.
1. Please can you provide some background on EcoCooling – when the company was formed, the key personnel and the like?
EcoCooling was formed in 2002 to provide low cost evaporative cooling systems for industrial applications. I came from an industrial background where I managed large factories which needed cooling but were very cost conscious – the evaporative cooling solution appealed to me and I set up the company to develop this.
2. And what have been the key company milestones to date?
Sorting out legionella control was vital to providing an evaporative cooling system which stands up to real scrutiny. My background of brewing and food production was central to the configuration of the EcoCooler control system and the adoption by the UK markets.
3. Please can you give us an overview of EcoCooling’s technology portfolio?
We supply direct ventilation and evaporative cooling systems with controls and ancillary items like very efficient EC fans. With my team of engineers we both design the cooling systems and also provide detailed application support to installers and end users.
4. What exactly is evaporative cooling?
On warm days in a temperate climate, simply putting the outside air in contact with water cools it down: the evaporation of the water takes energy out of the air, which then cools. This means we can keep a data centre under 24°C all year round without any mechanical refrigeration.
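The physics above can be put in rough numbers. A direct evaporative cooler drives the air towards its wet-bulb temperature; the standard approximation is supply ≈ dry bulb − effectiveness × (dry bulb − wet bulb). The effectiveness figure below is an assumed typical value for wetted-pad coolers, not an EcoCooling specification:

```python
def evaporative_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.9):
    """Estimate supply air temperature from a direct evaporative (wetted-pad) cooler.

    The pad cools air towards its wet-bulb temperature; 'effectiveness'
    (assumed here at ~0.9, typical for good pads) is how far it gets.
    """
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A warm UK summer day: 30 C dry bulb, 20 C wet bulb.
print(round(evaporative_supply_temp(30.0, 20.0), 1))  # 21.0 -> comfortably under 24 C
```

This is why the approach works in a temperate climate: even on hot days the wet-bulb temperature stays low enough for the supply air to sit below the 24°C target.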
5. And where do terms such as ‘fresh air’ and ‘direct adiabatic’ cooling fit in?
Data centres normally have closed loop cooling. Fresh air or Direct Cooling means that outside air is brought into the data centre.
6. And then EcoCooling talks about ‘free cooling’?
In a temperate climate it is cold outside most of the time, so there is no need to cool the air. Outside air which is already cooler than the data centre requires needs no additional cooling, and so is FREE. This is not quite true, as the fans consume power, but that is normally less than 10% of the operating cost of conventional refrigeration-based cooling.
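The sub-10% claim can be sanity-checked with a back-of-envelope comparison. The figures below (a chiller COP of 3, fan power at 3% of IT load) are assumptions for illustration, not EcoCooling data:

```python
# Back-of-envelope comparison of cooling electricity for the same 100 kW IT load.
# All figures are assumed for illustration.

it_load_kw = 100.0

# Refrigeration-based (CRAC) cooling: assume a coefficient of performance (COP)
# of ~3, i.e. ~1 kW of electricity removes ~3 kW of heat.
crac_power_kw = it_load_kw / 3.0

# Free cooling: only the fans draw power, assumed here at ~3% of the IT load
# (efficient EC fans moving outside air).
fan_power_kw = it_load_kw * 0.03

print(f"CRAC draw: {crac_power_kw:.1f} kW, fans: {fan_power_kw:.1f} kW")
print(f"fan cost as share of CRAC cost: {fan_power_kw / crac_power_kw:.0%}")
```

Under these assumptions the fans cost roughly a tenth of what the refrigeration plant would, consistent with the figure quoted above.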
7. What are the benefits of the technologies that EcoCooling offers for data centre cooling?
EcoCooling systems for data centres offer a number of benefits beyond operating at less than 10% of the running cost of CRACs. Very simple maintenance and operation reduces dependency on expensive support staff. Modularity allows CAPEX to reflect business growth with simple redundancy.
8. EcoCooling offers an internal product range?
The internal cooler we developed for the UK’s largest Telco provider provided the starting point for what we now call the CloudCooler range. These free standing, plug and play, coolers provide a range of fresh air options for data centres and support the full range of IT equipment.
9. And an external product range?
The external range of coolers is normally used in retro-fit applications. These adiabatic cooling systems are typically installed in live data centres and run in parallel with existing refrigeration systems, which can be retained or decommissioned at the end of the project.
10. What are the factors to consider when deciding between internal and external cooling options?
It is all about the building and the application. Our applications engineers work with the stakeholders to develop all options for the installation and controls to reflect the location and other business drivers such as resilience.
11. EcoCooling recently launched the CloudCooler© range – can you talk us through this?
The CloudCooler range is ideal for customers with rapid deployment strategies. The simple design means low TCO for end users and plug and play modules allow for flexible installation according to demand.
12. And how important are the control systems when looking at an overall data centre cooling solution?
Very important. With an evaporative cooling system you can easily get into what we at EcoCooling call a ‘doom loop’, where you end up bringing air over the wetted pad twice – for a data centre application, this can end in catastrophe. In addition to these basic principles, our control systems have been developed with ASHRAE compliance and maximum efficiency in mind.
13. EcoCooling is involved with the Horizon 2020 EU research project focusing on the data centre of the future. Can you tell us more about the project?
We are really excited to be working with a European wide consortium of companies and research institutions in order to develop the world’s most efficient data centre. Whilst this is a great opportunity for us, we hope the research and design can be used as a template to help others building data centres to increase their efficiency.
14. Can you give us brief details of EcoCooling’s current footprint in the data centre market – in terms of geographical and industry-by-industry reach?
EcoCooling operates mainly in Europe, although we have data centres in places such as New Zealand which use our equipment. We have seen massive growth in the Nordics recently due to the low-cost power and low temperatures. We expect demand for our equipment to increase there and also across Eastern Europe.
Data is an incredible resource that businesses can harness to gain insight and tackle even the most complex of business problems. In fact, developing a data-driven approach across an organisation is a key priority for 73 percent of CIOs, according to our recent survey of 100 UK CIOs. The best way to understand how crucial data is across all sectors of business is by looking at the level of investment leaders are making in order to build their data capture and analytics capabilities.
By James Eiloart, Senior Vice President EMEA, Tableau.
As a result, data has caused a structural reform in many businesses as they create new job roles to handle the interpretation and utilisation of this beneficial asset. The rise of the ‘data scientist’ will only continue, and those who are able to harness the power of data by turning information into actionable insights will be prosperous in the job market.
Although these specialists are highly regarded when approaching complex data sets and larger issues, the focus on data should not end here. Data is powerful and can lead to breakthroughs in a number of a company’s most important areas. However, in order to optimise the potential for business value, it is important to go beyond the scope of a small group of experts. Data should be accessible to every employee within a company, and every employee should possess the necessary toolkit to understand the data they are using.
Without a doubt, data is penetrating every industry, every company and ultimately every employee within a business. No matter how small a business is, the insights provided can lead to transformational results.
It is easy to make data accessible to every employee within a company. However, in order to see results, employees must perceive data as a valuable and routine resource. The best way to do this is by instilling a culture of data analytics. I am fortunate enough to work for a company that not only creates great software for data analysis but also uses its own product within every team and at every level to make decisions. With a transparent approach, keeping data at the heart of every discussion, we are truly data-driven.
My experience at Tableau has taught me that there are six key elements needed to foster a data-driven culture:
1: Empowerment leads to transformation
Business change must start with the individual employee. The key is to help them feel energised, capable, and determined to make the organisation successful. An introductory period is essential here, in which employees feel they can ask questions and explore data in a way that sparks curiosity and intrigue. ‘Data for the few’ is an outdated model that companies must abandon in order to nurture a data-inspired workforce. The onus therefore falls on senior leadership to ensure data is accessible and that employees are equipped with the necessary technology tools that enable them to cultivate it efficiently and effectively. Learning new skills to tackle data empowers employees, creating a positive transformation within the business and for employee morale.
2: Provide the correct toolkit
Fostering the right environment means providing the right training. Data can be confusing and intimidating for most. The use of today’s self-training tools, which offer use-cases, online videos, and more, can help mitigate the daunting goal of understanding data and its role in a specific employee’s work scope. Training should focus on features and functionality, but there is a bigger picture to take into account. This type of upskilling requires special attention to human skills like critical thinking and analytical curiosity, and a foundation in relevant fields like data visualisation. Outside experts are a great source of knowledge and ideas that can help with this shift of employee mindset. Employees who are properly skilled and inspired will become confident in using data beyond their original scope.
3: A support system to tackle unease
Confidence is key in order to experience a truly empowered workforce. Providing the correct tools can only go so far, especially when dealing with large data sets or complex questions that can make employees feel uneasy or unsure at first. Support from leadership is vital in cases where a company needs to move beyond a “need-to-know” mentality – otherwise, the entire transition will be for naught. Addressing employee unease and then taking steps like providing guidelines and support will help counter this.
4: New talent with an affinity for analytics
In forming an analytics culture, it becomes clear that there are gaps to fill with new talent in order to foster its development. When looking at new candidates, data skills should play a role in the selection process. However, an ideal candidate isn’t necessarily the one that looks the best on paper from a technological standpoint. It is true that technical skills are vital in some roles, but they do not trump the non-negotiables of an analytics culture: critical thinking and an innate sense of curiosity. Logic and reasoning tests that stretch analytical capabilities should be part of the interview process. Candidates who excel and show a level of curiosity are better able to work autonomously and thrive in an analytics culture.
5: Incorporate data into the company language
Senior leadership can instil a data-driven culture by prompting, even insisting on data-driven answers from their employees. This helps encourage and reward the correct type of thinking. Senior leaders should phrase their questions in order to warrant recommendations backed by data. Even more crucially, this expectation should extend beyond middle management into their respective teams. This creates a space where data literacy encompasses all conversations and every employee feels confident in backing up their answers with the appropriate data; from ‘we think’ to ‘we know’.
6: It’s an evolution, not a revolution
Real change takes time, especially when it requires an entirely new way of thinking about a culture driven by data within a business. Nobody can embrace a data culture overnight, so this shift should be seen as an evolution, not a revolution. It requires a dedication to the cause and continuous support and prompting from senior leaders. A year from now, it will be clear which companies took the time and committed the resources to instil such a culture, and which are left wishing they had taken the first steps.
It takes efforts from every level of an organisation to foster a data-driven, analytical culture. From empowering data scientists and the rest of the workforce, to providing educational tools; from a more focused hiring process, to challenging each department to bring data to the heart of every business meeting. What is most important is that data becomes an equaliser, accessible and optimised by all so that everyone can ask questions and make more data driven decisions.
While all six points vary, one factor remains consistent throughout. Change cannot be successful without strong leadership. Guidance and support is essential to driving a workforce towards a data awakening.
An IT third-party support and maintenance company exists to provide maintenance and support services, typically for older, legacy IT equipment, independent from the IT vendor.
By Simon Bitton, Director of Marketing, Europe at Park Place Technologies.
When considering how to ensure that you get the best support for your IT storage, server and networking equipment, there are two crucial questions you need to answer:
Myth 1: Only the OEM can provide quality support
In contrast to an OEM focus on their own hardware models, third party maintenance companies such as Park Place Technologies have in-depth skills and training on many different hardware models from a variety of OEMs.
If you have a multi-vendor data centre infrastructure, it should be noted that OEM engineers are not required to be experts in storage, server and networking equipment that is not their own, so their solutions may be limited in scope. In contrast, third party maintenance engineers can be certified to service multiple models from various hardware vendors in order to provide technical support, health checks and the ability to restore data centre uptime quickly.
As a third-party maintenance company, Park Place Technologies can provide our customers with these 7 key values:
1. Maintenance and support centric services
2. Hardware knowledge and impartial, alternative and independent advice and consultancy without vendor bias, including the ability to cover all hardware regardless of vendor, model or part
3. Problem solving and quick response on quotation and service delivery
4. Simple billing and flexible contracts
5. Mix and Match SLAs tailored to the requirements of the customer
6. Experienced engineering team
7. Access to a wide range of parts and, contrary to perception, the ability to integrate easily with other IT suppliers/providers to complete major IT projects
Myth 2: OEMs offer a better range of services than Third-Party Support companies
The conflicting interests of OEMs may compromise the service experience for the customer, as OEMs generally prefer to up-sell or replace hardware after three years rather than support the hardware for longer with a support and maintenance contract.
Service Level Agreements (SLAs) provided by the OEM often put the customer at a cost disadvantage, providing limited support in order to keep the OEM’s costs down and, in turn, maximise its profits.
Third party maintenance companies such as Park Place Technologies exist to provide service and support, focusing solely on the best interests of the customer.
Third-Party Support/Maintenance Providers v OEMs – points to consider:
Myth 3: OEMs offer the best value solutions
Third-party maintenance companies tend to offer comprehensive in-depth analysis and reporting into their service and solution offerings, being able to suggest and implement creative and innovative upgrade and reconfiguration measures that can save time and money in the long run, without sacrificing functionality.
By successfully extending the life of hardware equipment, third-party maintenance providers such as Park Place Technologies can help customers review their existing hardware infrastructure and plan for future hardware upgrades and/or replacements, without being forced into the typical hardware refresh cycle, which increases costs – particularly when it’s not required.
The independent and impartial advice of third party maintenance providers can result in better use of existing data centre hardware systems by extending the life of data centre hardware equipment that would have otherwise been replaced by the OEM.
CapEx v OpEx
Myth 4. Only OEMs have all the answers
Increasing numbers of customers have moved their support to specialist maintenance suppliers such as Park Place Technologies to take advantage of greater service and commercial flexibility and the substantial savings available.
Vendor reactions to this have varied, but in recent years there has been a move by some manufacturers (notably Sun/Oracle, HP & IBM) to try and introduce an element of FUD (Fear, Uncertainty and Doubt) into the decision process by placing certain restrictions on the availability of machine code, firmware, patches and updates.
However, by combining education, planning and a pragmatic approach, we can ensure that any risks are minimised and that the substantial savings available are realised.
In our experience:
The Hidden Myth: Why aren’t more people exploring third-party maintenance?
In March 2016, Gartner reported that; “End-user interest and demand for alternatives to OEM support for data centre and network maintenance are increasing, fuelled by a need for cost optimisation, particularly for post-warranty and EOSL data centre and network devices.”
Have you thought about using a third-party maintenance/support company for your data centre hardware? Many companies in the UK and further afield in Europe are simply unaware of the value, expertise, knowledge and cost savings that a third-party supplier/maintenance provider can bring.
In summary, with a third-party maintenance provider such as Park Place Technologies, you can benefit from:
To find out more about how you can benefit from third-party support for your data centre hardware including storage, server and networking equipment support and maintenance services, contact Park Place Technologies today.
Being clever in the cloud is about transforming the way businesses deploy and manage applications. Firms that continually innovate and build a sustainable cloud strategy deliver significant value to customers and differentiate themselves in a crowded marketplace.
By Tristan Liverpool, Systems Engineering Director, F5 Networks.
So, how is it possible to effectively harness the cloud in all its incarnations? What does it take for an organisation to become a Cloud Climber? What does success look like?
It all starts by embracing effective cloud architecture methodologies to drive value across the business through to end customers, all while maintaining application security and data protection. Scalability, flexibility, automation and speed to market are just some of the attributes that define an organisation thriving at the forefront of cloud technology and demonstrating tangible, on-going results.
Businesses today need to move quickly and meet customer demand to be competitive. However, it is important to recognise that while public cloud providers may guarantee infrastructure security, it is the application owner that is responsible for data security. Some firms adopt a combination of on-premises, public cloud, private cloud or Software-as-a-Service (SaaS) to deliver applications. This approach offers greater flexibility, but can also increase the cost and management overhead of securing applications across a wider range of environments. In addition, enterprises need to avoid vendor lock-in, to increase operational flexibility and make it much easier to build an effective cloud deployment strategy without having to change the underlying network architecture. In this respect, many firms still struggle to achieve agility and efficiency or gain full control over IT and infrastructure.
According to a recent report by RightScale, 85 percent of enterprises have a multi-cloud strategy, up from 82 percent in 2016. Cloud architects typically use a unified platform to deliver and consistently manage application services and associated policies across different environments. This applies to existing applications, as well as new cloud-native applications – all without sacrificing visibility, security and control. An entrepreneurial Cloud Climber can do this because they fully understand the options for developing the best cloud architecture to drive their business forward, and because they follow some key disciplines.
Security is the primary consideration when navigating the cloud. Identifying which applications to move and which to retain in the data centre is a fundamental decision. It is also vital to quickly identify if the cloud vendor’s native security is sufficiently effective.
Prioritising and classifying commercial apps is central to an effective cloud migration strategy. Apps approaching expiration would most likely remain in-house until obsolete. Some apps need to be re-architected into a cloud-native form. Others can be moved through a ‘lift-and-shift’ method that replicates the app in the cloud with minimal adjustments. Determining the right cloud environment with the right support services optimises app performance, increases protection and improves traffic management.
At the infrastructure level, the cloud can be more secure than private data centres. However, it is essential to determine whether in-house IT can manage the complex security services or whether off-the-shelf services from a cloud vendor are a better option. Achieving greater visibility and deeper intelligence into the traffic on your network and in the cloud requires effective analytical tools. In addition, a powerful, flexible, multi-environment web application firewall (WAF) can deliver comprehensive app protection, while access to the latest attack data helps ensure compliance.
Mitigate risk by using solutions that provide dynamic, centralised and adaptive access control and cloud federation for all applications, wherever they reside. This enables customisable security policies that follow the apps, securing authentication for users anywhere, on any device. Protecting data with an easy-to-deploy, next-generation DDoS solution guards against the most aggressive and targeted attacks.
Successful Cloud Climbers unify departments with an integrated cloud strategy wherein architects, DevOps and NetOps work towards a common goal. A cloud architect, for example, may utilise tools that can help to automate IT infrastructure and train staff to be more efficient. Alternatively, adopting a DevOps model to collaborate with other teams can address issues facing many other areas of the business. By changing working processes and procedures, organisations get the best from staff and maximise the technology in their cloud project. Devising a sustainable cloud training programme for key staff will make the cloud transition easier and arm decision-makers with better knowledge.
Breaches pose the biggest threat to cloud infrastructure. By monitoring cyber threats, IT departments can keep pace with a varied and rapidly evolving threat landscape, and build up a better understanding of attacker behaviour. Recognising the full security threat landscape must apply to all types of clouds, so that staff can secure SaaS, platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) scenarios alike.
Cloud Climbers invest in the best multi-cloud solutions that provide programmable application services capable of integrating into any public, private, hybrid or colocation cloud solution stack. Optimised tools deliver services in an automated, policy-driven manner that meet security and compliance requirements without slowing development teams. Cloud-agnostic technologies can also provision, manage and scale across any of the leading cloud providers and the hybrid cloud. The best application delivery services are independent of the environment, precluding cloud lock-in, and can increase speed to market and reduce future switching costs.
To be a Cloud Climber you need to innovate and focus on return on investment. You need to have the skills to evaluate options and make complex systems function seamlessly. The right approach successfully completes ‘dev-and-test’ projects and expands operations into the cloud without sacrificing omnipresent application performance, control or security. An effective cloud architecture strategy increases business agility and provides flexibility to scale based on shifting hardware, software and on-demand requirements. Meanwhile, application control, access and security ensure optimal service performance, availability and security. The right approach is a comprehensive multi-cloud solution that drives innovation and adds customer value. In an era of digital transformation, now is the time to start climbing.
We believe that the single biggest risk of a business breaching the new GDPR legislation will come from poor test data management.
By Dan Martland, Head of Technical Testing Practice at Edge Testing.
Test data management covers a wide range of specific quality-assurance-driven disciplines that support all IT- and business-driven test phases. Test data can be the ‘forgotten man’ when building business-driven test scenarios. From a GDPR compliance viewpoint, however, it will need to take centre stage.
Test data management typically covers several key quality-assurance-driven activities that promote rigorous testing.
Customer data is critical when building new services, and regardless of how companies and other organisations are using that data, they will all now be facing the same General Data Protection Regulation (GDPR) challenge: how to mask that customer data, or build accurate and usable synthetic data, while retaining referential integrity for testing.
The imminent arrival of the GDPR in May 2018 is bringing the testing community to the forefront of data handling practices. The penalties for non-compliance are now well known, so testing for GDPR compliance will become an embedded feature and may well drive late-phase testing lifecycles.
Despite the fact that organisations accessing or processing EU-based personal information must comply with the GDPR, only 19 percent of UK CIOs currently have plans in place to deal with it.
Against this backdrop, testing providers are being called upon to ensure that individuals’ data is processed securely and that an audit trail is in place for compliance purposes. Here we take a look at the key challenges presented by the GDPR and the transformative role test data management can play on the road to compliance.
Individuals will need to provide specific and active consent covering the use of their data, and this demands a change in processes around the gathering and withdrawal of consent. Consent for third-party processing is also affected as the data owner is liable for its data, wherever the data is handled.
Companies will need to define and then manage legitimate data use and length of storage time before archival and deletion. Test data management will be pivotal in supplying evidence to regulators that a business has due diligence in place particularly for exceptions around legitimate business uses such as pursuing outstanding debts, or holding on to an address for a warranted product.
While a copy of real information may have been used to test systems in the past, this simply cannot continue; individuals need to give explicit and informed consent for their data to be used for testing. Making this a pre-requisite to allowing a customer access to your system or service would be looked on poorly by the regulator and could represent a breach of the GDPR in its own right.
Testing new features will be necessary, for instance to determine how a consent slip will be processed. Different test data sets, scenarios and combinations of consent will need to be created, ensuring that all processes can be measured correctly.
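As a sketch of how such combinations of consent might be enumerated into test-data sets (the consent fields are illustrative assumptions, not taken from any particular system or from the GDPR text itself):

```python
from itertools import product

# Illustrative consent dimensions; the field names are assumptions.
CONSENT_FIELDS = {
    "marketing_email": (True, False),
    "third_party_sharing": (True, False),
    "profiling": (True, False),
}

def consent_test_cases():
    """Yield one test-data record per combination of consent flags."""
    keys = list(CONSENT_FIELDS)
    for values in product(*(CONSENT_FIELDS[k] for k in keys)):
        yield dict(zip(keys, values))

cases = list(consent_test_cases())
print(len(cases))  # 2^3 = 8 scenarios, one per combination of consent
```

Each generated record then seeds a test scenario, so that every combination of given and withheld consent can be exercised and measured.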
The GDPR also states that individuals have the right to data portability. This right allows citizens to move, copy or transfer personal data easily from one IT environment to another in a safe and secure way.
Data portability will require testing, and ensuring compliance for data in flight will be a major exercise for organisations that have high volumes of live data in non-protected environments.
Through test data management, testers will have access to the data in a structured and readable format and be able to confirm that the original data has been removed from the ‘source’ system.
All of the above will drive new risk based test scenarios which in turn will impact how late phase testing such as User Acceptance Testing is defined, planned and undertaken. This will have an additional impact on the quantity of ‘Must Tests’ that will need to be executed within a UAT test window.
There are two key data types commonly used in testing: real and synthetic. While the use of synthetic data may be preferable (because it can be highly targeted), it is not always possible, in which case anonymisation is essential. With anonymisation (or ‘masking’ as it is sometimes known), the format of the data stays the same, but the values are altered in such a way that the new values cannot identify the original information. Using synthetic data and data masking are two of the approaches organisations will now need to consider as they move away from using copies of live production data.
Testing teams can help to create the data, and powerful tools enable anonymisation whilst protecting the real source data. Some tools provide data snapshots: users no longer work on the database itself but on a snapshot of the data, and that snapshot can be anonymised. Other strategies are also available, for example dynamic anonymisation, where the result of a query is anonymised in real time so there is no need to take a snapshot.
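A minimal sketch of deterministic masking, in which a keyed hash (HMAC) stands in for a full masking tool: the same input always produces the same masked value, so joins between masked tables still line up, while the original cannot be recovered without the key. The key and data are illustrative assumptions.

```python
import hashlib
import hmac

MASK_KEY = b"test-environment-secret"  # assumed per-environment secret

def mask_value(value: str, length: int = 12) -> str:
    """Deterministically mask a value with a keyed hash. The same input
    always yields the same output, so masked tables can still be joined,
    but the original cannot be recovered without the key."""
    digest = hmac.new(MASK_KEY, value.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

# The same customer ID masks identically wherever it appears,
# preserving referential integrity between the two masked tables.
customers = [{"customer_id": "CUST-001", "name": "Jane Doe"}]
orders = [{"customer_id": "CUST-001", "amount": 9.99}]

masked_customers = [{**c, "customer_id": mask_value(c["customer_id"]),
                     "name": mask_value(c["name"])} for c in customers]
masked_orders = [{**o, "customer_id": mask_value(o["customer_id"])}
                 for o in orders]

assert masked_orders[0]["customer_id"] == masked_customers[0]["customer_id"]
assert masked_orders[0]["customer_id"] != "CUST-001"
```

Note that a hex digest does not preserve the original field format; a production scheme would use format-preserving masking so that, for example, a masked phone number still looks like a phone number.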
New assurance processes and procedures will be needed, as personal data should not be exposed to persons who are not authorised to handle it, and anyone who requires such access should be aware of the rules regarding data usage.
Additionally, with the fines for non-compliance so high, there is a need to ensure that any new functionality is not downgraded or negatively impacted by the changes. In terms of ongoing assurance, this clearly increases the need for regression testing across projects. GDPR testing and test cases will now need to be added to your regression pack.
In terms of data discovery, 75 percent of organisations said the complexity of modern IT services means they can’t always know where all customer data resides. A retail client recently conducted a discovery exercise and found terabytes of customer data over ten years old. Under GDPR, the retailer needs to be able to find that data and justify holding on to it for ten years. This will have an impact on Business as Usual and risk management within the organisation.
A similar example is the TalkTalk breach revealed last year: some of the breached data was ten years old. In May, the source of the leaked data was revealed to be a small, now defunct regional cable and TV company that TalkTalk had acquired.
To avoid non-compliance, documentation of the use of personal data in all test environments is necessary, including backups and personal copies created by testers. An understanding of all real data sources and the current location of data is key to ensuring that no real personal data is exposed to software testers, test managers, business users and other team members during software development, maintenance and test phases.
Some organisations have legacy, poorly supported IT systems, and unstructured data makes things even more complex, especially if an organisation has emails on file relating to individuals, containing names, addresses, telephone numbers and contact information. Those responsible for the GDPR must be able to examine the database and find the email attachments and data relevant to an individual. This presents a major challenge of scale in testing, as discovery can be the equivalent of searching for a needle in a haystack, and a decently sized haystack is necessary for valid tests. Some IT teams will need to test what they haven’t tested before: how do we find data on the system, identify data that’s not supposed to be there, or find data that should have been audited out?
Unfortunately, finding the right data within gigabytes of information can be a hugely time-consuming task. Testing teams can help search for the data using the same tools that would typically be used for automation.
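As an illustration of the kind of automated search involved (the patterns below are deliberately simplified assumptions; production discovery tools use far more robust detection than a pair of regular expressions):

```python
import re

# Simplified, illustrative patterns for two common kinds of PII.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:0|\+44)\d{9,10}\b"),
}

def find_pii(text: str):
    """Return (kind, match) pairs for every candidate PII value found."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

sample = "Contact jane.doe@example.com or call 07700900123."
print(find_pii(sample))
```

Run across database exports, log files and email archives, even a crude scanner like this can flag candidate records for review, turning the needle-in-a-haystack search into something repeatable.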
It’s not enough to simply think of the GDPR as just another regulatory requirement; the transition to becoming a GDPR-compliant organisation is a major undertaking and maintaining compliance requires an ongoing commitment and new ways of testing.
Indeed, GDPR compliance will be one of the major IT challenges over the next few years, with initial compliance followed by continuous testing of that compliance.
A robust test data management strategy will save money – and indeed could save an organisation full stop. We see the growing importance of test data management and the provision of ongoing assessments to prove compliance, as a necessary investment to guard organisations against non-compliance.
Edge Testing Solutions offers Test Data Management and assessment of ongoing GDPR compliance services.
It’s an always-on world fuelled by on-demand content and services. Instant and immediate is the expectation. Slow and sluggish won’t cut it.
By Mike Hemes, Regional Director, Western Europe, A10 Networks.
This puts service providers and carriers in a unique spot: how do you get an edge over your competition while continuing to launch and monetise new services, attract new customers and, ultimately, boost revenue?
Here are eight ways service providers and carriers can win in an ever-changing market.
1. Prepare for what’s next
Standing still is the kiss of death. As service providers, you must stay up to speed on the next technology shift and solutions that will help optimise your network and applications so you can deliver new, modern services to your customers.
There are several things you can do today to future-proof your network for tomorrow, such as: preparing for the transition to 5G and an influx of connected devices; preserving your investment in IPv4 while you start transitioning to IPv6; focusing on network scale; and taking a security-first approach to protect your network from costly distributed denial of service (DDoS) attacks.
2. Reduce your costs
We’ve entered an era where you have to do more with less. This presents the challenge of whittling down your legacy infrastructure, modernising and reducing your hardware footprint with more powerful, compact and environmentally friendly gear to curb spending.
To start reducing costs almost instantly, you can consider leveraging modern architectures. Adopting software defined networking (SDN), network functions virtualisation (NFV) or cloud computing can reduce your capital expenses and boost your bottom line. Also, leveraging a purpose-built solution for carrier grade NAT (CGNAT) functionality rather than supplementing existing chassis routers by adding software modules to line cards will help reduce infrastructure costs as subscribers scale up.
3. Build consistency
When it comes to delivering services, inconsistency angers customers. Consistency wins. So how do you employ a set of consistent services to ensure customer satisfaction?
To build consistency: ensure your infrastructure can scale to provide around-the-clock availability even under peak demand; focus on security to ensure DDoS attacks and breaches can’t take down your network or key services; and start converging services through consolidation, optimisation and automation. These steps will ensure you deliver applications and services swiftly, smoothly and without interruption.
4. Empower your customers
Customer empowerment is a major component of modern service provider success, and it has become a key to customer satisfaction.
When you implement or enhance the self-service capabilities you offer, you give your subscribers the freedom to tune services to meet their specific needs. Putting control into customers’ hands has powerful benefits for you, such as reducing support volume, saving money, enabling personalisation and increasing customer satisfaction.
5. Speed time to market
You want to get to market faster. It is one area where you can gain a massive competitive edge. Think about it: if you’re first to market with a new service, your competitors are left scrambling. By the time they launch a comparable service, you are already unveiling your next innovation. It’s the ultimate advantage.
The adage rings true: if you’re not the lead dog, the view never changes. How do you hit the market first? One word: agility. As a service provider, there are several ways you can achieve agility, which can put you in the lead when it comes to time-to-market. Embracing open-source solutions and cloud computing; deploying new frameworks like SDN and NFV; and leveraging modern architectures such as microservices and containers will help you unlock the agility needed to get to market faster.
6. Deploy on your terms
As a service provider, you need choice. You also need the ability to future-proof your network to accommodate all of the latest and greatest technologies while embracing elastic and agile service delivery.
You have to build and embrace architectures and solutions that enable you to leverage various deployment options. There are several choices: hardware appliances are the tried and true workhorses that deliver reliability, throughput and scale; bare metal enables you to deploy application delivery and networking services as software on your choice of hardware; virtual deployments are flexible and offer advanced on-demand services; and cloud adds new levels of scalability and agility while reducing capex. Having several deployment options in your arsenal will help you succeed.
7. Flex your pricing prowess
Your service can be the fastest. It can perform the best. But if it’s too expensive, you’re going to lose customers. As clichéd as it sounds, money talks. That is why you have to keep your pricing competitive. Charge too much, and you’ll only attract premium customers. Charge too little, and you’re actively participating in a race to the bottom. Pricing is a delicate balance that has to be measured perfectly to succeed.
Offering subscribers a pay-as-you-go consumption model is one way to keep pricing competitive, and leveraging cloud and cloud-native applications makes that possible. Employing flexible licensing models is another: for example, offer licensing pools and elastic consumption models that correspond directly with usage or planned usage. Subscription- or capacity-based flexible licensing also lets customers pay only for what they need, and not for what they don’t.
8. Strengthen your security
Everything you’ve done to build out a world-class network means nothing if it isn’t secure. Picture this: your services get taken out by a massive DDoS attack. Along with the loss of customer satisfaction – which is huge – you’re stuck with a financial loss and lasting reputation damage.
Fortunately, there are ways to protect yourself, your network and your services from the modern, sophisticated attacks that are the scourge of today’s web. Leveraging solutions that provide DDoS defence, traffic decryption and inspection, and firewall services will ensure your network is protected against modern threats.
Those are eight ways carriers and service providers can ensure their networks and services stay fresh and modern, while staying several steps ahead of competitors at all times.
The trend of companies moving towards edge computing is expected to accelerate dramatically over the next few years, as the growing demand for ‘real-time’ processing of data makes locating IT capacity nearer to that demand key.
To address the time sensitive nature of many modern applications, organisations are beginning to look at edge computing as the answer. Edge computing consists of putting micro data centres or even small, purpose-built high-performance data analytics machines in remote offices and locations to gain real-time insights from the data collected, or to promote data thinning at the edge, by dramatically reducing the amount of data that needs to be transmitted to a central data centre. Without having to move unnecessary data to a central data centre, analytics at the edge can simplify and drastically speed analysis while also cutting costs.
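A minimal sketch of data thinning, under the assumption that only out-of-band readings plus a periodic heartbeat need to reach the central data centre; the band and cadence are illustrative:

```python
# Data thinning at the edge: forward only readings outside a normal
# band, plus a periodic heartbeat, instead of every raw sample.
NORMAL_RANGE = (18.0, 27.0)   # assumed acceptable temperature band, deg C
HEARTBEAT_EVERY = 60          # forward every 60th sample regardless

def thin(readings):
    """Keep anomalous readings and periodic heartbeats; drop the rest."""
    kept = []
    for i, value in enumerate(readings):
        anomalous = not (NORMAL_RANGE[0] <= value <= NORMAL_RANGE[1])
        heartbeat = i % HEARTBEAT_EVERY == 0
        if anomalous or heartbeat:
            kept.append((i, value))
    return kept

# 300 in-range samples with two spikes: only the spikes and the
# heartbeats at indices 0, 60, 120, 180 and 240 are transmitted.
samples = [21.0] * 300
samples[75] = 31.5
samples[210] = 14.2
print(len(thin(samples)))  # 2 anomalies + 5 heartbeats = 7
```

Dropping 300 raw samples down to a handful of meaningful ones is exactly the reduction in transmitted volume that makes edge analytics cheaper and faster.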
Whilst edge computing allows businesses to provide their users with a faster and more efficient service, these distributed and often remote locations can present a new set of challenges for the data centre and IT teams to manage, as they strive to provide an ‘always-on’ application service. There are many benefits to moving to an edge deployment; however, the process can be expensive and risky, oftentimes resulting in the loss of visibility into power allocation. Thus, it has never been more important to have the right technology in place to support edge deployments.
The choice of location for an edge computing deployment is paramount to its success, as each edge location has its own unique challenges based on the type of environment it’s in. Remote hardware systems supporting edge and mobile edge computing usually run firmware and/or software that needs to be fault tolerant: robust enough to run consistently through the intermittent communications outages and power availability challenges that result from natural disasters or manmade disruptions.
For widely dispersed networking, storage, and computational assets to work reliably, the underlying hardware needs to be continuously powered on, or else capable of being remotely managed. Some key considerations are:
Delivering reliable distributed computing performance in the form of edge, mobile edge and fog computing requires intelligent power management that supports both AC and DC power distribution. Intelligent PDUs have a network interface that enables infeed power monitoring at a minimum (Smart), and ideally also power management down to the outlet and device (Switched or Switched POPS). Server Technology delivers the industry's most comprehensive array of reliable AC and DC power distribution units suited to edge computing.
1. Maintain uptime: Maintaining uptime in a remote data centre can be very difficult. Server Technology’s intelligent PDUs come with built-in fault tolerance, as well as real-time branch current measurements and multi-level alerts. Plus, their full switching capabilities enable IT managers to perform remote power management functions for rapid-response troubleshooting from any location.
2. Reduce IT staff call outs: By maintaining uptime and performing remote maintenance on edge infrastructure, administrators can save money by reducing the need to call out technicians to travel back and forth between remote locations.
3. Meet green initiatives: Any enterprise looking to achieve recognition for its use of green technologies in the data centre must also account for its edge facilities. Intelligent PDUs can provide IT managers with real-time and historical power usage data such as crest factor, apparent power, active power, voltage, load (amps) and more.
4. Prevent environmental disasters: Servers are very sensitive to environmental fluctuations, and the issue is heightened further by the varying locations of edge deployments. Server Technology’s intelligent PDUs can provide SNMP-based email alerts so that administrators can spring into action immediately when environmental conditions exceed their allotted thresholds. For example, if a data centre gets too hot, a remote switching operation can be executed to safely power down equipment and prevent a fire from breaking out.
5. Improve cost efficiency: At the end of the day, a data centre and all its assets, whether remote or onsite, can be a major drain on a business’s budget. A data centre can be the most resource-intensive department in a company, and with an increasing number of executives now outsourcing data centre operations, it’s imperative that IT administrators find ways to slash costs and streamline efficiencies in their facilities. Intelligent PDUs provide a wealth of power usage information, giving administrators the ability to make critical changes when they are needed to reduce costs in the data centre.
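The alert-and-shutdown pattern described in points 1 and 4 can be sketched generically. This is not Server Technology’s API; the threshold, device names and callback functions are all illustrative assumptions.

```python
TEMP_THRESHOLD_C = 35.0  # assumed alert threshold, deg C

def check_environment(sensors, alert, power_off):
    """Compare each sensor reading against the threshold; raise an
    alert on any breach and power down the affected equipment remotely."""
    breached = []
    for device, temp_c in sensors.items():
        if temp_c > TEMP_THRESHOLD_C:
            alert(f"{device}: {temp_c} C exceeds {TEMP_THRESHOLD_C} C")
            power_off(device)  # remote switching operation
            breached.append(device)
    return breached

# Illustrative usage with stand-in alert/power-off functions; a real
# deployment would send SNMP traps or emails and switch PDU outlets.
events = []
breached = check_environment(
    {"edge-rack-1": 29.5, "edge-rack-2": 41.0},
    alert=events.append,
    power_off=lambda d: events.append(f"power-off {d}"),
)
print(breached)  # only the overheating rack is powered down
```

The value of the pattern is that the decision loop runs without anyone on site: the PDU detects, alerts and acts before a technician could be dispatched.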
It only takes one or two instances of a locked-up server being remotely reset to pay for the functionality of a Switched PDU.
- Michael Weinrich, Senior DevOps Engineer, Picturemaxx.
The complexity of the compliance landscape is poised to increase significantly for all companies operating a contact centre.
By Tom Harwood, CPO and Co-Founder at Aeriandi.
Achieving Payment Card Industry Data Security Standards (PCI DSS) remains a key concern for those taking payments over the phone, but the General Data Protection Regulation (GDPR) – due to be implemented in May 2018 – represents the biggest overhaul of personal data management in history. GDPR is arguably the most important piece of legislation that companies will need to comply with. As an overarching EU regulation, it will encompass all European Union (EU) personal data used or held by companies. It also poses a huge financial risk for companies failing to comply.
Many of these requirements, rules and regulations will overlap, and companies will need to be more focused and attentive to compliance requirements than ever before. For contact centres, much of the compliance burden relates to the capturing, recording, archiving and security of sensitive information. Maintaining a fully compliant security solution in-house can be a struggle. As compliance requirements evolve, so too must the technology used to meet them. Keeping on top of this can drain budget and internal resources. Forward-thinking organisations are using specialist cloud technology to support compliance across multiple regulations and standards. These solutions grow and change over time to meet the needs of the organisations using them, while also offering minimal on-site disruption.
For many contact centres, customer payment data represents a major compliance challenge. PCI DSS rules apply, and when GDPR comes into force customer payment data will fall well within its definition of ‘personal data’. The Cardholder Data Environment (CDE) is therefore a security focal point.
Even within this single area of data, compliance is complex. The CDE can be loosely broken down into four areas – data capture, data processing, data transmission and data storage. Contained within this are all of the physical and virtual components involved in each stage including the network (firewalls, routers etc.), all point of sale systems, servers, internal and external applications and third party IT systems. Each of these elements contributes to the overall scope of the CDE, which must be protected in full. The larger the scope, the more difficult and potentially expensive compliance becomes.
In the CDE example, the key to managing compliance is reducing the size of the CDE scope. By outsourcing key aspects of a cardholder data environment to a third party Cloud Service Provider (CSP), the PCI compliance responsibility is passed on too. With the implementation of GDPR, however, a business will still be responsible for its customer data, even if a third party manages it. Companies will need to think about how they ensure GDPR compliance across their value chain.
They must also remember that in turn, value chain partners are under an equivalent requirement to refuse any instruction that is non-GDPR compliant.
Let’s take one key process – payments. If an organisation uses a traditional call centre to process telephone payments manually, every aspect of that call centre is in scope for PCI DSS, from the telephone agents themselves through to the computers, network and payment systems used. As of May 2018, the organisation will also be required to process and store this data in line with GDPR. This means demonstrating an ability to recall data, and provide customer access when requested, amongst other stipulations.
Switching to a cloud-based payment system meets all of these requirements simultaneously. At the point where a payment is requested, customers are routed through to a secure, cloud-hosted platform where they enter their sensitive information via their telephone keypad. The call centre agents themselves no longer play any part in the collection or processing of the customer’s sensitive data and it never enters the call centre environment.
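The descoping pattern can be illustrated with a minimal sketch. Every name here is hypothetical: a real deployment would integrate with the hosted provider’s actual platform, not this stand-in.

```python
# Sketch of PCI descoping: the agent's system never sees card data,
# it only receives an opaque result from the hosted platform.

def hosted_payment_platform(call_id: str) -> dict:
    """Stands in for the cloud platform that collects digits via the
    caller's telephone keypad, outside the agent's systems entirely."""
    # ...card number is captured and charged inside the secure platform...
    return {"call_id": call_id, "status": "approved", "token": "tok_abc123"}

def agent_take_payment(call_id: str) -> dict:
    """Agent-side flow: transfer the caller to the platform, then get
    only the outcome back. Nothing here ever touches cardholder data,
    which is what keeps the contact centre out of scope for capture."""
    result = hosted_payment_platform(call_id)
    return {"status": result["status"], "reference": result["token"]}

print(agent_take_payment("call-42"))
```

The design point is the boundary: because the agent-side function receives only a status and a reference token, the agents, desktops and call recordings on that side of the boundary fall outside the cardholder data environment.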
Cloud-based recording and archive solutions offer the ability to access call recordings and archives from anywhere, at any time, through a secure online portal. This is particularly beneficial to organisations sprawled across various geographic locations. In contrast, an on-premises recording and storage solution cannot deliver the same level of flexibility in recording accessibility. To meet the GDPR’s governance requirements, compliance officers will need to periodically review archives to demonstrate compliance. Choosing a cloud-based solution will mean data is always easily accessible.
Compliance failures present a range of legal, financial and reputational risks. Potential liabilities include loss of customer confidence, diminished sales, legal costs, fines and penalties. One of the most discussed aspects of the GDPR is its explicit mention of fines. Whereas the Data Protection Directive simply stated that sanctions had to be defined by the Member States, the GDPR details exactly what administrative fines can be incurred for violations. The maximum fine depends on the ‘category’ of the violation: for less serious violations, the maximum is €10 million or 2% of total annual worldwide turnover of the preceding year (whichever is higher); for more serious violations this rises to €20 million or 4%. With the additional reputational damage, this could be catastrophic for many businesses.
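The ‘whichever is higher’ rule reduces to simple arithmetic; a minimal sketch of the two fine tiers described above:

```python
def max_gdpr_fine(annual_turnover_eur: float, serious: bool) -> float:
    """Maximum administrative fine under the GDPR's two tiers:
    the fixed cap or the turnover percentage, whichever is higher."""
    if serious:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)

# A firm with EUR 2 billion turnover: 4% (EUR 80m) exceeds the
# EUR 20m floor, so the percentage applies.
print(max_gdpr_fine(2_000_000_000, serious=True))   # 80000000.0
# A small firm: the fixed EUR 20m floor applies instead.
print(max_gdpr_fine(5_000_000, serious=True))       # 20000000
```

The asymmetry is the point: for large enterprises the percentage dominates, so the exposure scales with the business rather than being capped at a fixed figure.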
Businesses have a growing responsibility for their customer data. They will need to question the capability of third parties and the platforms they are using to ensure compliance with a range of rules and regulations. The power, security and flexibility offered by the cloud are impossible to ignore. It is arguably the most secure and most cost-efficient way of processing and storing customer data. The cloud can help close the gap between resource and requirement, offering an affordable and proven route to help companies achieve compliance with multiple regulations simultaneously. No business wants to damage its reputation or bottom line, but rules and regulations are changing. Organisations need to change with them, while looking ahead to the future, if they are to navigate the changing landscape.
Data volumes are escalating. Our access to the internet, mobile phones and other intelligent devices drives rapid growth in the volume of data created, captured and analysed – allowing authorities to be informed about the latest social trends and city planning needs and enabling them to provide greater access to everyday services. Data has become critical to our lives and is the lifeblood of our digital existence.
By Rob Perry, VP product marketing, ASG.
Consumers and citizens enjoy the benefits of a digital existence, as the government and enterprises’ access to a wealth of data enables more innovation, better services and greater convenience. However, the challenge businesses face is how to navigate and manage these unprecedented data volumes while protecting the privacy and security of every customer.
There is a significant gap between the quantities of data being produced that need protection and the amount being secured by the enterprises that collect it – and it will get wider. Every week brings another headline about security breaches potentially exposing records to malicious use.
Inevitably, criminals have been quick to recognise the opportunities presented by the ocean of data available to them, and the world’s regulatory authorities have responded by creating rules that formalise the steps enterprises must take to protect customer and enterprise data.
As enterprises identify and service unique or critical data points to realise data’s vast potential, two crucial interconnected factors will govern their actions: security and the need for regulatory compliance.
One key piece of legislation, the European Union’s General Data Protection Regulation (GDPR), will force enterprises to develop new approaches to information management. With mandatory compliance from May 25, 2018, the GDPR places significant requirements on all organisations collecting data on European residents to closely manage and track the personal information they collect. The rules affect every entity, inside or outside Europe, that holds or uses the personal data of covered individuals.
Every business will need to prove it handles personal data properly. Among other requirements, companies will have to demonstrate consent for the data they collect where required, delete data or correct errors on request, and provide copies of data when asked. To fulfil these requirements, it will be vital to track all uses of personal data and protect the privacy of the individual.
To achieve this, every company housing personal data collected on European residents will benefit from using an enterprise data lineage solution. These solutions can provide quick lineage reports of the source and use of data through the organisation and on-the-spot auditing of all data flagged as personal. Without a data lineage solution, or something like it, your company may find itself halting business to provide manual reports to regulatory bodies.
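A minimal sketch of the idea behind such record-keeping is shown below. The class and field names are illustrative assumptions, not taken from any particular lineage product; the point is that every flow of a personal data element is recorded, so an auditor’s question can be answered by a lookup rather than a manual trawl:

```python
from dataclasses import dataclass


@dataclass
class LineageRecord:
    """One hop in a data element's journey through the organisation."""
    element: str       # e.g. "customer.email"
    source: str        # system the data came from
    destination: str   # system it flowed into
    purpose: str       # documented basis for the use
    personal: bool = False


class LineageRegistry:
    """Illustrative registry: record flows, report them on demand."""

    def __init__(self) -> None:
        self._records: list[LineageRecord] = []

    def record(self, rec: LineageRecord) -> None:
        self._records.append(rec)

    def report(self, element: str) -> list[LineageRecord]:
        """On-the-spot audit: every recorded flow for one data element."""
        return [r for r in self._records if r.element == element]


registry = LineageRegistry()
registry.record(LineageRecord("customer.email", "CRM", "marketing_db",
                              "consented newsletter", personal=True))
print(len(registry.report("customer.email")))  # 1
```

A real enterprise lineage solution would of course discover these flows automatically across warehouses and transformations, rather than relying on manual registration.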
Enterprises with a lot of information locked up in departmental silos will face particular problems. In some countries, it is not unusual for businesses to have vast volumes of content spread across legacy and modern systems, ranging from 40-year-old mainframes to on-premises storage and the cloud.
A Forrester survey commissioned by ASG Technologies found that one of the key challenges identified by the enterprise architecture and operations professionals surveyed is dealing with their firms’ legacy storage or disconnected content management systems. 25% said their ability to move content to the cloud is hampered by their existing infrastructure. Typically, enterprises are adding to their technical base or the technologies they support, rather than replacing them.
Clearly, businesses need to identify and deploy solutions that span traditional and new technologies, enabling them to access their data seamlessly and track its lineage across data warehouses and through transformations, while maintaining the information needed to govern personal data and demonstrate GDPR compliance.
The costs of understanding and utilising data in this complex environment are significant, but the costs of not leveraging accurate data for decision-making, failing a compliance audit or suffering a security breach are far higher, not only in direct cost and lost opportunity but also in the impact on enterprise reputation.
The bonus for enterprises that address compliance issues through the deployment of dedicated tool-agnostic data management solutions is their ability to support citizen data scientists with a deep view into the enterprise’s most valuable data. Accurate representations of the data estate will support faster decision-making, providing business agility that will drive immediate results and help build new customer offerings.
Enterprises that identify the data that matters, then apply the right technology to understand how it was collected and how it is used, and to determine its quality and the value it provides, will be able to respond to immediate opportunities and compliance requests, direct strategic initiatives, and emerge as the winners in the digital life that beckons.
IT Pod Frames can reduce CAPEX by 15%, whilst accelerating hyperscale and colocation data centre deployments
By Patrick Donovan, Sr. Research Analyst Data Center Science Center IT Division, Schneider Electric.
The need to deploy new IT resources quickly and cost-effectively, whether as upgrades to existing facilities or in newly built installations, is a continuing challenge faced by today’s data centre operators. Among the trends developing to address it are convergence, hyperconvergence and prefabrication, all of which are assisted by an increasing focus on modular construction across all the product categories a data centre requires.
The modular approach enables products from different vendors, and those performing different IT functions, to be racked and stacked according to compatible industry standards and deployed with the minimum of integration effort.
Modularity is not just confined to the component level. Larger data centres, including colocation and hyperscale facilities, will often deploy greater volumes of IT using groups, or even roomfuls, of racks at a time. Increasingly these are deployed as pods, or standardised units of IT racks in a row, or pair of rows, that share common infrastructure elements including UPS, power distribution units, network routers and cooling solutions in the form of air-handling or containment systems.
An iteration of the pod approach is the IT Pod Frame, which further streamlines and simplifies the task of deploying large amounts of IT quickly and cost-effectively. A Pod Frame is a free-standing support structure that acts as a mounting unit for pod-level infrastructure and as a docking point for the IT racks. It addresses some of the challenges of installing IT equipment, especially the provision of supporting services that typically require modifications to the building that houses the data room.
A Pod Frame, for instance, greatly reduces the need to mount or install ducting for power lines and network cables in the ceiling, under a raised floor, or directly on the racks themselves. Analytical studies show that using a Pod Frame can produce significant savings, both in capital expenditure, in some cases up to 15%, and in deployment time.
Air-containment systems can be assembled directly on to an IT Pod Frame. When deploying a pod without such a frame, panels in the air-containment system have to be unscrewed and pulled away before a rack can be removed. Use of an IT Pod Frame therefore makes the task of inserting and removing racks faster, easier and less prone to error.
In the case of a colocation facility, where the hosting company tends not to own its tenants’ IT, the frame allows all of the cooling infrastructure to be installed before the rack components arrive. It also enables tenants to rack and stack their IT gear before delivery and then slot it into the frame with the minimum of integration effort.
IT Pod Frames have overhead supports built into the frame, or the option to add such supports later, which hold power and network cabling, bus-way systems or cooling ducts. This capability eliminates most, if not all, of the construction required to build such facilities into the fabric of the building itself, greatly reducing the time taken to provide the necessary supporting infrastructure for IT equipment.
They also allow greater flexibility in the choice between a hard or raised floor for a data centre. Because ducting for cables and cooling can be mounted on the frame, a raised floor is not necessary. If, however, a raised floor is preferred for distributing cold air, the fact that network and power cables can be mounted on the frame, rather than obstructing the cooling ducts, makes under-floor cooling more efficient. It also removes the need for the building cutouts and brush strips required when running cables under the floor, saving both time and construction costs.
An example of an IT Pod Frame is Schneider Electric’s new HyperPod solution, which is designed to offer flexibility to data centre operators. Its base frame is a freestanding steel structure that is easy to assemble and available in two different heights, whilst its aisle length is adjustable and can support multi-pod configurations.
It comes with mounting attachments to allow air containment to be assembled directly on to the frame, and has simple bi-parting doors, dropout roof options and rack height adapters to maintain containment for partially filled pods.
Several options are available for distributing power to racks inside the IT pod, including integrating panel boards, hanging busway or row-based power distribution units (PDUs). The HyperPod can also be used in hot or cold aisle cooling configurations and has an optional horizontal duct riser to allow a horizontal duct to be mounted on top of the pod. Vertical ducts can also be accommodated.
Analytical studies based on standard Schneider Electric reference designs provide an overview of the savings in both time and cost that can be achieved using a Pod Frame. Taking the example of a 1.3MW IT load distributed across nine IT pods, each containing 24 racks, a comparison was made between rolling out the racks using an IT Pod Frame and a traditional deployment.
CAPEX costs were reduced by 15% when the IT Pod Frame was used. These savings were achieved in a number of ways. Ceiling construction costs were reduced by eliminating the need for a grid system to supply cabling to individual pods. All that was needed was a main data cabling trunk line down the centre of the room, with the IT Pod Frame used to distribute cables to the individual racks.
A shorter raised floor with no cutouts was possible with the IT Pod Frame. The power cables were distributed overhead on cantilevers attached to the frame, so no cables were needed under the floor. Further cost savings were achieved by using low-cost power panels attached directly to the frame instead of the traditional approach of using PDUs or remote power panels (RPPs) located on the data centre floor. This not only saved material and labour costs but also reduced footprint, freeing up data centre space for more productive use.
The time to deployment using an IT Pod Frame was 21% less when compared with traditional methods. This was mainly achieved through the reduced requirement for building work, namely ceiling grid installations, under-floor cutouts and the installation of under-floor power cables. Assembly of the air containment system was also much faster using a Pod Frame due to the components being assembled directly on to the frame.
In conclusion, using an IT Pod Frame such as Schneider Electric’s HyperPod can produce significant cost savings when rolling out new IT resource in a data centre.
Building on the modular approach to assembly commonly found in modern data centre designs, the Frame simplifies the provision of power, cooling and networking infrastructure, thereby reducing materials and labour costs and making the deployment process far quicker, easier and less prone to human error.
White Paper #263, ‘Analysis of How Data Center Pod Frames Reduce Cost and Accelerate IT Rack Deployments’, can be downloaded by visiting http://www.apc.com/uk/en/prod_docs/results.cfm?DocType=White+Paper&query_type=99&keyword=&wpnum=263
Statistics are taken from Schneider Electric White Paper #263, ‘Analysis of How Data Center Pod Frames Reduce Cost and Accelerate IT Rack Deployments’.
Organisations across all economic sectors are increasingly attracted by the idea of moving to the cloud. Given the benefits, from greater scalability to enhanced collaboration, this interest is unsurprising. Indeed, many are implementing a ‘cloud first’ strategy, focused on using cloud services wherever they can.
By Dave Nicholson, Technical Sales Consultant, Axial Systems.
This does not mean, however, that they no longer have security or data management concerns to address. This wholesale switch to the cloud brings with it threats from external and internal sources, adding to the pressure on IT teams to guarantee the organisation is safe and protected.
For any organisation planning to migrate, the top priority should always be achieving the highest level of data security possible.
One of the most important decisions they will face is choosing a third-party cloud services provider. In doing so, it will be key to assess whether the terms and conditions set by the provider align well with the needs and strategy of the business. Also crucial are data sovereignty, the compensation payable if security measures fall short and, critically, who is liable when something goes wrong.
The very act of moving data to the cloud brings with it inherent security concerns. Businesses need to make certain that any data transitioned to the care of their provider is encrypted the moment it lands. Best practice is for the business to encrypt data itself as it leaves the building. This ensures there are two layers of encryption, so that if one is compromised, the other remains intact.
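The layering principle can be sketched in a few lines. The example below is a deliberately simplified toy, using one-time XOR keys to stand in for real ciphers; an actual deployment would use a vetted algorithm such as AES-GCM for each layer, with the business and the provider holding independent keys:

```python
import secrets


def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy symmetric layer: XOR with a random one-time key.

    Purely illustrative -- real deployments would use a vetted cipher
    (e.g. AES-GCM) for each layer, never a hand-rolled XOR.
    """
    return bytes(b ^ k for b, k in zip(data, key))


record = b"customer: alice@example.com"
client_key = secrets.token_bytes(len(record))    # held by the business
provider_key = secrets.token_bytes(len(record))  # held by the cloud provider

# Layer 1 is applied before the data leaves the building;
# layer 2 is applied by the provider when the data lands.
in_transit = xor_layer(record, client_key)
at_rest = xor_layer(in_transit, provider_key)

# If the provider's layer is compromised, the client layer still holds:
# stripping the provider key yields only the client-encrypted ciphertext.
assert xor_layer(at_rest, provider_key) == in_transit
assert xor_layer(in_transit, client_key) == record
```

The design point is independence: neither party alone holds both keys, so a single compromise never exposes the plaintext.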
While the choice of provider is key, businesses must also decide what data they want to move out of the local architecture and into the cloud, and what to retain in-house. That’s why the hybrid cloud model is becoming the de facto solution for businesses, especially larger ones, which see the benefit of retaining more sensitive customer data within local resources.
One of the big issues for any business running hybrid cloud is: do they have a security policy that works seamlessly across on-premise and cloud? If somebody wants to access the business’s on-premise data, they go through a gateway: often a VPN. However, if an employee tries to access data in the cloud, the business is unlikely to have any control over that, because there is typically a standard way of accessing cloud services that is not necessarily aligned with the organisation’s security policies.
Many cloud services come with username/password authentication out of the box, and that is likely to bring further risk. The challenge for the business is to manage and mitigate that risk in the same way as it would its on-premise service risks. After all, cloud data belongs to the business, not the cloud service provider, and the business is ultimately responsible for protecting it. And in the age of BYOD, where many devices used in the corporate environment are unmanaged, that’s a significant challenge.
This concern over having a seamless approach to on-premise and cloud plays into wider issues organisations have around visibility. Many businesses are still concerned that when they transition data into the care of a third-party cloud service provider, they cannot see what is happening to it.
It’s true that the largest cloud service providers will be able to call on large teams of expert staff, extensive budgets and cutting-edge technology. But are they able to reflect and replicate the security policy the business itself has set up?
Organisations can set up a virtual private cloud (VPC) of course but if they want to know exactly how all their applications, databases and web front-ends are interacting, they need true visibility – and that will require an additional layer of technology.
So, what are the right solutions to put in place to overcome such challenges? While educating and training employees remains key, businesses also need to find the technological solutions that allow them to mitigate risk.
A key part of this is to step up the level of authentication that devices require before they are given access to data stored on the public cloud. Businesses can, for example, deploy an authentication portal or an access broker, which means that if a user wants to access data in the cloud, they must authenticate via the business’ own domain. This critical touch point enables the organisation to establish full control over who can gain access to its private data and from what devices. By enabling this feature, the business can mitigate risk even further by making the authentication mechanism adaptive depending on who and where the user is; what data they want to access and what devices they are using to gain access to it.
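An adaptive policy of this kind can be sketched as a simple decision function. The roles, locations, data classes and thresholds below are illustrative assumptions, not taken from any particular access-broker product:

```python
def required_auth_factors(user_role: str, location: str,
                          device_managed: bool, data_class: str) -> int:
    """Sketch of an adaptive policy: how many authentication factors
    to demand before granting access to cloud-held data.

    Starts from a single factor (password via the business's own
    domain) and steps up based on context.
    """
    factors = 1
    if data_class == "sensitive":
        factors += 1        # step up for sensitive data
    if not device_managed:
        factors += 1        # unmanaged BYOD device
    if location == "off-network":
        factors += 1        # outside the corporate network
    return min(factors, 3)  # cap at three factors


# A contractor on an unmanaged laptop, off the corporate network,
# requesting sensitive data triggers the maximum step-up:
print(required_auth_factors("contractor", "off-network",
                            device_managed=False, data_class="sensitive"))  # 3
```

An employee on a managed device in the office requesting routine data would face only the baseline single factor, which is the point of the adaptive approach: friction is applied in proportion to risk.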
As we have also seen, however, visibility is key here, and in line with that many of the leading security vendors are bringing out virtualised versions of their firewalls, capable of sitting in the cloud infrastructure. Why is that so important? Well, to take an example, if a business has its own data centre with in-house security and policies in place, it effectively has visibility over its data and a sense of control. However, if the same business then moves some data to the cloud, it no longer has the peace of mind of knowing precisely which data centre it is stored in, which rack it is kept on, or which server it is connected to.
A VPC offers one potential route forward. But if a business could instead simply take the same firewall that it is using in its data centre, virtualise it and put it in the cloud, it has, in effect, extended its security to where its cloud workflows are being processed. It has widened its security out of its data centre, from the physical into the virtual world. And crucially, that security is consistent across the different environments.
Such an approach gives businesses an extra layer of security on top of what the cloud service provider is already delivering. It also means that when the business looks at its overall security estate, it is effectively of no consequence whether the firewall it is deploying and the rule set it is generating applies to a physical data centre or a virtual one in the cloud. There is a single management platform; a consistent consolidated view and the business knows at a glance exactly how many policy violations it has had, regardless of whether these have happened in the cloud or on-premise.
More and more companies today are adopting this kind of approach – and increasingly they are even moving further down the line into the world of containerisation, micro-segmentation and micro-services, to develop smaller security platforms which no longer require an in-built operating system but still retain the same consistent policy engine.
So, in summary, we are seeing a growing number of businesses moving to the cloud and a growing number implementing a cloud-first approach, but they must not neglect the security challenges.
Before businesses move to the cloud, they need to find a provider they can trust; define which services and applications to migrate and then put an effective security policy in place. Across this process, they need to find some form of access broker and an adaptive authentication mechanism that delivers optimum levels of control. And they also need to consider putting in place a virtual firewall as an additional security layer. Do all that and they will have gone a long way towards having a fully secure approach to data access in place and be better placed to reap the rewards that moving to cloud services can bring.
A recent study commissioned by global IT systems management provider Kaseya, polling over 900 small-to-medium-sized businesses (firms of up to 5,000 employees), has revealed that IT executives are failing to recognise the importance of IT security. It indicated that most businesses place completing IT projects on time and reducing IT costs ahead of a secure network, despite the concerns highlighted by the recent Equifax data breach, which exposed the data of 400,000 UK residents.
By Mike Puglia, chief product officer, Kaseya.
Only 21% of respondents labelled security as the main issue among firms that were deemed ‘efficient’ in their IT operations. This was a surprising statistic considering 40% of respondents across the survey admitted that ensuring compliance and security was their second most important technology challenge in 2017.
The research also indicated that many businesses still struggle to get the most from their IT capabilities, with 83% still focusing on day-to-day IT management tasks that are often time consuming and manual, rather than working with comprehensive and aligned processes.
This finding hints at the reasons why many SMBs struggle to recognise the importance of security: they simply don’t have the capacity or the resource in-house to address it properly. SMBs simply can’t afford the security personnel expense (security experts command top salaries) and don’t have the time to do the constant and detailed work it takes to maintain a safe environment – nor is this a strategic endeavour for busy SMB IT shops.
This lack of focus on security that many SMBs display is in a sense understandable for the reasons given above, but it is nevertheless a serious concern. The threats posed by cybercriminals are worse than ever and the damage they do is unparalleled. Keeping up is tough enough. Staying ahead seems near impossible, especially for SMBs.
It is, however, an issue that no SMB can afford to ignore. Security is a huge liability and visibility concern for companies who see their reputation and business possibly ruined due to breach publicity and fines for loss of data.
In line with this, a recent survey from endpoint security specialist Webroot has revealed that 96% of businesses with 100 to 499 employees in Australia, the UK and the US believe their organisations will be susceptible to external cybersecurity threats in 2017. Yet, even though they recognise the threats, 71% admit to not being ready to address them.
The severity of the cybersecurity threat underscores the reality that proper up-to-date security practices are more vital than ever to the health and wellbeing of every company, no matter its size. The risks are too high, and the incidence of exposure and breaches is only increasing. A growing number of SMBs are concluding that the best way to gain proper protection is through a managed services approach and through the adoption of managed security services, in particular.
We expect to see more managed service providers (MSPs) rising to the challenge and adding managed security services to their solutions portfolio, including patching and updates; audit and discovery; desktop security and identity access management. Some of these services, most notably patching and software updates, are already offered by many MSPs, but they need to ensure they are optimising the efficiency of the approach as much as possible and focusing on expanding into new security service areas.
Nevertheless, MSPs are on the right track in this regard. They have an advantage in protecting SMB networks since they can manage security for multiple clients, and have the tools and manpower to get the job done right. In many ways, MSPs can mimic what the largest enterprises do for themselves in terms of security best practices but apply these principles to SMBs.
Security needs to be a major priority for SMB companies today. The fact that this is not currently the case reflects the fact that these organisations are preoccupied with immediate issues around sales growth, operational efficiency and cost control. They don’t have the time or in-house resource to devote to security. That’s why outsourcing the function to an MSP organisation that understands this area and has access to the expertise, experience and the technology to deliver a high-quality managed security service, has to make sense.
With global enterprises’ network managers already working in a forest of cloud and hybrid models, multiple WANs and fast-changing branch IT needs, are the latest iterations of SD-WAN really going to give them a way through the trees to the big prize of agile networks that ensure greater responsiveness to customers along with lower running costs?
By Marc Sollars, CTO, Teneo.
We’ve all seen the predictions: IDC foresees a $6bn global SD-WAN market by 2020, and a raft of global carriers have added SD-WAN to their portfolios already this year, so it’s clear that the market is very confident of this technology’s potential.
The future looks rosy, where enterprises are set to become far more versatile by configuring network performance as never before. But despite all the hype, CIOs and network managers need to take a long hard look at their network performance expectations and vendor capabilities, before committing to an SD-WAN strategy. They need to risk-manage their path through the forest to achieve network agility while avoiding potential cost issues or operational constraints in the future.
The rise of cloud has seen enterprises increasingly use hybrids of on-premise and cloud environments to boost their responsiveness. And to further empower local branches and boost DevOps’ innovations, networks have also gone hybrid: many CIOs use a blend of MPLS, broadband, 4G and public Wi-Fi connectivity. As a result, however, CIOs often struggle to maintain networks’ performance, both world-wide and locally, as well as control mushrooming costs effectively. Trouble has long been brewing in the forest.
Multiple WANs mean more responsiveness, but they also lead to continual upkeep challenges including poor branch application performance, connectivity issues and rising network maintenance costs. This applies even where network managers have implemented WAN optimisation in recent years, because it too demands regular upgrades and in-house IT resources assigned to maintaining it. Behind the scenes, there’s some serious untangling to be done even before you can begin to define your new SD-WAN architecture.
Analysts have warned that despite all the network investments in recent years, global businesses still have infrastructures with sub-par connectivity and core applications that cannot respond adequately to volatile trading conditions, new branch office growth and increasingly-mobilised workforces.
Fortunately, the rise of SD-WAN promises CIOs better control of these scattered networks, sites and applications. Layered over companies’ existing connectivity solutions, SD-WAN tools involve applications and data being abstracted from the underlying infrastructures. As a result, global network performance can be controlled, fine-tuned and automated from a central point. This avoids network engineers needing to be on-site at branch offices to carry out manual network configurations.
Using SD-WAN, networking teams can use cheaper links, improve application availability and business units’ productivity, accelerating the set-up of new locations and reducing the ongoing need for in situ maintenance.
Centralised control promises transformation of local branch capabilities. In particular, SD-WAN helps network managers to make smarter decisions about the routes data will take over the network, depending on business priorities for different applications. Enterprises can build in greater bandwidth for local offices or set up failover rules so traffic automatically switches to the next best available route and avoids downtime.
Sensitive application traffic can be sent down high-grade routes such as MPLS, with less sensitive material sent down cheaper routes. Before rushing into an SD-WAN strategy, CIOs would therefore be wise to seize the opportunity to assess other connectivity costs, such as enterprise-grade Internet, at the time of MPLS contract refreshes.
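The routing behaviour described above, sensitivity-aware path selection with automatic failover to the next best route, can be sketched as a simple policy function. The link names, costs and quality tiers below are illustrative assumptions, not any vendor’s actual configuration:

```python
# Hypothetical link table: (name, available, cost per GB, quality tier).
LINKS = [
    ("mpls",      True, 0.90, "high"),
    ("broadband", True, 0.10, "standard"),
    ("4g",        True, 0.30, "standard"),
]


def select_path(sensitive: bool, links=LINKS) -> str:
    """Pick a route the way an SD-WAN policy might: sensitive traffic
    prefers the high-grade link, everything else takes the cheapest
    available route; either way, fail over to the next best option."""
    candidates = [l for l in links if l[1]]  # only available links
    if sensitive:
        high_grade = [l for l in candidates if l[3] == "high"]
        if high_grade:
            return high_grade[0][0]
    # Cheapest available route; this is also the failover path
    # for sensitive traffic when the high-grade link is down.
    return min(candidates, key=lambda l: l[2])[0]


print(select_path(sensitive=True))   # mpls
print(select_path(sensitive=False))  # broadband
```

If the MPLS link goes down, marking it unavailable causes sensitive traffic to fail over automatically to the cheapest remaining route, which is precisely the downtime-avoidance behaviour the centralised controller automates across hundreds of branches.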
While MPLS generally requires the customer to operate expensive edge hardware, usually on a carrier’s term contract, SD-WAN flips the cost model to suit the service user, offering a low-cost commodity item with the intelligence and orchestration capabilities provided at the overlay level. SD-WAN is turning such hybrid network cost equations on their head: a US-based analyst recently estimated that a traditional 250-branch WAN’s three-year running costs of $1,285,000 could be reduced to $452,500 through an SD-WAN deployment.¹
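Taking the analyst’s figures at face value, the implied saving works out at roughly 65% over three years:

```python
# Three-year running costs from the cited analyst estimate.
traditional = 1_285_000  # traditional 250-branch WAN
sd_wan = 452_500         # same WAN delivered over SD-WAN

saving = traditional - sd_wan
print(f"${saving:,} saved, a {saving / traditional:.0%} reduction")
# $832,500 saved, a 65% reduction
```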
Carefully implemented, even in complex ‘brownfield’ IT landscapes, the latest SD-WAN implementations are starting to give CIOs new levels of network control and agility as well as boosting the bottom line for performance and bringing order to network maintenance, support and travel budgets. SD-WAN tools provide the opportunity as Gartner puts it, for IT teams to ‘align’ IT infrastructures with the businesses they serve; but planning for them also demands that commercial and network priorities are considered together and not in isolation.
To ensure effective SD-WAN planning and guard against network agility strategies placing possible constraints on the business in the future, we recommend that enterprises take time to understand their innovation, future business model, infrastructure and in-house resourcing needs. The need for caution naturally extends to selecting the right SD-WAN provider.
When pursuing an enterprise innovation strategy, the CIO needs to assess how delivering new apps for end customers or new DevOps platforms will affect their bandwidth needs or alter network traffic priorities. For example, if the board is rethinking the business model, how many global locations will be supported and what level of access will be given to new classes of remote workers, partners and contractors?
Different global companies have vastly different network agility goals too. An enterprise might be looking for an SD-WAN vendor that provides them with network optimisation and improved efficiency of circuits ‘out of the box’. Another customer might be focused solely on better voice and video solutions for teams working collaboratively.
To achieve true agility, SD-WAN also forces companies to rethink their security postures, particularly where complicated configurations are required. Security, of course, should be the number one priority for any new network architecture, particularly as more applications move to the cloud. SD-WAN provides the opportunity for a more secure architecture than a traditional WAN, but to achieve this, the security team and its policies must be integrated from the start. Choosing an SD-WAN provider that also understands the security world will be of benefit here.
Despite the market excitement and so many global players adding SD-WAN to their service portfolios, there are also WAN-focused and specialist SD-WAN vendors that deliver more customised capabilities.
Many carriers are signed on with a single SD-WAN vendor, which means their capabilities are defined by what the individual vendor’s product is designed to do and might not suit their customers’ agility needs. IT teams might be better advised to seek a specialist SD-WAN provider that can map out flexible options, without the need for a long-term deal or a large-scale investment.
Each vendor approaches architectural needs or delivers its commercial offering in very different ways. A global carrier offering connectivity-with-SD-WAN services may be an attractive option to a fast-growing company, especially if it provides coverage in territories targeted by the business. Such ‘one size fits all’ deals are simple and save time in the short-term but the customer CIO needs to be aware that the carrier agreement might limit the type of circuits used, and lock the customer into costly contractual commitments over a longer timeframe.
And amid the rush to provide SD-WAN offerings, there’s the old question of vendor responsiveness and reporting. CIOs in fast-growing or hard-pressed enterprises need to satisfy themselves that the new vendor and carrier SD-WAN services established in the last 12 months are truly flexible enough to ensure manageable costs, responsive support teams and compliance with security regulations. To perform such due diligence takes time, which must be carefully balanced with speed to market to ensure the right results.
As we’ve seen, finding the right way through the trees also demands detailed discussions with SD-WAN integrators and vendors that will find the most appropriate options for a global enterprise with unique network agility demands.
SD-WAN tools will undoubtedly give IT teams agile network options to better support business units world-wide. But our view is that IT organisations in a fast-growing company need to carefully assess their connectivity and performance needs and ask how their SD-WAN vendor will bring clarity to implementing and managing these tools’ costs. If not, the CIO will still struggle to see the management wood for the networking trees.
¹ SD-WAN: What is it and why you’ll use it one day, Network World, June 12, 2017.
The convergence of IoT (Internet of Things) and Big Data is driving new development paths in data centre design, build and operation. As organisations develop new strategies to gather, analyse and use data captured from cloud apps, IoT or enterprise systems, the way information is communicated across the data centre and enterprise computing environment will require specialised systems development.
By Sander Kaempfer, Business Development Manager, Data Centre, Panduit EMEA.
I will discuss three areas of the data centre environment (physical environment, systems architecture and physical layer) which I believe are transforming IT for the converged data era.
As the application requirements of corporate users become more specific, it is increasingly clear that the data centre infrastructure required to produce optimum results must adapt. Many corporations are focusing on specific issues, such as latency (the time data packets take to travel from source to end-point), which can dramatically affect users' or automated management systems' ability to respond in the required timeframe. To support its business model, the data centre's infrastructure must meet its customers' latency requirements with high levels of reliability, while increasingly conforming to regulatory standards, whether these come from the users' industry, the IT industry or government. Organisations that own their data centres may have different requirements, such as extending the life of the facility beyond the norm to maximise the return on the lifecycle cost of the investment. What is clear is that data centre design is becoming increasingly standardised, providing less complicated solutions for processing massively increased data flows.
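Latency requirements of this kind are usually expressed as percentile targets rather than averages, since it is the slowest responses that break an automated system's timeframe. A minimal sketch of such a check, using synthetic sample data and an illustrative (assumed, not from the article) 20 ms p99 service-level target:

```python
import statistics

def p99_latency_ms(samples):
    """Return the 99th-percentile latency from a list of samples (ms)."""
    # quantiles(n=100) yields 99 cut points; the last one is the p99 value
    return statistics.quantiles(samples, n=100)[-1]

# Illustrative samples: most requests are fast, a few tail outliers
samples = [2.1] * 95 + [3.0, 4.5, 9.8, 12.0, 15.5]
sla_ms = 20.0  # hypothetical customer latency requirement

p99 = p99_latency_ms(samples)
print(f"p99 = {p99:.1f} ms, within SLA: {p99 <= sla_ms}")
```

The point of the percentile view is that a mean of ~2.5 ms here would hide the 15 ms tail that actually determines whether the requirement is met.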
Whatever the requirement, these decisions are driving the organisation's data centre and network design choices. The dramatic increase in data throughput across networks is the single biggest influence on our industry; how that data is collated, stored, analysed and distributed requires organisations to consider carefully the implications of the systems they are procuring or developing. As organisations seek to reduce their own technology CapEx, the colocation data centre continues to rise as a highly flexible resource to which operational applications and storage can be devolved, while mission-critical systems and applications are maintained in-house.
Utilising colocation data centres can offer organisations the opportunity to demonstrate increased energy effectiveness, or reduced energy use, as a large enterprise IT site will consume massive amounts of power to operate the IT and to maintain the HVAC and internal climate conditions. Reducing the in-house site to mission-critical systems, and using colocation data centres for storage and non-critical IT support, can offer energy, real estate and other cost savings.
As the data centre model matures, the industry has improved its knowledge base and is increasingly implementing best-practice, standardised systems that bring greater efficiency to the design and operation of the site. Environmental factors and energy use concerns, including the drive to lower PUE, have highlighted that mechanical air conditioning often consumes almost as much energy as the IT equipment itself. Reducing the energy consumed in maintaining the IT (white space) environment can therefore offer massive savings on operational costs. Moreover, implementing a non-mechanical climate system, such as indirect evaporative cooling (IEC) technology, will massively reduce the build cost of the data centre and greatly reduce ongoing environmental cooling costs in the white space. Another benefit of this cooling method is that it greatly reduces the risk of contaminants from vehicles, industry or agriculture being introduced into the computing space. This ensures that server, storage and other systems are not degraded by corrosive elements, keeping compute devices at optimum serviceability for longer and reducing the requirement for maintenance.
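PUE follows directly from the ratio of total facility power to IT power, which is why cooling that consumes nearly as much as the IT load pushes PUE towards 2.0. A minimal sketch with illustrative (not measured) power figures:

```python
def pue(it_kw, cooling_kw, other_kw=0.0):
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Mechanical air conditioning consuming roughly as much as the IT load
mechanical = pue(it_kw=1000, cooling_kw=900, other_kw=100)  # -> 2.0
# Indirect evaporative cooling drastically cuts the cooling overhead
iec = pue(it_kw=1000, cooling_kw=100, other_kw=100)         # -> 1.2

print(f"mechanical PUE = {mechanical:.1f}, IEC PUE = {iec:.1f}")
```

The hypothetical figures are chosen only to show the mechanism: every kilowatt removed from cooling lowers the numerator while the IT denominator stays fixed.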
IEC builds on the best practice of hot-aisle containment for the compute systems, and allows those systems to operate at higher temperatures, possibly 4-5°C higher than in a data centre cooled with mechanical air conditioning. The capability to run the contained environment at these higher temperatures has been developed in cooperation with the server, storage and systems manufacturers, and operates within their defined operational warranty parameters, even allowing for occasional, controlled temperature excursions during defined time periods. This type of contained system is an example of standardised design that can help operators control expenditure while meeting the agility requirements of customers.
Physical Layer 1: Fibre Optic Systems
OM3/OM4 fibre cabling systems began the trend towards affordable fibre within computing environments. Today, the latest OM5 and Panduit's Signature Core fibre cabling systems offer better physical connectivity, providing low latency, high bandwidth and greatly improved reliability, which is essential as the physical infrastructure is responsible for the majority of unplanned maintenance in the data centre. The latest fibre technology is driving data speeds ever higher: the 40G/50G and 100G links being installed today will be eclipsed by 400G and above in the very near future. Increasingly, fibre is migrating from the backbone into the cabinet. The capability to carry data faster and over longer distances on the latest fibre is extending its use across the data centre's architecture. Increasing market use will drive even faster development, providing greater capabilities within the IT environment.
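Two standard formulas make the latency impact of fibre length and line rate concrete: propagation delay (light in glass travels at roughly c/1.47, about 4.9 µs per kilometre) and serialisation delay (frame size divided by line rate). A minimal sketch with illustrative values:

```python
def propagation_delay_us(length_km, refractive_index=1.468):
    """Time for a signal to traverse a fibre run, in microseconds."""
    c_km_per_s = 299_792.458  # speed of light in vacuum, km/s
    return length_km * refractive_index / c_km_per_s * 1e6

def serialisation_delay_us(frame_bytes, rate_gbps):
    """Time to clock one frame onto the link, in microseconds."""
    return frame_bytes * 8 / (rate_gbps * 1e3)

d_100m = propagation_delay_us(0.1)          # 100 m in-row run: ~0.49 us
s_100g = serialisation_delay_us(1500, 100)  # 1500-byte frame at 100G
s_400g = serialisation_delay_us(1500, 400)  # same frame at 400G
print(f"100 m fibre: {d_100m:.2f} us; 1500B @100G: {s_100g:.2f} us; @400G: {s_400g:.3f} us")
```

The sketch shows why the move to 400G matters for latency-sensitive traffic: serialisation time falls in direct proportion to line rate, while propagation delay depends only on cable length.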
The systems infrastructure architecture has become another area under pressure to perform as the customers’ requirement for data speed and agility increases.
Traditional 3 layer versus 'Spine and Leaf' network architecture
The optimised data centre today is reducing the layers of infrastructure that introduce latency and reduce bandwidth. Diagram 1 illustrates how data centres and enterprise computing environments are reducing the traditional network architecture from three layers to two. This provides a more direct communications link across the network, allows a more agile and scalable architecture, and increases the fibre links between the core switches and the top-of-rack switches.
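The extra fibre links in the two-layer design follow from its topology: in a spine-and-leaf fabric, every top-of-rack (leaf) switch has an uplink to every core (spine) switch, forming a full mesh, and any leaf-to-leaf path crosses exactly one spine. A minimal sketch with illustrative switch counts:

```python
def leaf_spine_links(leaves, spines):
    """Fibre uplinks in a two-layer fabric: full mesh of leaf-to-spine links."""
    return leaves * spines

# Hypothetical fabric: 16 top-of-rack switches, 4 spine switches
links = leaf_spine_links(leaves=16, spines=4)

# Any leaf reaches any other leaf via exactly one spine: a predictable
# 2 switch hops, versus up to 4 (access-aggregation-core-aggregation-access)
# in a traditional three-layer design.
print(f"16 leaves x 4 spines = {links} leaf-spine fibre links, 2 hops leaf to leaf")
```

The uniform hop count is what gives the fabric its predictable, low latency; the cost is the larger count of fibre links, which is exactly the growth in core-to-top-of-rack cabling noted above.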
To more effectively route and compute the massive amounts of data now being generated, operators and customers are demanding architectures that provide enhanced deployment of virtual machines and Software Defined Networks (SDN) to increase the compute capabilities. These systems are being grouped even closer together, often within the same cabinet to provide increased network density within a converged infrastructure solution. This trend to provide low latency response is essential as IoT systems continue to come online. Consider the requirements for car-to-car and traffic system-to-car automated interaction and management, and the need for super-fast analysis of the data generated, without which chaos would ensue.
Looking to the future, as the consolidation of systems capabilities within cabinets continues, specific-use data centres may see a single-layer architecture for applications that require even faster data speeds, with fibre infrastructure offering 400G. Developments in fibre technology will no doubt bring greater capacity and longer cable lengths, together with even higher levels of reliability. Standardisation across infrastructure, systems and technology will reduce complexity, offering faster deployment, higher reliability and less maintenance. Reduction in energy consumption will become an increasingly explicit target, whether set by customers or by regulation, and this will prompt more designs that utilise non-mechanical climate control. Telecom operators also need to change and re-engineer their network infrastructure to offer a highly flexible digital infrastructure that supports the increasingly tailored experience customers are demanding.
Big Data and IoT are driving the current wave of change in data centre and enterprise technology, and man's ingenuity to innovate and create solutions is demonstrated on an almost daily basis.