‘Should we still care about cables?’ asks Indi Sall, Technical Director of NG Bailey’s IT Services division, in one of the articles in this issue of DCS. I’m particularly pleased that he’s written this article, for two reasons.
Firstly, the question he asks is one of many that need to be asked if we are to challenge long-held ideas and traditions in the world of data centres and IT.
Secondly, and selfishly, I’m pleased Indi asks the question because it has been a favourite of mine to put to all manner of IT and data centre professionals over the years. The universal reply has been: ‘How can we ever think of eliminating cables? The security risks alone associated with wireless mean that hard-wiring will always be required.’
To which, of course, one could politely point out that the security of data travelling along cables doesn’t look too clever right now – and certainly not as safe as data kept under lock and key in a filing cabinet and moved around physically by a courier and/or security guard!
Let’s move on.
So many traditional views about so many aspects of our lives are being challenged on a regular basis that it’s perfectly reasonable to ask whether cables are necessary any more. The success or failure of each radical new idea stands or falls on a calculation of the pros, the cons and the cost of the concept – both for providers and for customers.
Technology-wise, Ethernet is perhaps the classic example: a technology that is some way from being the best available has been universally adopted because of its cost advantages – it is cheaper to buy and implement than its rivals, and Ethernet experts are rather more widely available than, say, Fibre Channel ones.
Back to wired v wireless (at last, I hear you cry!). One could easily see the argument for wireless replacing cables – configuration of networks is so much faster, easier and less expensive, a major advantage when compared with the costs associated with cabling. However, for now at least, it seems that the unreliability of wireless networks in transmitting and receiving data, alongside the security issue mentioned previously, means that businesses will stick to ‘tried and tested’ cables.
It will be fascinating to see whether this situation changes – there may well come a technology/price tipping point – and what other IT and data centre traditions will go by the wayside. For example, with more and more focus on data mobility and flexibility, will large, centralised data centres begin to be replaced by smaller, more localised facilities? Ten distributed data centres replacing one centralised one might make a difference when it comes to thinking about resilience – you may just have created an N+9 environment!
Change for change’s sake is not an easy concept to sell. Change that makes a huge difference – whether to the quality of customer service, the balance sheet or the ability to bring new ideas to market faster – well, that’s worth thinking about, however unpromising it might sound.
Worldwide spending on public cloud services and infrastructure is forecast to reach $160 billion in 2018, an increase of 23.2% over 2017, according to the latest update to the International Data Corporation (IDC) Worldwide Semiannual Public Cloud Services Spending Guide. Although annual spending growth is expected to slow somewhat over the 2016-2021 forecast period, the market is forecast to achieve a five-year compound annual growth rate (CAGR) of 21.9% with public cloud services spending totaling $277 billion in 2021.
The industries that are forecast to spend the most on public cloud services in 2018 are discrete manufacturing ($19.7 billion), professional services ($18.1 billion), and banking ($16.7 billion). The process manufacturing and retail industries are also expected to spend more than $10 billion each on public cloud services in 2018. These five industries will remain at the top in 2021 due to their continued investment in public cloud solutions. The industries that will see the fastest spending growth over the five-year forecast period are professional services (24.4% CAGR), telecommunications (23.3% CAGR), and banking (23.0% CAGR).
"The industries that are spending the most – discrete manufacturing, professional services, and banking – are the ones that have come to recognize the tremendous benefits that can be gained from public cloud services. Organizations within these industries are leveraging public cloud services to quickly develop and launch 3rd Platform solutions, such as big data and analytics and the Internet of Things (IoT), that will enhance and optimize the customer's journey and lower operational costs," said Eileen Smith, program director, Customer Insights & Analysis.
Software as a Service (SaaS) will be the largest cloud computing category, capturing nearly two thirds of all public cloud spending in 2018. SaaS spending, which comprises applications and system infrastructure software (SIS), will be dominated by applications purchases, which will make up more than half of all public cloud services spending through 2019. Enterprise resource management (ERM) applications and customer relationship management (CRM) applications will see the most spending in 2018, followed by collaborative applications and content applications.
Infrastructure as a Service (IaaS) will be the second largest category of public cloud spending in 2018, followed by Platform as a Service (PaaS). IaaS spending will be fairly balanced throughout the forecast with server spending trending slightly ahead of storage spending. PaaS spending will be led by data management software, which will see the fastest spending growth (38.1% CAGR) over the forecast period. Application platforms, integration and orchestration middleware, and data access, analysis and delivery applications will also see healthy spending levels in 2018 and beyond.
The United States will be the largest country market for public cloud services in 2018 with its $97 billion accounting for more than 60% of worldwide spending. The United Kingdom and Germany will lead public cloud spending in Western Europe at $7.9 billion and $7.4 billion respectively, while Japan and China will round out the top 5 countries in 2018 with spending of $5.8 billion and $5.4 billion, respectively. China will experience the fastest growth in public cloud services spending over the five-year forecast period (43.2% CAGR), enabling it to leap ahead of the UK, Germany, and Japan into the number 2 position in 2021. Argentina (39.4% CAGR), India (38.9% CAGR), and Brazil (37.1% CAGR) will also experience particularly strong spending growth.
The U.S. industries that will spend the most on public cloud services in 2018 are discrete manufacturing, professional services, and banking. Together, these three industries will account for roughly one third of all U.S. public cloud services spending this year. In the UK, the top three industries (banking, retail, and discrete manufacturing) will provide more than 40% of all public cloud spending in 2018, while discrete manufacturing, professional services, and process manufacturing will account for more than 40% of public cloud spending in Germany. In Japan, the professional services, discrete manufacturing, and process manufacturing industries will deliver more than 43% of all public cloud services. The professional services, discrete manufacturing, and banking industries will represent more than 40% of China's public cloud services spending in 2018.
"Digital transformation is driving multi-cloud and hybrid environments for enterprises to create a more agile and cost-effective IT environment in Asia/Pacific. Even heavily regulated industries like banking and finance are using SaaS for non-core functionality, platform as a service (PaaS) for app development and testing, and IaaS for workload trial runs and testing for their new service offerings. Drivers of IaaS growth in the region include the increasing demand for more rapid processing infrastructure, as well as better data backup and disaster recovery," said Ashutosh Bisht, research manager, Customer Insights & Analysis.
100% of IT leaders with a high degree of cost transparency sit on the company board, compared with 54% at enterprises with little or no cost transparency.
A survey of senior IT decision-makers in large enterprises, commissioned by Coeus Consulting, found that IT leaders who can clearly demonstrate the cost and value of IT have greater influence over the strategic direction of the company and are best positioned to deliver business agility for digital transformation. Consequently, cost transparency leaders are twice as likely to be represented at board level and thus are better prepared for external challenges such as changing consumer demand, GDPR and Brexit.
The survey of organisations with revenues of between £200m and £30bn revealed the importance of cost transparency within IT when it comes to forward planning and defining business strategy. Based on the responses of senior decision-makers (more than half of whom are C-level), the report identifies a small group of Cost Transparency Leaders who indicated that their departments: work with the rest of the organisation to provide accurate cost information; ensure that services are fully costed; and manage the cost life cycle.
88% of respondents could not say that they are able to demonstrate cost transparency to the rest of the organisation.
When compared to their counterparts, Cost Transparency Leaders are:
Twice as likely to be represented at board level (100% v 54%)
1.5x more likely to be involved in setting business strategy (85% v 55%)
Twice as likely to report that the business values IT’s advice (100% v 52%)
Twice as likely to demonstrate alignment with the business (90% v 50%)
More than seven times as likely to link IT performance to genuine business outcomes (38% v 5%)
“This survey clearly reveals that cost transparency is a pre-requisite for IT leaders with aspirations of being a strategic partner to the business. Those that get it right are better able to transform the perception of IT from ‘cost centre’ to ‘value centre’ and support the constant demand for business agility that is typical of the modern, digital organisation. Only those that have achieved cost transparency in their IT operations will be able to deal effectively with external challenges such as Brexit and GDPR,” said James Cockroft, Director at Coeus Consulting.
Digital transformation trends mean that businesses are focusing more heavily on their customers and are using technology to improve their experience. However, IT departments remain bogged down in day-to-day activities and the need to keep the lights on, which is preventing teams focusing on how they can help drive improvements to the customer experience.
This is according to research commissioned by managed services provider Claranet, with results summarised in its 2018 report, Beyond Digital Transformation: Reality check for European IT and Digital leaders.
In a survey of 750 IT and Digital decision-makers from organisations across Europe, market research company Vanson Bourne found that the overwhelming majority (79 per cent) feel that the IT department could be more focused on the customer experience, but that staff do not have the time to do so. More generally, almost all respondents (98 per cent) recognise that there would be some kind of benefit if they adopted a more customer-centric approach, whether this be developing products more quickly (44 per cent), greater business agility (43 per cent), or being better prepared for change (43 per cent).
Commenting on the findings, Michel Robert, Managing Director at Claranet UK, said: “As technology develops, IT departments are finding themselves with a long and growing list of responsibilities, all of which need to be carried out alongside the omnipresent challenge of keeping the lights on and making sure everything runs smoothly. Despite a tangible desire amongst respondents to adopt a more customer-centric approach, this can be difficult when IT teams have to spend a significant amount of their time on general management and maintenance tasks.”
Improving customer experience is the second most-commonly-cited challenge by European IT departments (38 per cent), only behind security. Finding the right balance between digital and human interaction was also a common struggle, with 46 per cent of respondents expressing that sentiment. For Michel, this is where the expertise of managed service providers could help to lessen the load and enable IT staff to work more readily on customer-driven projects. When asked about the main drivers behind outsourcing elements of an IT estate to third parties, 46 per cent cited more time to focus on innovation, and 48 per cent said it frees up resource for them to focus on company strategy.
Michel continued: "IT and digital staff are struggling for resource in an increasingly competitive business environment, and the evidence from our survey underlines this. By entrusting the responsibility for keeping the lights on to a third-party, IT departments can relieve themselves of this time-consuming burden, while maintaining peace of mind in knowing that these tasks are in the hands of a partner with a high level of expertise in this field.”
Michel concluded: “Placing faith in a third-party supplier will allow the IT department – and the wider business in general – to focus on what really matters: an ever-improving, positive customer experience. IT innovation will be free to come to the fore, and it will become easier for the goals of the IT department to be effectively aligned with those of the rest of the organisation.”
Cisco has released the seventh annual Cisco® Global Cloud Index (2016-2021). The updated report focuses on data center virtualization and cloud computing, which have become fundamental elements in transforming how many business and consumer network services are delivered.
According to the study, both consumer and business applications are contributing to the growing dominance of cloud services over the Internet. For consumers, streaming video, social networking, and Internet search are among the most popular cloud applications. For business users, enterprise resource planning (ERP), collaboration, analytics, and other digital enterprise applications represent leading growth areas.
451 Research, a top five global IT analyst firm and sister company to datacenter authority Uptime Institute, has published Multi-tenant Datacenter Market reports on Hong Kong and Singapore, its fifth annual reports covering these key APAC markets.
451 Research predicts that Singapore’s colocation and wholesale datacenter market will see a CAGR of 8% and reach S$1.42bn (US$1bn) in revenue in 2021, up from S$1.06bn (US$739m) in 2017. In comparison, Hong Kong’s market will grow at a CAGR of 4%, with revenue reaching HK$7.01bn (US$900m) in 2021, up from HK$5.8bn (US$744m) in 2017.
Hong Kong experienced another solid year of growth at nearly 16%, despite the lack of land available for building, the research finds. Several providers still have room for expansion, but other important players are near or at capacity, and only two plots of land are earmarked for datacenter use. Analysts note that the industry will face challenges as it continues to grow, hence the reduced growth rate over the next three years.
“The Hong Kong datacenter market continues to see impressive growth, and in doing so has managed to stay ahead of its closest rival, Singapore, for yet another year,” said Dan Thompson, Senior Analyst at 451 Research and one of the report’s authors. However, with analysts predicting an 8% CAGR for Singapore over the next few years, Singapore’s datacenter revenue is expected to surpass Hong Kong’s by the end of 2019.
451 Research analysts found that, while the number of new builds in Singapore slowed in 2017, the market still saw nearly 12% supply growth overall, compared with 19% the previous year. The report notes that the reduced builds in 2017 follow two years when providers had invested heavily in building new facilities and expanding existing ones.
“Rather than seeing 2017 as a down year for Singapore, we see it as a ‘filling up’ year, where providers worked to maximize their existing datacenter facilities,” said Thompson. “Meanwhile, 2018 is shaping up to be another big year, with providers including DODID, Global Switch and Iron Mountain slated to bring new datacenters online in Singapore.”
Analysts also reveal that demand growth in both Hong Kong and Singapore has shifted from the financial services, securities, and insurance verticals to the large-scale cloud and content providers.
451 Research finds that Singapore’s role as the gateway to Southeast Asia remains the key reason why cloud providers are choosing the area. “Cloud and content providers are choosing to service their regional audiences from Singapore because it is comparatively easy to do business there, in addition to having strong connectivity with countries throughout the region. This all bodes well for the country’s future as the digital hub for this part of APAC,” added Thompson.
451 Research finds that Hong Kong’s position as the gateway into and out of China remains a key reason why cloud providers are choosing the area, as well as the ease of doing business there. This is good news for the city as long as providers find creative solutions to their lack of available land.
451 Research has also compared the roles of the Singapore and Hong Kong datacenter markets in detail. The analysts concluded that multinationals need to deploy datacenters in both Singapore and Hong Kong, since each serves a very specific role in the region: Hong Kong is the digital gateway into and out of China, while Singapore is the digital gateway into and out of the rest of Southeast Asia.
Analysts find that these two markets compete for some deals, but surrounding markets are vying for a position as well. As an example, Singapore sees some competition from Malaysia and Indonesia, while Hong Kong could potentially see more competition from cities in mainland China, such as Guangzhou, Shenzhen and Shanghai. However, the surrounding markets are not without challenges for potential consumers, suggesting that Singapore and Hong Kong will remain the primary destinations for datacenter deployments in the region for the foreseeable future.
Growing adoption of cloud-native architecture and multi-cloud services contributes to a $2.5 million annual spend per organization on fixing digital performance problems.
Digital performance management company, Dynatrace, has published the findings of an independent global survey of 800 CIOs, which reveals that 76% of organizations think IT complexity could soon make it impossible to manage digital performance efficiently. The study further highlights that IT complexity is growing exponentially; a single web or mobile transaction now crosses an average of 35 different technology systems or components, compared to 22 just five years ago.
This growth has been driven by the rapid adoption of new technologies in recent years. However, the upward trend is set to accelerate, with 53% of CIOs planning to deploy even more technologies in the next 12 months. The research revealed the key technologies that CIOs will have adopted within the next 12 months include multi-cloud (95%), microservices (88%) and containers (86%).
As a result of this mounting complexity, IT teams now spend an average of 29% of their time dealing with digital performance problems, costing their employers $2.5 million annually. As they search for a solution to these challenges, four in five (81%) CIOs said they think Artificial Intelligence (AI) will be critical to IT's ability to master increasing IT complexity, with 83% either already deploying AI or planning to do so in the next 12 months.
“Today’s organizations are under huge pressure to keep up with the always-on, always-connected digital economy and its demand for constant innovation,” said Matthias Scharer, VP of Business Operations, Dynatrace. “As a consequence, IT ecosystems are undergoing a constant transformation. The transition to virtualized infrastructure was followed by the migration to the cloud, which has since been supplanted by the trend towards multi-cloud. CIOs have now realized their legacy apps weren’t built for today’s digital ecosystems and are rebuilding them in a cloud-native architecture. These rapid changes have given rise to hyper-scale, hyper-dynamic and hyper-complex IT ecosystems, making it extremely difficult to monitor performance and to find and fix problems fast.”
The research further identified the challenges that organizations find most difficult to overcome as they transition to multi-cloud ecosystems and cloud native architecture. Key findings include:
76% of CIOs say multi-cloud makes it especially difficult and time-consuming to monitor and understand the impact that cloud services have on the user experience
72% are frustrated that IT has to spend so much time setting up monitoring for different cloud environments when deploying new services
72% say monitoring the performance of microservices in real time is almost impossible
84% of CIOs say the dynamic nature of containers makes it difficult to understand their impact on application performance
Maintaining and configuring performance monitoring (56%) and identifying service dependencies and interactions (54%) are the top challenges CIOs identify with managing microservices and containers
“For cloud to deliver on expected benefits, organizations must have end-to-end visibility across every single transaction,” continued Mr. Scharer. “However, this has become very difficult because organizations are building multi-cloud ecosystems on a variety of services from AWS, Azure, Cloud Foundry and SAP amongst others. Added to that, the shift to cloud native architectures fragments the application transaction path even further.
“Today, one environment can have billions of dependencies, so, while modern ecosystems are critical to fast innovation, the legacy approach to monitoring and managing performance falls short. You can’t rely on humans to synthesize and analyze data anymore, nor on a bag of independent tools. You need to be able to auto-detect and instrument these environments in real time and, most importantly, use AI to pinpoint problems with precision and set your environment on a path of auto-remediation to ensure optimal performance and experience from an end user’s perspective.”
Further to the challenges of managing a hyper-complex IT ecosystem, the research also found that IT departments are struggling to keep pace with internal demands from the business. 74% of CIOs said that IT is under too much pressure to keep up with unrealistic demands from the business and end users. 78% also highlighted that it is getting harder to find time and resources to answer the range of questions the business asks and still deliver everything else that is expected of IT. In particular, 80% of CIOs said it is difficult to map the technical metrics of digital performance to the impact they have on the business.
By Steve Hone, CEO, The DCA
I would like to thank all the members who contributed to this month’s journal topic. Next month you will be able to hear from some of our many Strategic Partners on the collaborative work being undertaken and The DCA will be providing a review of the EU Commission funded EURECA project which officially concludes this February. Finally, I look forward to seeing you all in London at DCW18, the DCA are on stand D48.
James Cooper, Product Manager from ebm-papst UK, takes a look at understanding the impact of inefficient cooling and airflow management in legacy data centres; and how simple steps, including upgrading the fan technology, can make a system more efficient and give a new lease of life to old equipment.
We are seeing a definite trend towards more investment in data centre efficiencies, by removing legacy components and replacing them with innovative and more efficient systems. Organisations are continuously looking at new ways to optimise their data centre infrastructure and the critical resources they house.
Power and cooling, rather than floor space, are becoming the largest limitations. On average, around 50% of all the electrical energy consumed in a data centre goes on cooling, so this has to be one of the main focuses when looking for efficiency savings.
Operators are striving to become more efficient and to better understand how to design their data centres. Effective airflow management is crucial, and upgrading legacy equipment and installing new airflow management systems are key considerations for achieving optimum results.
An average legacy data centre in the UK has a PUE of 2.5, which means that only 40% of the energy used is available for the IT load.
Another way of expressing this is the metric DCiE (Data Centre infrastructure Efficiency), the reciprocal of PUE, which would indicate that a data centre with a PUE of 2.5 is 40% efficient.
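Since DCiE is simply the reciprocal of PUE, the conversion is a one-liner; a quick sketch (the PUE figures are the illustrative ones used above):

```python
def dcie_from_pue(pue: float) -> float:
    """DCiE is the reciprocal of PUE, expressed as a percentage."""
    return 100.0 / pue

# A legacy data centre with a PUE of 2.5 is 40% efficient:
print(dcie_from_pue(2.5))  # 40.0
# A modern ambient-air facility at PUE 1.2 reaches roughly 83% efficiency:
print(round(dcie_from_pue(1.2), 1))  # 83.3
```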
Unless it is a modern data centre, most of the energy goes on mechanical cooling, which in the eyes of most IT people is a necessary evil. It doesn’t have to be that way: cooling methods and technology have moved on significantly and there are many ways to make cooling more efficient. Modern data centres designed to use ambient air, directly or indirectly, are becoming increasingly popular and show much more efficient results, with PUEs of 1.2 or better.
There is a range of cooling options in data centres. Traditional CRAC units (Computer Room Air Conditioners), which tended to sit against the walls of the room or in a corridor blowing under the floor, have seen a more recent influx of aisle- or rack-level units that take DX or chilled-water cooling closer to the server, enabling higher-density cooling. Direct and indirect fresh-air cooling is also a viable option for the UK, since the ambient temperature here is below 12°C for around 60% of the year. Adiabatic cooling has also seen a revival recently, even though it is most efficient in hotter climates.
The problem is that, certainly in legacy data centres, there are limited options for modifying the structure of the building to take advantage of new ideas. Most data centres run on partial load and never get anywhere near their original design capacity. Although high-density racks are available that can support 60kW+ per rack, in the past few years the average rack density has barely risen above 4kW per cabinet (less than 2kW/m2).
A good strategy is to start with the low-hanging fruit and be realistic about what your infrastructure can support; this will help narrow the options. Two critical components within the cooling system should be the focus: the compressors and the fans. If you can improve the efficiency of the cooling circuit so that the compressors run for less time, this will lead to a huge energy saving. If you can use the latest EC fan technology and reduce the airflow when it is not required, this too will deliver big energy savings.
Air is lazy! If air can find an easy route, it will take it. One of the biggest and most easily fixed wastes of energy is the lack of air management: if air can escape and bypass a server it will, wasting energy. Plugging gaps and forcing the air to go only to the front of the racks is an easy first step in improving efficiency. The Uptime Institute has indicated that simple airflow management and best practice could improve a PUE from 2.5 to 1.6.
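The facility-level saving implied by a move from PUE 2.5 to 1.6 can be estimated with a rough model (the 1MW IT load below is purely illustrative, not a figure from the Uptime Institute):

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw = IT load x PUE."""
    return it_load_kw * pue

it_load = 1000.0                           # illustrative 1MW IT load
before = facility_power_kw(it_load, 2.5)   # 2500 kW total draw
after = facility_power_kw(it_load, 1.6)    # 1600 kW total draw
saving = before - after                    # 900 kW of overhead eliminated
print(f"{saving:.0f} kW saved ({saving / before:.0%} of the original draw)")
# 900 kW saved (36% of the original draw)
```

The same IT work gets done either way; the saving is entirely in the cooling and distribution overhead.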
Fans are critical to the movement of air around the data centre. Legacy units may contain old inefficient AC blowers with belt drives that break regularly and shed belt dust through the room. They are usually patched up and kept going, because to change a complete CRAC unit can be costly and sometimes physically impossible. Typically, the fans are running at a single speed, and due to most data centres requiring only partial load, airflow management is controlled by shutting off air vents or switching units off completely.
Upgrading to EC fans (a plug-and-play solution with no need to set up and commission separate drives) is one way to achieve an immediate saving. With modern EC fan technology there is no need for belts and pulleys, and motor efficiencies are significantly higher, at over 90%. The main benefit is that EC fans are easily and cost-effectively speed-controlled, allowing a partial-load data centre to turn the airflow down to only what is needed. The amount of turn-down depends on the capabilities and type of unit, and on whether it is DX or chilled water.
A 50% reduction in airflow can mean an EC fan consuming just 1/8 of the power, with a potential noise reduction of 15dB(A)! Added to cooler running, maintenance-free operation and longer lifetimes, this offers a simple and cost-effective improvement.
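The 1/8 figure follows from the fan affinity laws: airflow scales linearly with fan speed, but shaft power scales with its cube. A minimal sketch:

```python
def fan_power_fraction(airflow_fraction: float) -> float:
    """Fan affinity laws: airflow scales linearly with speed,
    while shaft power scales with the cube of speed."""
    return airflow_fraction ** 3

# Halving the airflow needs only an eighth of the fan power:
print(fan_power_fraction(0.5))  # 0.125
# Even a modest 20% turn-down roughly halves fan power:
print(round(fan_power_fraction(0.8), 2))  # 0.51
```

This cube relationship is why speed control on partial-load sites pays back so quickly: small turn-downs yield disproportionately large savings.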
Improving airflow within the data centre means that upstream systems – chillers, condensers and so on – can be turned down, so even more energy can be saved. As technology advances and equipment becomes smaller and faster, the future of cooling is secure: whatever the choice of medium, there will be a need to keep equipment cool, efficiently.
Innovations and concepts around data centre architecture and design will force the sector to think differently when trying to achieve optimal efficiency and create competitive advantage. Data centres will continue to expand as demands, both social and business, for digital content and services grow. IT loads will become more computational/energy efficient and, as a result, far more dynamic. Thermal management will improve and PUEs will fall to 1.1-1.2 in Europe. Air will take over from chilled water as the most economic cooling solution, and variable speed EC fans will become mandatory for either pressure or capacity control with the modern efficient data centre.
By John Booth, The DCA Energy Efficiency Steering Group Chair, MD of Carbon3IT Ltd and Technical Director of the National Data Centre Academy
Back in May 2015 I wrote the following article for publication in the DCA Journal. Recently, having been asked to undertake some work for an innovative company in this space, I thought I would dig out the article, see if anything had changed and make some edits.
Sadly, I must report that, whilst some new companies have entered the space, not much has changed. The underlying rationale still has credence and the sentiment is still correct, but there is still not much movement at scale.
The original article is in black, whilst my updated comments are in blue.
The use of liquid cooling technologies in the ICT industry is very small, but is this about to change? Perhaps, but at a slower pace than I expected.
First, we need to understand why liquid cooling is gaining ground in a traditionally air-cooled technology area. Rack power densities are projected to rise: a recent Emerson Network Power survey (Data Centre 2025) indicated that 70% of 800 respondents expected rack power to be at or above 40kW by 2025. High rack power drives increased power distribution losses, increased airflow and power consumption, lower server inlet air temperatures and increased chiller power consumption.
Judging from anecdotes from industry, rack densities have remained static in colocation sites, and even new builds have not been fitted out for much more than 10kW. We’re not sure this remains true for hyperscale, but then the hyperscalers are adopting OCP, which runs on DC power. We can rest assured that we still have seven years left until the forecast is completely trashed!
Energy costs will rise globally sooner rather than later if fossil fuels continue to be the primary source, owing to scarcity of supply. Even as alternative energy solutions become more widely adopted, integrating the new plant will require new infrastructure, all of which demands massive investment globally.
The costs of which will eventually be passed on to the consumer.
Most of the above still applies. With BREXIT (which was only a twinkle in a frog's eye in May 2015) and recent news, such as the closure of coal-fired power stations in the UK, the dependence on Dutch and French power to supplement our own generation could bring energy shortages to the UK earlier than I expected.
We must also be mindful of a whole raft of regulation and energy taxation that could have a serious impact.
Thermal management options will dictate rack power density and will have an impact on energy efficiency.
Finally, users are having an impact. High performance computing has always been drawn to liquid cooling options, but non-academic uses such as bitcoin mining, pharma, big data, social media, AI and face recognition are all driving a need for energy efficient compute.
Having visited some bitcoin mines, (albeit virtually via a well known online video platform), it seems that liquid cooling is not in their thoughts at the present time, although recent news from Iceland (February 2018) suggested that soon the bitcoin mining community would be using MORE energy than is required for residential and commercial properties which could change the dynamic somewhat. The other users don’t seem to have materialised…yet!
The Emerson Network Power survey suggested that 41% of respondents felt that in 2025 air and liquid would be prevalent, 20% ambient air (free cooling), 19% cool air, 11% liquid, and 9% immersive.
I'd stand by these numbers; after all, it is 2025 we are talking about, and we are a way off yet!
It is the 20% comprising liquid and immersive technologies that are of real interest.
Liquid cooling can be split into four main types. The first is "on chip": liquid cooled chipsets that use heat sinks to dissipate the heat generated, with the hot air expelled in a similar fashion to conventional cooling.
The second is single phase immersion, where all thermal interface materials are replaced with oil-compatible versions, fans are removed, optical connections are positioned outside of the bath and the servers are immersed in an oil bath.
The third is two phase immersion, where thermal interface materials are replaced with dielectric-compatible versions, fans are removed, rotating media (storage) are replaced with solid state drives, optical connections are located outside the bath and the servers are immersed in a dielectric solution.
The final method is dry disconnect, where heat pipes transfer heat to the edge of the server, the server connects to a water-cooled wall via a dry interface and the rack itself is connected to a coolant distribution system.
Nothing to see here; the technology hasn't changed much in three years, and most of that change has been in heat transfer technology, i.e. better valves, fewer leaks and improved monitoring.
PUEs of 1.1 or lower have been cited for pure liquid cooled solutions, but these figures have largely come from single rack installations and may not have considered the additional water pumping required when installed at large scale.
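The arithmetic behind a 1.1 figure can be sketched quickly. PUE is total facility power divided by IT power, so the cited number implies only ~10% overhead for cooling, pumping and distribution combined. All the power figures below are assumed example values, not measurements from the article.

```python
# Hedged sketch of the PUE arithmetic: PUE = total facility power / IT power.
# All kW figures are illustrative assumptions.

def pue(it_power_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return (it_power_kw + cooling_kw + other_overhead_kw) / it_power_kw

air_cooled = pue(it_power_kw=1000, cooling_kw=400, other_overhead_kw=100)  # 1.5
liquid = pue(it_power_kw=1000, cooling_kw=60, other_overhead_kw=40)        # 1.1
liquid_at_scale = pue(it_power_kw=1000, cooling_kw=90, other_overhead_kw=40)

print(f"Air-cooled PUE: {air_cooled:.2f}")
print(f"Liquid-cooled PUE (single rack): {liquid:.2f}")
print(f"Liquid-cooled PUE with extra pumping at scale: {liquid_at_scale:.2f}")
```

The third line illustrates the article's caveat: add even a modest water-pumping load at scale and the headline figure creeps up.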
Ah, yes, at scale. Well, that is largely the trouble: no one big user has committed to using liquid cooling at scale. There are plenty of pilots and demonstrations but none at any kind of scale, although my spies tell me that there have been some developments, none of which are ready to be revealed as yet. Perhaps in another three years, when no doubt I'll dust this report off again and have a little revise!
That said, considerable savings can be made by switching to liquid cooling: a recent Green Grid liquid cooling webinar cited figures of an 88% reduction in blower power and a 97% reduction in chiller power.
I still believe that the savings are possible.
Liquid cooling also has another benefit in that inlet temperatures can be higher (up to 30-40°C) and useful heat in liquid form (up to 65°C) is the output.
ASHRAE released the "W" classifications in 2016. These range from W1 to W3, where the operating range is similar to air cooled equipment and is designed to integrate with conventional air cooling systems, through W4, where the input temperature is higher but the upper range is only 45°C, to W5, the hot water cooling range above 45°C.
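The class boundaries can be expressed as a simple lookup. Note an assumption here: the W1-W3 upper supply temperatures below (17, 27 and 32°C) are commonly quoted figures that should be verified against the ASHRAE guidelines themselves; only the 45°C boundary for W4/W5 comes from the text above.

```python
# Sketch of the ASHRAE "W" class boundaries. The W1-W3 limits are commonly
# quoted values (an assumption, verify against ASHRAE); W4 <= 45 degC and
# W5 > 45 degC follow the article.

W_CLASS_MAX_SUPPLY_C = {"W1": 17, "W2": 27, "W3": 32, "W4": 45}

def w_class(facility_water_temp_c: float) -> str:
    """Return the lowest W class whose limit covers the supply temperature."""
    for cls, limit in W_CLASS_MAX_SUPPLY_C.items():
        if facility_water_temp_c <= limit:
            return cls
    return "W5"  # hot-water cooling, above 45 degC

print(w_class(30))  # W3
print(w_class(60))  # W5
```

A W5 output at 60°C is exactly the grade of heat that becomes saleable to the "heat brokers" discussed below, rather than being rejected to atmosphere.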
This means that the data centre can now be connected to CHP systems or the heat sold for other purposes. The data centre will need to become part of an ecosystem: one where waste water from an industrial process can be used for cooling, and where the waste heat from the data centre can be used for another purpose.
Indeed, this presupposes that you have a client who will "buy" your heat locally. In Amsterdam in January 2018, Jaak Vlasveld of Green IT Amsterdam spoke about the concept of "heat brokers", who would sit between data centre operators and potential heat users to deal with the commercial side of things. Personally, I'd like to see some enhanced thinking on this: perhaps co-locating heat producers such as liquid cooled data centres (which yield medium/high grade heat; air doesn't provide the necessary heat transfer properties and is low grade) with any one of a host of potential heat users: swimming pools, urban farms, textiles, office heating and so on.
The outlook for liquid cooling in the data centre arena.
Clearly, adoption is on the up: in the Emerson survey, many of the respondents expected to see more liquid cooled solutions in the space come 2025, which after all is only ten years away.
But, and it is a big but, a lot of investment has been made globally in the construction of air cooled data centres, and these are not well suited to the wide scale adoption of liquid cooling technologies. Add the "ecosystem" element, which will no doubt cause some problems for managers and designers, and you have another set of reasons not to adopt.
It is difficult to see past the clout of the vendors with air cooled solutions, who ultimately determine what technology is installed, unless they either develop their own tech or buy one of the current players.
Many of the liquid cooled solutions are the domain of smaller, dare I say, niche players, and there may be difficulties in a rapid adoption of liquid cooling through a lack of equipment, skills and infrastructure. This, of course, will be negated if a big giant comes calling.
Hmm, I'll have to polish my crystal ball more often; it's looking a little dusty. However, I'm pleased with my comments from May 2015. So far, so good.
That said, liquid cooling in my mind is a disruptive technology and we await developments with interest.
Next update in 2021!
By Richard Clifford, Keysource
With heightened competition driving the need for new efficiencies to be found across data centre estates, Richard Clifford, head of innovation at critical environment specialist Keysource, discusses some of the key drivers for change in the data centre market.
With increased competition and tighter margins comes new impetus for operators to identify efficiencies. Implementing more efficient cooling systems and streamlining maintenance procedures are well explored routes to doing this, but they also represent low hanging fruit in terms of cost savings. Competition in the co-location and cloud markets is heating up, and so data centre operators are going to have to be more imaginative if they are to stay ahead of the curve.
Some notable trends are likely to accelerate over the next five years and operators would be wise to consider how they can be incorporated into their estates.
The resurgence of the edge-of-network market is one. This relies on a decentralised model of data centres that employ several smaller facilities, often in remote locations, to provide an ‘edge’ service. This reduces latency by bringing content physically closer to end users.
The concept has been around for decades, but it fell out of favour with businesses with the advent of large, singular builds, which began to offer greater cost-efficiencies. That trend is now starting to reverse, due in part to the rise of the Internet of Things and a greater reliance on data across more aspects of modern life. Growing consumer demands for quicker access to content is likely to lead to more operators choosing regional, containerised and micro data centres.
Artificial Intelligence (AI) is also set to have a transformational impact on the industry. As in many sectors, the potential of AI is becoming a ubiquitous part of the conversation in the data centre industry, but there are few real-world applications of it in place. Currently, complex algorithms are used to lighten the burden on management processes. For example, some operators are using these systems to identify and regulate patterns in power consumption or temperature that could indicate an error or inefficiency within the facility. Managers can then deploy resources to fix it before it becomes a bigger problem or risks downtime. Likewise, they can also be used to identify security risks, for example recording if the data centre has been accessed remotely or out of hours and reporting any unusual behaviour.
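The kind of pattern monitoring described above can be sketched in a few lines: flag readings that drift several standard deviations away from a recent rolling mean. The window size and threshold below are assumed values, and real DCIM/AI tooling is of course far more sophisticated than this toy detector.

```python
# Hedged sketch of simple anomaly detection on a power feed: flag readings
# that deviate more than `threshold` standard deviations from the rolling
# mean of the preceding `window` samples. Parameters are assumed values.

from statistics import mean, stdev

def anomalies(readings_kw, window=8, threshold=3.0):
    """Return indices of readings that break from the recent pattern."""
    flagged = []
    for i in range(window, len(readings_kw)):
        recent = readings_kw[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings_kw[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A stable feed with one spike at index 10
feed = [100.0, 101.0, 99.5, 100.5, 100.2, 99.8, 100.1, 100.3, 99.9, 100.4, 140.0]
print(anomalies(feed))  # the spike at index 10 is flagged
```

A manager alerted by something like this can investigate before the inefficiency becomes downtime, which is precisely the use case described above.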
This is still in the early stages of development and, at the moment, AI still relies on human intervention to make considered decisions, rather than automatically deploying solutions. But as the industry learns to embrace this tool more, we're likely to see its capability expand. Specialist research projects such as IBM Watson and Google DeepMind are already focusing on creating new AI systems that are self-aware, which can be incorporated into a cloud offering and solve problems independently, lessening the management burden even further.
As the implementation of edge networks grows, it is likely that AI will have a greater role in managing facilities remotely. To work successfully, edge data centres must be adaptable, modular and remotely manageable as ‘Lights Out’ facilities, serviced by an equally flexible workforce and thorough management procedures – a perfect example of where AI can pick up the burden. Likewise, storing information in remote units brings increased security risks and businesses will need to consider a vigilant approach to data protection to meet legal obligations and identify threats before they cause damage. Introducing AI algorithms that can remotely monitor security and day-to-day maintenance will go some way to reassuring clients that these risks can be mitigated through innovation.
Innovation must be a carefully considered decision for data centre operators. Implementing an innovative system represents a significant capital investment and it can be difficult to quantify a return. New processes need to be adopted early enough to give a competitive advantage, while caution needs to be exercised to avoid being the first to invest in brand new technology only for it to become obsolete a year later. Striking a balance between these two considerations will be key for data centre operators looking to grow their market share in such a competitive sector - despite the risk, when innovation works successfully the payoff can be huge.
For more information: http://www.keysource.co.uk/
By Tim Mitchell, Klima-Therm
One of the key areas of innovation in the data centre industry over the past ten years has been improvement in energy efficiency. It is central to both improving profitability through reducing running costs, and enhancing Power Usage Effectiveness (PUE) and other energy-related metrics to meet the sustainability requirements of corporate clients.
Despite the emergence of more temperature tolerant chips, one of the biggest components of data centre power usage remains cooling. There have been attempts to manage this with fresh-air-only ventilation systems, but issues with latent system requirements, space constraints and concerns around reliability and resilience mean that mechanical cooling – of one sort or another – remains the default choice.
Mechanical cooling relies on refrigerant compressors, an area of technology that had remained more or less static for decades until the emergence of compact centrifugal compressors ten years ago. Their appearance marked the start of a new era for air conditioning efficiency. In those early days, few recognised the impact this rather esoteric new technology would have on the market and the wider industry.
As one of the first adopters in the world of this new approach, I will admit we were slightly mesmerised by the idea of harnessing magnetic levitation bearings in a compact centrifugal design. It was a compelling proposition, as it overcame the need for oil in the compressor, thereby avoiding at a stroke all the problems that accompany conventional compressor lubrication, spanning operation (especially low-load operation as data halls are populated) and the requirements of ongoing service and maintenance.
That advance alone would have been highly significant, and a major advantage for both data centre operators and service companies. However, when you add the exceptional efficiency gains – generally 50 per cent better than traditional systems – smaller chiller footprint, low start-up current, low noise operation, long-term reliability and overall low maintenance requirements, it is not hard to see why compact centrifugal technology has become such a game-changer.
In a nutshell, it enabled more cooling from less energy in a more compact space, and required less power to start and fewer service visits to maintain. The fact that it is compatible with efficient and stable low Global Warming Potential (GWP) HFO refrigerants is a further major plus for data centre operators looking to future-proof themselves against legislative changes.
It is a classic example of a disruptive technology rewriting the rules. The early sceptics have been proved decisively wrong. With the technology proven, multi-million pound investments are now being made in expanding production of Turbocor compressors and further refining the technology.
With take-up growing across the world, manufacturers of traditional cooling compressors are now looking to develop their own magnetic levitation-based compact centrifugal systems. However, as often with disruptive technology, it is a steep research and development curve and requires very substantial investment.
Early adopters are already working on the next generation of systems, which take the gains delivered by Turbocor-based systems to the next level. A stand-alone compact centrifugal compressor is already highly efficient, and the opportunities for wringing further efficiency gains are inherently limited. However, there are significant opportunities for improving efficiency in terms of the overall chiller design and the performance of other key system components.
Heat exchangers are a key area of interest. For example, the recently introduced Circlemiser chiller from our supply partner Geoclima, based on Turbocor compressors, uses a new design of condenser heat exchanger. This delivers improvements of up to 15 per cent on Turbomiser's already outstanding EER* rating. It is believed to be the most efficient dry air-cooled chiller in the world.
This has been made possible by replacing the traditional flat coils with cylindrical condensers, and the use of flooded evaporators in a cascade system. By packing more active heat exchange surface into a given space, the heat exchange capacity of Circlemiser's cylindrical microchannel condensers is increased by 45 per cent compared with traditional condensers.
Cylindrical heat exchangers increase heat exchanger capacity in both the rejection and delivery sides by reducing condensing temperature as well as evaporator approach temperature. Importantly, this improvement in performance is achieved without increasing the chiller footprint, enabling more cooling capacity in a given space, while significantly reducing energy consumption.
With rooftop and plant room space often at a premium in new and refurbishment projects, this offers a major advantage over both conventional chillers and standard Turbomiser machines.
The use of a cascade system with flooded evaporators helps reduce the ΔT between evaporation temperature and outlet temperature of the chilled water. This increase in evaporation temperature further reduces energy consumption.
Comparing the like-for-like performance of Circlemiser with standard air-cooled Turbomisers (at AHRI/EUROVENT conditions, with the same number and model of compressors), Circlemiser shows an increase in EER of up to +9.5 per cent with one compressor, and up to +15 per cent with multiple compressors. The highest EER value achieved is 4.35, but the most staggering comparison comes at part load or full load / less than peak ambient temperature conditions, where gains of +25 per cent are typical compared to even the most efficient existing Turbocor-based machines.
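For readers less familiar with the metric, EER here is simply cooling output divided by electrical input (see the note at the end of this article), so the percentage uplifts compound directly onto the baseline ratio. The baseline kW figures below are assumed examples; only the 4.35 peak value and the percentage uplifts come from the text.

```python
# Minimal sketch of the EER arithmetic: EER = cooling output / electrical
# input. Baseline figures are illustrative assumptions, chosen so that a
# +15% uplift lands on the quoted 4.35 peak.

def eer(cooling_kw: float, electrical_kw: float) -> float:
    """Energy Efficiency Ratio (COP for cooling)."""
    return cooling_kw / electrical_kw

baseline = eer(cooling_kw=378.3, electrical_kw=100.0)  # assumed baseline
uplifted = baseline * 1.15                             # +15% with multiple compressors
print(f"Baseline EER: {baseline:.2f}, with +15% uplift: {uplifted:.2f}")
```

Put another way: at the same cooling duty, an EER improvement from ~3.78 to 4.35 means drawing roughly 13% less electricity from the grid.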
This gives Circlemiser equivalent efficiency to Turbomiser chillers equipped with adiabatic evaporative systems (at 50 per cent Relative Humidity), but without the additional cost and complication of installation and maintenance associated with adiabatic systems.
It lends itself to use in refurbishment projects, where data centre cooling loads may have increased over time and the existing chiller is now under-sized, but where plant space is restricted and unable to accommodate a larger replacement. This situation arises often, particularly in city centres.
While manufacturers of conventional compressor technology seek to catch up with the compact centrifugal revolution, those who pioneered the technology are already several laps ahead – on both the refinement of the base technology itself and its application in ever-more efficient systems. The early adopters bore the risks, but are now harvesting the fruits of embracing this game-changing new approach.
For data centre operators wrestling with the often conflicting demands of rising cooling loads, limited power head-room and site space restrictions, these latest developments in heat exchange and compressor technology offer potential solutions not hitherto possible with conventional mechanical cooling.
For more details, contact Tim Mitchell 07967 030737; firstname.lastname@example.org
*EER = Energy Efficiency Ratio, the COP (coefficient of performance) for cooling, in accordance with ANSI/AHRI STANDARD 551/591 (SI).
By Graham Hutchins, Marketing Manager, Simms - Memory and Storage Specialists
Data centres are facing a tough challenge – do more with less. For them transformation is not a choice any longer, it’s do or die, innovate or stagnate.
Most of us by now have heard Google’s Eric Schmidt’s ‘5 exabytes’ quote, which is the amount of data produced from the dawn of civilisation to 2003, and which we now create every two days. It’s safe to say that data and IT are no longer business enablers, they are the business. The use of data has to impact positively on any company’s P&L.
Focusing on the hardware element of this, positive impact is ultimately achieved when a datacentre or hosting company identifies ways in which it can reduce infrastructure and operational costs, whilst enhancing (best case), not impacting (worst case), customer experience.
This is best achieved when the data use case(s) drive technology selection. This new approach is already being heavily adopted in North America. Tech-savvy system integrators (Google, Amazon, Facebook etc.), with complex and large data requirements, have established that relying exclusively on out-of-the-box configurations from top-branded server manufacturers is no longer offering them the best value. There are options at all levels.
So what changes are we seeing from a memory and storage perspective for the transforming data centre? Memory and storage for the enterprise environment is evolving rapidly in a quest to enhance QoS, lower latency, perform faster, increase capacity, reduce power, improve reliability and productivity, reduce failure rates, eliminate downtime and give a better return on investment. We also need to understand what's happening in the market.
With 3D NAND flash production continuing apace (by the end of 2018 3D NAND will account for 74% of all NAND flash) we are seeing some major developments and shifts in data centre memory and storage. 3D NAND is quickly making its way into the next generation of SSD enterprise storage with Samsung, Intel, Toshiba and others already utilising this new technology. 3D NAND opens the doors to all kinds of possibilities; 128-layer is widely touted to be available shortly with some laboratories also showcasing 200 layer flash chips.
In addition to this manufacturers are looking for new ways in which to innovate in a bid to stay ahead of the competition and deliver real value. For example, Intel is breaking ground with its pioneering Optane memory – the first all-new class of memory in 25 years - creating a bridge between DRAM and storage to deliver intelligent, highly-responsive computing for the HPC market.
Intel has also announced plans to release a new form factor for server class SSDs called the ‘Ruler’. The design is based on the in-development Enterprise & Datacentre Storage Form Factor (EDSFF) and is intended to enable server makers to install up to 1 PB of storage into 1U of rack space while supporting all enterprise-grade features. Watch this space!
But what does all this mean for the data centre? Despite being historically cost-prohibitive, SSD technology, with its unrivalled high capacities compared to HDDs, is becoming the mainstay of data centre storage. The game is changing and savvy datacentre managers are looking at new SSD tech to increase performance and reduce cost.
The UK data centre market is fiercely competitive, so how can one differentiate? Aside from the obvious security, compliance, disaster recovery, and SLA considerations (all pretty standard stuff), it really comes down to the performance of kit, which is where SSDs outshine their HDD cousins.
SAS and SATA interfaces have been around for a number of years and offer good transfer speeds but, as with all tech, limitations will depend on the use case. SSD manufacturers are looking ahead to the next generation of interfaces, with manufacturers such as Intel and Kingston really starting to push PCIe NVMe storage as the next big thing. This technology is fast becoming a firm favourite with data centres due, in part, to its blistering speeds.
That said, you wouldn't choose a blisteringly quick convertible sports car on a rainy day to do lengthy business mileage if you had access to something more economical and comfortable. The same applies to SSD selection: the trick is to understand what benefits each offers to your environment and make an educated decision. Reach out to the technology specialists and manufacturers for help, as assumptions can invariably prove costly.
As specialist distributors, we witness first-hand what is happening in the memory and storage market, and trends indicate that PCIe NVMe is moving, and moving fast. 2.5", HHHL AIC and M.2 form factors give data centres the flexibility of choosing the right solution for different storage servers. Coupled with 3D NAND flash, the performance really is mind-blowing, especially for the next generation data centre. We have seen a substantial upsurge in enterprise storage sales since Q3 2017 for PCIe SSDs. The key thing to note here is that SSD technology is now being exclusively developed for the data centre environment, whereas previous generations were designed for entirely different applications.
These SSDs are having a big impact in how system architects build systems and how developers create applications. The big boost in IOPS they provide helps to keep today’s fast CPUs continually fed with data. But selecting the right SSD is not as simple as it may appear. Major server builders will of course push their approved storage choice and scaremonger data centres about invalidating warranties if non-approved storage is used. However, manufacturers such as Intel and Kingston can offer data centres a choice, backed up with extensive warranties and a robust suite of services that ensure comprehensive protection whilst using their technology.
Power loss protection, faster and more consistent read/write speeds and encryption are all key benefits of SSD over HDD. SSDs also consume less power. Given that over 50% of a data centre's cost is energy-related, and the importance of a good PUE (Power Usage Effectiveness) rating as a key indicator of efficiency, the debate gets very interesting. Throw in health monitoring systems for SSDs, where IT managers and CIOs can pre-empt possible failures, and the argument is all but won. You simply cannot do that with HDDs.
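The energy argument is easy to quantify in outline: every watt saved at the drive also saves the facility overhead that PUE multiplies onto it. The per-drive wattages, fleet size and electricity price below are assumed example figures, not vendor data.

```python
# Hedged illustration of the drive-level energy argument. Per-drive watts,
# fleet size, PUE and tariff are all assumed example values. Facility
# overhead is applied via PUE, so watts saved at the drive also save
# cooling/distribution watts.

def annual_energy_cost(drives: int, watts_per_drive: float,
                       pue: float, price_per_kwh: float = 0.12) -> float:
    """Yearly electricity cost of a drive fleet, including PUE overhead."""
    facility_watts = drives * watts_per_drive * pue
    kwh_per_year = facility_watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

hdd_cost = annual_energy_cost(drives=1000, watts_per_drive=8.0, pue=1.5)
ssd_cost = annual_energy_cost(drives=1000, watts_per_drive=3.0, pue=1.5)
print(f"HDD fleet: {hdd_cost:,.0f}/yr, SSD fleet: {ssd_cost:,.0f}/yr")
```

Under these assumptions the SSD fleet's energy bill is well under half the HDD fleet's, before any performance benefit is counted.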
In our experience Read/Write IOPs, write bandwidth and endurance are arguably the key performance attributes that data centres are looking for. Using a flash-based PCIe NVMe storage device can help a specialised application, software defined storage or database achieve the performance its users ultimately demand.
With data centres taking on more workload, there is extra pressure on the memory to work harder. But is upgrading memory getting easier or more difficult? It really depends on your perspective. OEMs will, of course, push their own or approved third-party memory partners for upgrades. However, the cost to upgrade multiple servers using OEM memory can be prohibitive, so why not shop around? Memory manufacturers have come a long way, with DDR3/4 very much a mainstay and DDR5 expected in 2018 (potentially doubling the speed of DDR4). DDR5 is touted to further reduce power consumption while doubling the bandwidth and capacity relative to DDR4 (but until it is fully available we can only speculate). Add to that considerable potential cost savings and it has to be seriously considered.
Optimal memory choice ultimately makes a significant difference to performance. Exploring all the alternatives and selecting memory that is designed and manufactured to your system requirements is vital.
The European Managed Services & Hosting Summit 2018 is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships.
Previous articles here have reflected on the changes that the managed services model brings customers – their abilities to change their buying model to revenue-based, often to do more with less resources, and then adopt new working tools such as analytics, which just weren’t available at the right price before. The impact on the IT industry supplying those customers has been profound as well, requiring a real re-think of sales processes, built around a continuous relationship with the customer, not just a “sell-and-forget” on big-ticket items.
Obviously, the IT channel is attracted by the prospect of more sales by working in managed services, with the world market predicted to grow at a compound annual growth rate of 12.5% to 2019. But it is such a fundamental change in their structures, that some are thinking it a step too far, even under pressure from customers for the benefits that managed services can bring them. Those partners may find themselves rapidly left behind as the new model becomes the standard in most industries.
This, coupled with the ease of entry into the market for cloud-based solutions suppliers, means that the IT channel is having to face a whole new competitive threat. A business “born in the cloud” has an obvious advantage when trying to sell cloud services to a customer, compared with a traditional reseller – the cloud-based channel “eats its own dog-food”, to adopt a rather unwholesome phrase imported across the Atlantic.
So, in establishing the agenda for the European Managed Services and Hosting Summit in Amsterdam in May this year, the organisers are thinking beyond the obvious GDPR issues which will inevitably be in the headlines as its deadline comes round, and even the ever-popular M&A discussions of company value, to bring out a flavour of the sales-engagement process in managed services. We are asking our leading speakers to examine the business processes of the best managed services companies, to try to identify what makes them tick - and tick ever faster and with wider portfolios.
How is the sales process managed? How are the salespeople rewarded in the revenue model? How do they maintain that ongoing relationship with the customer in a cost-effective way? How do they ensure that the salesforce is motivated and retained in the longer term, while keeping them up-to-date with the latest information on the market, the technologies, and customers issues?
None of this is easy, and many managed service providers, integrators, traditional resellers and even those new and fast-growing “born-in-the cloud” supplier companies still have many questions to put to the experts, and the MSHS Europe is the perfect event at which to do this, with many leading suppliers on hand as well as industry experts.
The MSHS event offers multiple ways to get those answers: from plenary-style presentations from experts in the field to demonstrations; from more detailed technical pitches to wide-ranging round-table discussions with questions from the floor. There is no excuse not to come away from this with questions answered, or at least a more refined view on which questions actually matter.
One of the most valuable parts of the day, previous attendees have said, is the ability to discuss issues with others in similar situations, and we are all hoping to learn from direct experience, especially in the complex world of sales and sales management.
In summary, the European Managed Services & Hosting Summit 2018 is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships. More details:
Angel Business Communications have announced the categories and entry criteria for the 2018 Datacentre Solutions Awards (DCS Awards).
The DCS Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena, and are updated each year to reflect this fast moving industry. The Awards recognise the achievements of vendors and their business partners alike, and this year encompass a wider range of project, facilities and information technology award categories, as well as Individual and Innovation categories, designed to address all the main areas of the data centre market in Europe.
The DCS Awards categories provide a comprehensive range of options for organisations involved in the IT industry to participate. You are encouraged to make your nominations as soon as possible for the categories where you think you have achieved something outstanding, or where you have a product that stands out from the rest, to be in with a chance of winning one of the coveted crystal trophies.
This year’s DCS Awards continue to focus on the technologies that are the foundation of a traditional data centre, but we’ve also added a new section which focuses on Innovation with particular reference to some of the new and emerging trends and technologies that are changing the face of the data centre industry – automation, open source, the hybrid world and digitalisation. We hope that at least one of these new categories will be relevant to all companies operating in the data centre space.
The editorial staff at Angel Business Communications will validate entries and announce the final short list to be forwarded for voting by the readership of the Digitalisation World stable of publications during April and May. The winners will be announced at a gala evening on 24th May at London’s Grange St Paul’s Hotel.
The 2018 DCS Awards feature 26 categories across five groups. The Project and Product categories are open to end-user implementations, services, products and solutions that have been available, i.e. shipping in Europe, before 31st December 2017. Company nominees must have been present in the EMEA market prior to 1st June 2017, Individuals must have been employed in the EMEA region prior to 31st December 2017, and Innovation Award nominees must have been introduced between 1st January and 31st December 2017.
Nomination is free of charge, and each entry can include up to two supporting documents to enhance the submission. The deadline for entries is 9th March 2018.
Please visit www.dcsawards.com for rules and entry criteria for each of the following categories:
DCS Project Awards
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Automation and/or Management Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Data Centre Hybrid Infrastructure Project of the Year
DCS Product Awards
Data Centre Power Product of the Year
Data Centre PDU Product of the Year
Data Centre Cooling Product of the Year
Data Centre Facilities Automation and Management Product of the Year
Data Centre Safety, Security & Fire Suppression Product of the Year
Data Centre Physical Connectivity Product of the Year
Data Centre ICT Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre ICT Networking Product of the Year
DCS Company Awards
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Data Centre ICT Systems Vendor of the Year
Excellence in Data Centre Services Award
DCS Innovation Awards
Data Centre Automation Innovation of the Year
Data Centre IT Digitalisation Innovation of the Year
Hybrid Data Centre Innovation of the Year
Open Source Innovation of the Year
DCS Individual Awards
Data Centre Manager of the Year
Data Centre Engineer of the Year
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, The University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey.
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
The DATA strand will feature two Workshops, on Digital Business and Digital Skills, together with a Keynote on Security. Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high-profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DTC will explore these many issues and, hopefully, come up with some helpful solutions.
The CENTRE strand features two Workshops, on Energy and the Hybrid DC, with a Keynote on Connectivity. Energy supply and cost remain a major part of the data centre management piece, and this track will look at the technology innovations that are impacting the supply and use of energy within the data centre. Fewer and fewer organisations have a pure-play in-house data centre real estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So in-house and third-party data centre facilities, combined with a mixture of centralised, regional and very local sites, make for a very new and challenging data centre landscape. As for connectivity, feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast-moving world of networks, telecoms and the like.
The TRANSFORMATION strand features Workshops on Automation and The Connected World, together with a Keynote on Automation (AI/IoT). IoT, AI, ML, RPA – automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day-to-day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70-minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists (specialists and protagonists) in the subject. The workshops will ensure that delegates not only earn valuable CPD accreditation points but also have an open forum in which to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our Sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
Is One Better Than the Other, or Do You Need Both?
By Erik Rudin, VP of Business Development & Alliances at ScienceLogic.
For quite some time in the IT Service Management industry, agent versus agentless monitoring has been a fiercely disputed topic of conversation. Which is superior? Depending on who you ask, you're likely to get very different answers. With the rise of hybrid and multi-cloud infrastructures, we propose that a combination of both is the best way to keep on top of changing IT environments.
Need more convincing? We break down more of this debate below.
Tradition Versus Easy and Fast: The Pros and Cons
The agent approach is the more traditional procedure for data gathering and involves the installation of software (agents) on all computers from which data is required.
The agentless approach is likely to be easier and faster to deploy as it leverages what’s already available on the server without installing additional software.
So, Which Is Better?
Agents are fantastic for engaging in concentrated, deep, high-fidelity monitoring. They’re also highly applicable to log management, which is an increasingly important facet of modern applications.
Agentless monitoring provides that big picture view of what’s going on in your IT world.
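The distinction can be sketched in a few lines of code. This is an illustrative toy only, not any vendor's API: every host name, metric and function below is hypothetical. The agent model runs on the host and can push deep, per-process detail; the agentless model polls an inventory from the outside and returns only the headline view it can reach without installed software.

```python
# Hypothetical sketch of agent vs agentless collection. In practice the
# agentless poller would use an existing interface such as SNMP or WMI;
# here the inventory is simulated with plain dicts.

def agent_report(host_state):
    """Agent model: software on the host pushes deep, high-fidelity data."""
    return {
        "host": host_state["name"],
        "cpu_pct": host_state["cpu_pct"],
        "per_process": host_state["processes"],  # detail only an agent sees
        "log_tail": host_state["logs"][-3:],     # agents suit log management
    }

def agentless_poll(inventory):
    """Agentless model: a central poller sweeps every host for headline stats."""
    return {h["name"]: {"cpu_pct": h["cpu_pct"], "up": h["up"]} for h in inventory}

inventory = [
    {"name": "web-1", "cpu_pct": 42, "up": True,
     "processes": {"nginx": 12.0}, "logs": ["GET /", "GET /health"]},
    {"name": "db-1", "cpu_pct": 88, "up": True,
     "processes": {"postgres": 80.5}, "logs": ["checkpoint complete"]},
]

deep = agent_report(inventory[1])  # one host, full detail
wide = agentless_poll(inventory)   # every host, big-picture view
```

The trade-off falls straight out of the shapes of the two results: `deep` carries per-process and log detail for a single host, while `wide` covers the whole estate but knows nothing below the headline numbers.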
But there is one factor in this debate we cannot overlook: as if finding the right approach for your environment were not hard enough, you must also consider the rise of multi-cloud.
Hybrid Cloud Infrastructures: The Complicating Factor
Today, the ingredients of a modern IT infrastructure have changed. According to ScienceLogic research, one third of enterprises have more than 25 percent of their IT in the cloud and 81 percent already combine on-premises, private, and public cloud environments. When hybrid cloud environments and configuration management databases (CMDBs) are thrown into the mix, our agent versus agentless conversation becomes much more complicated.
Manual processes have become unmanageable in a hybrid world where cloud instances are spun up and down at will – after all, there are no people in the cloud. Hybrid cloud has also disrupted the CMDB, which simply hasn’t kept pace with multi-cloud environments. We see more and more failed and out-of-date CMDBs in organizations every day.
Jonathan Arnold - Managing Director at Volta Datacentres, shares his predictions for the IT and datacentre industry in 2018.
Stakeholders in the IT industry – vendors, the channel, consultants, analysts and end users – must look at the political world with envy when it comes to the certainties to be encountered at Westminster and beyond! Brexit is an almost guaranteed disaster: for the Brexiteers, we’ll end up being too generous when it comes to the much talked about ‘divorce bill’, and for the Bremoaners, any deal short of the existing one we have as part of the EU will be a disaster. Add in the certainty of financial and personal life scandals, stir with the ongoing will-she-or-won’t-she remain as Prime Minister, sprinkle in a pinch of economic meltdown as the lessons of 2008 seem well and truly forgotten (apparently there’s more personal debt now than there was before that previous crash – oh dear!), and that’s the year 2018 in politics.
Turning our attention to the world of IT and data centres, and nothing is quite so clear. Blink and you’ll miss the next technology development or business trend. Don’t believe me? Then think back over the past 10 years or so. Plenty has changed and, most notably, the pace of change has picked up dramatically, which makes planning for the future a particularly difficult task. That said, while there may be no ‘death and taxes’ certainties, there are some obvious trends that will continue, alongside the emergence of at least one new technology.
The IT and data centre industry has performed the in/out dance since its birth. One minute everything needs to be centralised. The next minute everything needs to be distributed. That model is now being replaced by what I call hybrid infrastructure – a balanced mix of centralised and distributed IT resource, housed in a mixture of large, consolidated data centre facilities and, increasingly, local, edge, micro data centres. Clearly, one size doesn’t fit all.
Similarly, there seems to be a final recognition that any organisation needs a mixture of in-house and external IT and data centre resources, including colo, cloud and managed services. Optimum flexibility, efficiency and cost requirements dictate such a policy. So we have the emergence of a hybrid, hybrid world: IT and data centre infrastructure owned and operated both in-house and externally, from a mixture of consolidated, centralised and distributed, localised facilities.
To reflect this change, it seems reasonable to expect an organisation’s workforce expertise to be transformed – especially when it comes to the IT department. There may no longer be the need for dedicated, single-discipline IT experts, as many of the required skills are being sourced externally. The data centre is finally being recognised as a business’s data/information resource, and this requires that the data centre professionals and the facilities folks bury the hatchet and work closely together. It’s been talked about for years, but it does need to happen. Add in the fact that completely non-technical staff should be enabled (within boundaries) to fire up IT resources without finding a bottleneck in the data centre or IT infrastructure, and we see the emergence of…a hybrid, almost virtualised, pool of human resource when it comes to specifying and implementing the infrastructure required to support any new application. That’s to say, personnel from all company departments need to be involved in the IT and data centre process.
Automation and orchestration are crucial to achieving all of the above, so expect big things from this management area over the next 12 months. Running a hybrid, agile and optimised data centre and IT infrastructure is no longer possible without high levels of automated management. It’s just surprising that there aren’t more comprehensive solutions out there. We seem to be stuck in the multi-console age, whereby far too many software programs each run small chunks of the business. That needs to change.
Technology-wise, things move fast in the IT space. Not so much in the data centre industry – where some of the established technologies are those discovered by the ancient Greeks! It may take a while for the pace of change to increase, but I’m reliably informed by several sources that foam batteries will have a major impact on the data centre space in the near future. The promise is of better safety, much higher densities, and lighter weight batteries. Foam batteries seem to be under the radar as far as Wikipedia is concerned, but there’s plenty of vendor information out there for those interested.
As the world goes wireless, Indi Sall, Technical Director, NG Bailey’s IT Services division asks if the days of the wired network are numbered.
The future is wireless: Should we still care about cables?
Compared to a few years ago, plugging things in has started to feel like a major hassle.
In the home environment the wireless smart hub has become the centre of the digital home, connecting smart TVs, wireless speakers, wireless printers, lighting, power and heating controls, and providing a platform for voice assistants such as Siri and Alexa. This wireless environment is also becoming increasingly visible in the corporate world, with networks now supporting all of the above in addition to applications such as unified communications and wireless conferencing solutions.
Wireless networks have had such a positive impact on the user’s digital experience that it is now impossible to imagine life without them. Does this mean that the time has come to abandon wires altogether?
In terms of throughput speed and continuity – two of the pillars by which today’s corporate networks are judged – the answer should be a plain ‘no’. Here, the wired world still reigns supreme. When considering flexibility, however, cables don’t just falter, they fall by the wayside completely. Have you ever seen anyone plug a cable into their smartphone so they could get faster Internet? Me neither.
The proliferation of the Internet of Things (IoT) has led to an increase in network connected devices. As sensors gradually appear in everything, from the clothes we wear to the chairs we sit on, our already-substantial reliance on wireless data is about to spike. Then there really is no going back.
Couple IoT with today’s increasing business appetites for cloud services and workforce mobility, then combine it with end-users’ insatiable consumption of streamed and social media, and it’s easy to see the growth in wireless connectivity and the need for investment in the latest wireless technologies in order to cope with demand.
Over the next three years, these forces will drive a huge upswing in demand for technologies that can optimise wireless infrastructure performance. Many owners of tenanted and high-occupancy buildings, in particular, are having to completely rethink their approach to ensure their estate offers state-of-the-art digital connectivity. Not only must building owners consider WiFi technologies, but they must also consider optimising the macro GSM network to ensure excellent 4G cellular coverage. Distributed antenna systems (DAS), which boost 3G and 4G network signals inside a property, are starting to gain serious traction. By enabling greater in-building availability of mobile operator services, DAS signal boosters can offset the ‘blocking effect’ that many buildings’ physical structures have on GSM signal coverage, overcoming the ‘poor mobile signal’ problem that many occupants of large buildings experience.
WiFi is also evolving. Increasingly, good-quality WiFi is being seen as a utility service within public buildings such as universities, hospitals, shopping centres, hotels, transport estates and town centres. Smart public WiFi systems, such as the one at the SSE Arena in Belfast, not only transmit zone-by-zone, high-density WiFi signals that enable entire crowds to connect at 50Mbps-plus speeds, but also offer patrons a personalised experience, with the ability to purchase merchandise digitally, receive targeted advertising, and use wayfinding or location services. Services like these are enabling physical venues and stores to compete with the online shopping experiences offered by the likes of Google and Amazon.
As the wireless radio spectrum becomes increasingly crowded, new technologies such as Li-Fi will enable new forms of data connectivity using LED light rather than radio waves, offering the potential to support even faster data speeds and capacity to support the huge expansion in the number of IoT devices, whilst providing energy efficient lighting at the same time.
Wireless gets all the attention. And with all these dazzling advances, should we still care about cables? As end-users, probably not. As guardians of the large buildings and shared spaces in tomorrow’s smart cities, however, the definitive answer has to be ‘yes’. This is because behind every headline-grabbing wireless solution is an unseen and indispensable structured cable network. These cables might pick up wireless data from miles away (with operator 3G or 4G services, for example). They might also take on your data from right beneath your feet or a few inches above your head. Either way, it takes a tremendous amount of hard cabling to backhaul today’s wireless data – something that isn’t going to change any time soon.
This is something that we must not forget. As the networks in our buildings undergo their inevitable digital upgrade, it will pay to remember that not every cabling solution is made equal. The quality of the back-end design will determine whether future wireless technologies can be integrated and how easily they can be supported. This is why the role of specialist systems integrators is so fundamental to the future success of our smart buildings. No one has the budget, let alone the desire, to rip and replace a brand-new system and, in the absence of global standards for structured cabling design, finding the right partner to architect as well as implement the network is crucial. The impact of getting the design wrong would be felt for years to come.
As everyday end-users of wireless technologies we can expect the gradual separation from the cabled world to continue. As this happens, however, maintaining an appreciation for the vital role that cabling plays in enabling today’s slick user experience couldn’t be more important. Although the number of traditional data points will fall, with a huge expansion in connected devices the infrastructure within modern digital buildings will need to be based on the latest cable standards, supporting 10Gb over copper and 100Gb over fibre. Now is the time when the back-end is being overhauled, and it is this work that will truly make or break the future of our digital world.
The application forms part of Schneider Electric’s EcoStruxure™ solution for data centres, and delivers detailed remote monitoring and critical information direct to users’ mobile phones.
Kelvin Hughes is a developer and manufacturer of navigational and radar systems for civil and military applications, with a manufacturing history dating back over 250 years. The company, which was acquired by German defence contractor Hensoldt in 2017, is based in Enfield in Essex, where its corporate data centre has been located for the past five years.
The data centre and its IT equipment host all of the company's critical applications including its ERP system, development servers and data storage systems. As a Ministry of Defence (MOD) subcontractor, the company has a vital requirement for both physical and cyber security, in addition to strict access control. Business continuity and disaster recovery are also important aspects of the facility’s day-to-day operation.
“If the data centre fails, the company essentially stops trading,” said Ian Mowbray, Infrastructure Services Manager at Kelvin Hughes. “Reliability – in terms of the data centre hardware, the IT equipment and the services supplied by Kelvin Hughes – is an issue on which the company cannot afford to compromise.”
From Inception to Delivery
Kelvin Hughes’ original data centre, which was designed and built with the help of Schneider Electric, consists of 12 racks containing a mixture of physical and virtual servers and data storage arrays. However, only eight of the racks are in use today by Kelvin Hughes, with the additional four populated by another company which is co-located on the business premises.
There is scope for considerable expansion in the data centre, which would bring with it a need for additional monitoring and management of the facility to ensure that it continues to operate effectively.
Mowbray's team comprises six people who are responsible not just for the IT equipment and helpdesk, but also for the entire building management including environmental control, access control and maintaining the water supply for the data centre cooling equipment.
In recent months, Kelvin Hughes has deployed Schneider Electric’s StruxureOn service, to help maintain its data centre operations whilst providing remote management and monitoring. Ian Mowbray explains that when built, the facility featured a contained hot aisle together with close coupled cooling equipment.
“Schneider Electric’s InfraStruxure with Hot Aisle Containment Solution (HACS) has greatly helped the efficiency and effectiveness of our data centre cooling system,” says Mowbray. “A number of the servers have been virtualized making the requirement for physical servers unpredictable. The HACS enables a high density load and the flexibility to reliably accommodate, power and cool an additional number of IT devices.”
“We used to have a monitoring server in the computer room which looked after all of the infrastructure and sent us email alerts if anything was amiss,” said Ian. “But during a recent routine upgrade of the batteries in our UPS systems, we learned about StruxureOn and that we could deploy it as part of our existing maintenance agreement with Schneider Electric.”
“Anything that provides additional insights and proactive monitoring or management of our facilities is of great interest because we’re a small team and it’s essential to know what’s happening in the data centre on a daily basis.”
StruxureOn enables data-centre managers and operators to both view and control all their equipment from a single central console, more commonly known as the “single pane of glass”.
A great benefit of the new system, according to Mowbray, is that alerts for any issues that reach a certain threshold of concern can be sent directly to a duty manager via the mobile phone application. “It means we can continue to monitor the computer room remotely at weekends and, should we encounter any issues, they are delivered directly to my smartphone.”
“The app also makes it much easier to communicate with Schneider Electric in cases where external support is required,” he continued. “As a customer, we can log a support call directly with the maintenance team via the app, meaning that we don’t need to phone the helpdesk any more. We have previously encountered cases where a power supply has malfunctioned over a weekend; we’ve logged a call and the Schneider engineer has been on site first thing Monday morning with a replacement part.”
Resiliency is key
The data centre is designed to be ultra-resilient and as such has a lot of redundancy built into the UPS infrastructure. “Due to the fact that we’re running at around 50% of total capacity, we can get about two and a quarter hours autonomy time from the batteries at our current load. In addition, we can continue to run the water pumps and In-Row cooling units, because we have a 4000 litre water buffer located outside, which gives us substantial cooling redundancy to continue to cool the room even if we experience a power loss.”
By utilising Schneider Electric's PowerChute software, which selectively shuts down various servers in the case of a prolonged power outage, the autonomy time of the batteries can be increased even further. “We can actually squeeze about three and a half hours out of the batteries before everything stops,” says Mowbray. “In reality, mains power would typically be restored long before then. The longest power cut we have had in five years only lasted for 20 minutes.”
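The arithmetic behind Mowbray's figures can be sketched with a simple back-of-the-envelope model. This is not Schneider Electric's sizing method, and the 20kW load figure below is an assumption for illustration only; the model just treats usable battery energy as roughly constant, so autonomy scales inversely with whatever load remains after non-essential servers are shut down.

```python
# Hedged, simplified UPS autonomy model: usable battery energy is assumed
# fixed, so shedding load stretches runtime proportionally. Real battery
# behaviour is non-linear, so treat this as a rough estimate only.

def runtime_after_shedding(current_runtime_h, load_kw, shed_kw):
    """Estimate UPS autonomy (hours) once `shed_kw` of load is switched off."""
    energy_kwh = current_runtime_h * load_kw  # implied usable battery energy
    remaining_kw = load_kw - shed_kw
    if remaining_kw <= 0:
        raise ValueError("cannot shed the entire load")
    return energy_kwh / remaining_kw

# With the article's figures and a hypothetical 20 kW load: shedding about
# 7 kW of non-essential servers stretches 2.25 h of autonomy towards 3.5 h.
extended = runtime_after_shedding(2.25, 20.0, 7.1)
```

Under these assumptions `extended` comes out at roughly 3.5 hours, consistent with the figure Mowbray quotes for a staged PowerChute shutdown.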
Mowbray says that the security of the data centre is adequately provided by the Schneider Electric Symmetra UPS systems, which remove the need for a backup generator. “We considered that option,” he says, “but given the level of risk it wasn't worth it. It's more important for us to shut down the kit safely in a controlled way than to have a generator, which may or may not even start!”
Data Centre Lifecycle Services
The service provided by Schneider Electric is essential to the effective management of the data centre's facilities. “The StruxureOn monitoring service has enabled us to extend our virtual team at literally no extra cost,” says Mowbray. “In order to deliver detailed insights and reporting at a similar level we’d need a far bigger team in place to monitor the facility 24/7.”
“From the point of view of engineering, product maintenance and lifecycle support, Schneider Electric continue to provide an excellent service to Kelvin Hughes,” he continued. “Any time we’ve had an issue with a piece of infrastructure equipment, they’ve always sent a replacement with an engineer in a timely manner. I cannot fault their service team.”
DW talks data centre power and performance issues with Leo Craig, General Manager of Riello UPS. Stopping short of the kitchen sink, Leo covers flexibility, scalability and higher efficiency, new technologies and ideas coming into the industry and the importance of great customer service.
1. Recapping on 2017, Riello UPS had some major product launches. How have these been received by the market to date?
The past year has been a particularly exciting one for us all at Riello UPS. We’ve upgraded several of our already-popular products as well as introducing a number of new models that have been extremely well-received already by the data centre sector.
In addition to our range of UPS products, we’ve also rolled out significant enhancements to our communications software and network cards, so our customers have easier access to all the business-critical information they require.
And we’ve been working closely with our customers in the data centre sector and other target industries to constantly improve our aftercare and maintenance support. We’re never a company that rests on our laurels and are always looking at how we can become even better.
2. In more detail, can you remind us about the NextEnergy product and give us some idea of how this has been received by customers?
Certainly, the NextEnergy UPS range has been extremely well-received by data centre managers looking to balance the need for efficiency against keeping their running costs down. The three-phase UPS’ transformerless design is capable of delivering efficiency of up to 97% in a compact yet easily maintainable footprint.
It can be connected in parallel with up to 8 units to either scale-up capacity or add redundancy and comes with Efficiency Control System functionality that optimises performance depending on the power absorbed by the load. Not only does NextEnergy minimise disruptions to the mains, it is also extremely effective at reducing harmonics generated by non-linear loads to provide a cleaner power supply.
On a similar theme, we’ve also recently launched our new Sentinel Dual, a UPS designed for maximum reliability and scalability. Available in 5-10kVA models, the Sentinel Dual can be installed as either a floor-standing tower or in a rack, making it a great option in environments where space is limited, while up to three systems can be operated in parallel to deliver increased resilience.
3. And Riello added features to the Multi Power UPS product line – have your customers found these beneficial?
Since its launch our Multi Power (MPW) has proved to be particularly popular with data centre customers as it helps them tackle the key issues of scalability and UPS efficiency.
Because of its modular approach, the system is easy to upscale if and when more power is required simply by adding extra power modules and batteries. On the flip-side, to avoid their UPS being under-utilised, data centres can also easily turn modules off to improve efficiency. It’s the perfect solution for the ever-evolving data centre environment.
The MPW range has been enhanced firstly by the introduction of our Multi Power Combo, which offers 126kVA of redundant power and batteries in a single rack, making it the perfect choice for restricted spaces that require a small footprint yet maximum power density. We’ve also recently introduced a new 25kVA module option for customers that require a lower power rating than the standard 42kVA MPW.
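The scale-up logic described here reduces to simple arithmetic. The module ratings below come from the article, but the N+1 sizing rule is the generic industry convention rather than a documented Riello formula, so treat this as an illustrative sketch:

```python
# Generic N+X modular UPS sizing sketch (not a vendor formula): enough
# modules to carry the load, plus spares for redundancy.
import math

def modules_needed(load_kva, module_kva, redundant_modules=1):
    """Modules required to carry `load_kva`, plus X redundant spares (N+X)."""
    n = math.ceil(load_kva / module_kva)
    return n + redundant_modules

# A hypothetical 100 kVA load on 42 kVA modules: 3 carry it, 4 gives N+1...
n1 = modules_needed(100, 42)
# ...while the 25 kVA module suits lower ratings in finer increments.
n2 = modules_needed(60, 25)
```

The same function shows the efficiency point from the answer above: a right-sized module count keeps each module better loaded than one oversized monolithic unit would be on day one.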
4. Not forgetting software launches and developments, which included the PowerShield3 launch and a new iteration of the NetMan product – what new capabilities have these introduced?
Our communication software PowerShield³ has been completely overhauled over recent months to become even more user-friendly. An upgraded user interface clearly displays the UPS’ current operating state giving customers crucial details such as input voltage, applied load, and battery charge at the touch of a button.
PowerShield³ can interact seamlessly with the updated version of our communications card the NetMan 204. This latest model can be used on all ranges of equipment within the Riello UPS family that have a communications slot, including the modular MPW and the latest NextEnergy and Sentinel Dual, and it encourages greater interaction between the unit and the end-user. The card’s latest firmware includes an improved setup wizard that adds the option for extra environmental sensors that can be monitored in addition to the UPS’ general performance.
5. Riello UPS seems to be focusing on flexibility, scalability and higher efficiency when it comes to UPS. Can you talk us through these objectives?
What with smart devices and the growth of the IoT, data centres are facing huge pressures to keep up with the increasing demand for processing capacity in the short-term, but that certainly doesn’t make it any easier for them to predict the power requirements of the future.
That’s exactly why flexibility, scalability, and efficiency need to go hand-in-hand. Modular UPS systems are popular with data centres precisely because they provide the flexibility to scale-up capacity as and when the need arises, while at the same time mitigating the risk of installing an oversized UPS at the very start of the project, so balancing both the initial investment and the total cost of the project.
And efficiency will always be on the agenda as, let’s face it, no data centre manager wants an inefficient system that will require more energy, waste money and ultimately impact their bottom line. The challenge for us as a UPS manufacturer is to develop as efficient a product as possible across all load levels, and this is where modular is extremely effective, as it is efficient at low loads as well as higher ones.
One concern we do have is that the current load levels required to qualify for tax relief through the Carbon Trust’s Energy Technology List (ETL) simply encourage data centres to run their UPS inefficiently to claim the rebate. While the scheme is supposed to encourage efficiency, in practice it’s promoting the opposite, particularly the 100% load-level marker: no UPS should ever be run continuously at 100% load. Not only is it inefficient, it’s dangerous.
It’s the equivalent of being told you are most efficient driving a car at 120mph, when the speed limit is 70mph and in reality most people only average around 45mph.
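The claim that modular stays efficient at partial load can be illustrated with a toy model. The efficiency curve values below are invented for illustration, not measured Riello figures: the trick is that a modular system can deactivate spare modules so the active ones run nearer their efficiency sweet spot.

```python
import math

# Hypothetical efficiency at a given fraction of rated load (illustrative only).
CURVE = {0.25: 0.93, 0.50: 0.95, 0.75: 0.96, 1.00: 0.94}

def monolithic_efficiency(load_fraction):
    # Nearest defined point on the curve, for simplicity.
    return CURVE[min(CURVE, key=lambda f: abs(f - load_fraction))]

def modular_efficiency(load_kw, module_kw, modules):
    """Deactivate spare modules so active ones run near a ~75% sweet spot."""
    active = max(1, math.ceil(load_kw / (module_kw * 0.75)))
    active = min(active, modules)
    per_module = load_kw / (active * module_kw)
    return monolithic_efficiency(per_module)

# A 40 kW load on a 160 kW monolithic unit sits at 25% load (0.93 here),
# while a 4 x 40 kW modular frame runs two modules at 50% each (0.95 here).
```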
6. And the company prides itself on the quality of its maintenance and after sales service approach?
It’s imperative our customers not only receive a premium product, but premium aftercare and customer service too. We understand UPS systems are hugely important pieces of equipment, both in terms of installation and running costs, as well as the vital role they play in providing the resilient and reliable power supply data centres require on a day-to-day basis.
Once a UPS has been installed, it’s the ongoing monitoring, maintenance, and servicing that will ensure it keeps operating at peak performance. UPSs are complex by nature, so over time breakdowns and failures are inevitable, and that’s where maintenance you can truly rely on comes into its own. It’s the all-important difference between your data centre suffering prolonged and damaging downtime or getting back up and running as quickly as possible.
Our emergency response and fix times are guaranteed by clear and transparent agreements, and if we don’t stick to our word, we’re penalised for it. Technical support is based here in the UK so we’re just a phone-call away round-the-clock. We hold significant volumes of stock not only at our HQ but at warehouses across the country, meaning replacement parts can be delivered on-site next day or in many cases a matter of hours, wherever a customer is based.
And customers need to know that the engineers installing, servicing or repairing their UPS system know exactly what they’re doing. That’s why we introduced a Certified Engineer Programme covering both our in-house technicians and engineers from authorised UPS resellers. They’re fully-trained and competent to carry out the job, and they can prove it – all certified engineers have a unique ID which customers can check against a website.
It’s that sort of honesty and transparency which we pride ourselves on and it ensures customers have the confidence that both our products and our service are top notch.
7. Riello UPS offers UPS management via the Cloud?
We certainly do. Midway through last year we launched our dedicated Riello Connect service which enables both our maintenance team and customers to monitor their UPS performance remotely through PC, laptop, or smart phone.
It’s proved an extremely valuable service providing that extra reassurance to a customer’s maintenance plan, as it means potential problems can be identified and nipped in the bud before they have chance to develop into a more serious issue. It also encourages better, more accurate ongoing reporting.
8. Any thoughts on the move to Lithium-Ion batteries?
From our point of view, it has been a very positive development for the industry, as they offer several advantages compared to the more traditional sealed lead acid (SLA) batteries. Li-Ion batteries take up half the space of an SLA without compromising on performance, as they have a much higher power density.
Not only do Li-Ion versions charge quicker, around half an hour against up to eight hours, but they can be recharged significantly more times than an SLA, 10,000 cycles versus 500, which is obviously a huge benefit for the UPS sector.
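Using the cycle counts quoted above, the replacement arithmetic works out like this. The 5,000-cycle service-life figure is a hypothetical example, not a Riello specification.

```python
def replacements_needed(cycles_demanded, cycles_per_battery):
    """Battery sets consumed over a given number of discharge/recharge cycles."""
    return -(-cycles_demanded // cycles_per_battery)  # ceiling division

# Over a hypothetical 5,000 cycles of service life, using the figures above:
li_ion_sets = replacements_needed(5000, 10000)  # Li-Ion: 1 set
sla_sets = replacements_needed(5000, 500)       # SLA: 10 sets
```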
And whereas SLA batteries perform best at temperatures of around 20°C, Lithium-Ion can operate effectively at temperatures up to 40°C, meaning there’s also less requirement for costly and energy-consuming air conditioning to keep the unit and batteries cool.
Even one of the perceived drawbacks of Li-Ion batteries – namely their initial cost – is far outweighed by the fact that they come with built-in monitoring features. This means there’s no need for the data centre or facility to install a separate battery monitoring system.
9. Foam batteries are being talked about as the 'next big thing'. What do they offer and when can we expect to see them in the real world?
They could offer a huge leap forward in battery capabilities and be a real game-changer across industry. 3D batteries have five times the density of traditional 2D models, so just think about what that means: longer life; quicker to charge; smaller and more lightweight; cheaper to manufacture and less toxic than standard Li-Ion. On the face of it, what they promise sounds almost too good to be true!
It’s still very much an emerging technology, so in the short-term foam batteries are probably only likely to be practical for smaller items like watches, wearables, or perhaps smartphones. Then I suppose it’ll just be a matter of time before it becomes financially viable to upscale the concept to cars and potentially wider industry use.
10. We can’t not mention the massive interest in AI and IoT right now. Firstly, how can Riello UPS leverage these technologies when it comes to designing new products and/or improving the company’s existing product portfolio?
AI and IoT obviously promise fantastic technological advances, and Riello UPS is very much leading the way in terms of embracing ‘Industry 4.0’ and ‘smart factory’ principles, such as modularity, smart grids, and two-way connectivity.
However, we also need to be mindful that deep down people like to deal with people and if you automate too much, there’s always the risk that you impact negatively on the overall customer experience.
11. Secondly, the AI and IoT explosion (with the attendant massive increase in data generation, processing, storing etc.) is placing new demands on data centre infrastructure. Does Riello UPS see this impacting on the new products it will be developing?
Even though computer technology and data centre design are constantly evolving, the fundamental requirements of a modern data centre date back to the IBM mainframe systems of decades gone by. Data centres still require a clean, reliable, resilient power supply; the only aspect that changes is how much power they need.
Our range of UPS systems covers everything from 400VA to 6.4MVA, so whatever the IT industry needs, we’ll be able to meet those requirements.
12. Any other new technologies and/or ideas that will be having an impact on the UPS market in the near future?
It will require a radical shift in mindset for many in our industry, but we genuinely hope that the sector will soon start utilising the untapped potential of UPS batteries to store energy and feed it back into the grid. Adopting such a forward-thinking approach to demand response is one way to prevent a potential capacity crisis, what with demands for electricity estimated to double over coming decades.
Earlier we were speaking about the advantages of the move to Li-Ion batteries and this is another case in point – their use opens up the possibility for UPS batteries to become a valuable source of renewable energy, helping data centres reduce their environmental impact while at the same time offering them an invaluable extra revenue stream when the power stored is sold back to the National Grid.
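The trade-off between demand response and resilience can be sketched numerically: a UPS can only export what remains after its protected back-up runtime is reserved. All figures below are hypothetical examples.

```python
def exportable_kwh(capacity_kwh, load_kw, reserve_minutes):
    """Energy available for grid export after keeping a back-up runtime reserve."""
    reserve_kwh = load_kw * reserve_minutes / 60
    return max(0.0, capacity_kwh - reserve_kwh)

# A hypothetical 200 kWh battery backing a 100 kW load, keeping 30 minutes
# of protected runtime, leaves 200 - 50 = 150 kWh available for demand response.
available = exportable_kwh(200, 100, 30)
```

This is exactly why resilience remains the number one consideration: the reserve always comes off the top before anything is sold back to the grid.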
Of course, this is easier said than done. With mission critical businesses such as data centres, UPS resilience is and will always be the number one criterion. Even something as seemingly straightforward as harnessing power from a back-up generator hasn’t been particularly widely adopted by the industry yet, so convincing them to go several steps beyond even that and consider UPS energy storage is a big ask. But it’s up to us in the UPS industry to keep making the case, strongly stressing both the undoubted environmental and economic benefits.
13. For example, Riello UPS’ recent partnership with Audi Sport seems designed to emphasise the company’s interest in sustainable, green energy solutions?
Of course. Riello UPS is a world leader in our ongoing commitment to developing greener energy and helping to provide reliable power for a sustainable world, and teaming up with Audi Sport matches those dual ambitions. Through their Abt Schaeffler team in the FIA Formula E Championships for electric cars, Audi Sport is showcasing electricity, technology, and innovation on a global scale.
Everyone in the office is keeping a close eye on how the team gets on this season and we’re all hoping that drivers Lucas Di Grassi and Daniel Abt can get to the top of the podium wearing Riello UPS colours. It’d be particularly fitting if they could get a result at the Italian round of the championship in April, so we’ll be keeping our fingers crossed!
14. Riello UPS seems to be thriving as a company in what's a relatively static market place right now?
While the overall industry might be somewhat static, we’ve actually come off the back of a record-breaking year ourselves. As we’ve touched on earlier, our already strong range of products and services has been complemented by several new product launches and upgrades. Combine that with our unrivalled maintenance plans, exceptional in-house expert technical support and aftercare, and our significant stock of readily available spare and replacement parts, and it all adds up to a tailored UPS experience that customers will find hard to beat.
15. A large part of this success is down to Riello UPS’ enlightened approach to customers service (i.e. no voicemail in the company)?
Deep down people like to deal with people. How many times have you called a company only to have to press endless buttons and sit through automated messages before you actually get to speak to a real person? Often you’re left on hold for so long that you end up frustrated and never want to deal with that company again.
This is something that we never want to happen at Riello UPS. When someone calls our office, they get to speak to a person as soon as they’ve selected which department they want. We have two simple rules that must be followed at all times. Rule one: no voicemails. Rule two: answer the phone. We enjoy speaking to our customers and providing that personal service.
Another recent development that we’re really proud of is the introduction of our Riello Hub. It’s a one-stop-shop available to all customers, resellers, consultants, and certified UPS engineers. Through the portal they can access tailored content and updates such as data sheets, drawings, order progress, firmware updates, training videos, and more. Basically everything they’ll need to make their lives easier, all available in one handy place.
16. To the extent that the customer is central to your activities, no matter what might save Riello UPS some money?
There are too many get-out clauses and ‘ifs and buts’ in UPS contracts today, particularly maintenance agreements. In our industry there are far too many instances where the contracts actually benefit the supplier more than they do the customer. Unrealistic response times, a lack of clarity over what’s covered and what isn’t, customers tied into support they might not actually need due to hugely restrictive terms and conditions with 90-day notice periods. We’ve heard some real horror stories over the years and it’s about time it stopped.
It’s unethical, it’s downright wrong, and it’s something we’re taking a stand against. We truly believe our maintenance plans are quickly becoming the ‘gold standard’ for the sector, giving customers a clear choice of packages with coverage and response times tailored to their own unique needs.
Our service level agreements (SLAs) aren’t one size fits all, they are fair, transparent, and clear – they have a real meaning. Our response and fix times are rapid but realistic. Some providers promise a four-hour response, but what does that ‘response’ actually entail? Is it simply acknowledgement of your issue, or is it an engineer on-site working on the fault?
That’s why our SLAs guarantee emergency response and even fix times depending on the support package chosen, and if we don’t live up to our promise, we’ll face a financial penalty.
There’s no auto-renewal in our contracts either, customers aren’t locked in for another year, they’re given the choice of whether they wish to continue or not. It’s all about giving control to the customer, not the supplier – that’s why our contracts will always benefit the customer before they benefit Riello UPS.
17. And the longevity of the Riello UPS staff is another major advantage?
Absolutely, there are several examples of people in our team who’ve been part of our family for 10, 20, even 30 plus years. I’ve been with Riello UPS for more than 20 years now, but even that doesn’t rank me anywhere near the top of our longest-serving league table!
Having such long-serving and dedicated staff obviously promotes stability and continuity within the business, but it provides so much more than that too. There are people here who’ve pretty much dedicated their lives to working in the UPS industry. That means we benefit from an incredible wealth of knowledge across all departments, from our certified engineers and technical staff, through to our sales and support teams, who all really know the industry inside out.
Another advantage of this longevity is that we’re able to build trusting, long-lasting relationships with many of our customers. They respect the expertise we have, and we’re able to be completely honest and upfront with them too about the best UPS products for their particular circumstances.
18. Are you able to share a customer success story with us, demonstrating how Riello UPS’ technology and service have made a difference to an end user?
In an industry such as ours, there’s often a tendency to overlook a lot of fantastic work – if a UPS is installed and working efficiently then that’s great, but it’s also expected. It’s only when there’s a failure or a fault that it becomes an actual issue. ‘Success’ from our perspective is where we manage to quickly turn those problems around or where we truly go beyond a customer’s expectations.
One recent example would be our work with Häfele UK, the international furniture fittings manufacturer. Several short-notice power outages had caused them serious downtime in the past, with a diesel-powered backup generator their main source of power protection.
The customer required a bespoke UPS solution to minimise the risk of such failures happening again, but due to the potential risk to business continuity, mains work couldn’t be carried out during the week. The entire installation had to take place over a weekend, in as short a period of time as possible.
Thanks to the huge levels of professionalism and expertise of our install team, the entire project was up and running in less than 48 hours having started on the Friday night and being successfully completed by early Sunday morning. The result? One very happy customer that has the power protection to keep their business up and running.
19. Finally, what can we expect from Riello UPS during 2018?
The coming few months are gearing up to be typically busy. Our technical and R&D teams are constantly working on upgrades to our existing range, as well as introducing some fantastic new products.
We can’t say too much yet but we’re working on some extremely exciting projects which we believe will bring huge benefits to the data centre and IT industries, as well as ‘Industry 4.0’ and smart manufacturing – it’s very much a case of watch this space!
We’ve also got a packed schedule of events coming up throughout the country, where it’s great for our team to get out and meet both existing and new customers.
20. Any other comments?
Any readers attending the upcoming Data Centre World show at ExCeL London should pop over to our stand, D420, and say hello. We’re announcing a very special new service at the show – we can’t give away all the details but it’s a ground-breaking offer for our industry, one with the potential to provide a much-needed shock to the power protection sector!
We’re also really looking forward to the great programme of talks and workshops being staged in the Facilities and Critical Equipment Theatre, which we are the main sponsor of.
Did you know that a basketball player’s activities can be tracked as many as 25 times per minute, from the number of times the player has touched the ball to how far he or she has travelled during the game? These days, everything is quantifiable with the right tool. Everything around us can or is being tracked, including every action within an organisation.
By Sridhar Iyenghar, Head of Europe, Zoho Corporation.
Modern working environments demand a certain level of analytical input that can produce key insights into how an organisation functions. Data is the key here, but to best utilise it, it’s important to first establish what’s relevant and useful to an individual business, and to find the time and resources to analyse and leverage this information.
The workplace of the future is continually being redefined by digital disruption. Technology is enabling businesses and their employees to be more productive than ever before. Here, we take a look at some of the key trends reshaping our working environments.
Analytics and Machine Learning
Data analytics and machine learning can support better decision making and enable an organisation to make accurate and informed predictions.
Take ‘people analytics’, for example. Arguably the future of business management, people analytics takes into account a company’s wealth of existing employee data — such as salary history, tenure and performance information — found in different systems and links that data with the organisation’s business goals to give meaningful, actionable insights.
Analysing data regarding a workforce can help managerial staff in several ways, from staff retention – by enabling firms to analyse why employees leave and identify and promote the most appropriate team members – to recruitment, as organisations can harness the power of analytics to look deeper than resumes and CVs and thereby recruit employees who will stay longer and contribute more to the business.
People analytics enables companies to make useful predictions such as insights into which teams are performing well and who needs additional training. By further analysing data, leadership teams can identify problems in the workforce before they arise, allowing for faster and more effective solutions to be found and put into practice.
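A people-analytics prediction of this kind can be caricatured as a simple scoring rule over employee records. Everything here is hypothetical: real systems would learn the fields and weights from an organisation’s own data rather than hard-coding them.

```python
# Toy retention-risk scoring sketch. Field names and weights are invented.
def retention_risk(employee):
    score = 0
    if employee["tenure_years"] < 1:
        score += 2  # new joiners churn more often
    if employee["months_since_raise"] > 18:
        score += 2  # long-unrewarded staff look elsewhere
    if employee["performance"] >= 4 and employee["months_since_promotion"] > 24:
        score += 3  # strong performers stuck in place are a flight risk
    return "high" if score >= 4 else "low"
```

The point is the shape of the analysis, linking scattered HR data to a single actionable signal, rather than any particular rule.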
Analytics can also help organisations to anticipate times of peak demand, make systems more efficient and reduce waste while machine learning can help to identify patterns based on past behaviour or outcomes to alter business models as needed. Analytics can also be used for sentiment analysis of campaigns and for understanding purchasing patterns of customers.
What’s more, these capabilities are now open to businesses of all sizes. Data management and analytics was once associated with large enterprises; but now with the explosion of data, even small to medium sized firms hold a large amount of data that they can use to their benefit.
Automation
Automating various parts of a business can help to reduce inefficiencies and improve employee productivity, which can make an organisation more agile. Customer support, operations, IT support, sales and marketing are all areas where automation can be applied to great success. As an example, bots can be used to automate and handle repetitive administrative tasks, freeing up employees for more important work that will have strategic value to the business.
Collaboration and communication
The trend towards collaborative working goes beyond enabling mobile employees to work while away from physical workspaces. Now, employees expect to have access to virtual workspaces that support real-time communication with colleagues around the globe at a moment’s notice. This is driving the need for businesses to support not just video-conferencing but also secure collaborative document co-creation and editing.
Enterprise data often resides in different applications used by sales, marketing, finance, HR, and other departments. Organisations can generate deeper insights by connecting these systems via workflow engines, open APIs or analytical systems which can provide answers to enhancing growth. These connectors can help these disparate, heterogenous applications to work together to meet real life business requirements.
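The kind of linkage a workflow engine or open API provides can be reduced to a join on a shared key between two departmental systems. The datasets and field names below are invented purely for illustration.

```python
# Toy "connector": enrich CRM deals with the owner's department from an HR system.
def join_on_email(crm_records, hr_records):
    hr_by_email = {r["email"]: r for r in hr_records}
    merged = []
    for deal in crm_records:
        hr = hr_by_email.get(deal["owner_email"], {})
        merged.append({**deal, "department": hr.get("department", "unknown")})
    return merged

crm = [{"owner_email": "a@example.com", "deal": "D1"}]
hr = [{"email": "a@example.com", "department": "Sales"}]
```

In practice the records would arrive via each application’s API rather than as in-memory lists, but the join logic is the same.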
Data protection and privacy
In some cases, the digital era’s many benefits for businesses have come at the cost of data protection and privacy for users. In response, tighter incoming data protection and compliance laws will change how user data is collected, stored and processed. The European Union’s General Data Protection Regulation (GDPR), which is scheduled to go into effect in May 2018, will be a much needed step in forcing companies to seek permission from users before serving advertisements, for instance, amongst other restrictions. In recent times, some tech companies effectively established the business model that if you’re not paying for a product, then you are the product, so it comes as no surprise that governments are having to step in to protect consumer privacy. Organisations must be acutely aware of these issues as they progress their digital transformation initiatives.
From increased automation, analytics and machine learning to inter-connected apps, the technologies we talked about above are rapidly moving into the business arena, and will forever change the workspaces of tomorrow.
The cloud, and the need to adopt a cloud first policy, is obviously a hot topic in the IT world. The cloud promises seamless updates to newer versions of existing services, and many are positioning it as the most secure way to future-proof an organisation, particularly for small businesses.
By Dirk Paessler, CEO of Paessler, makers of PRTG Network Monitor, based in Nuremberg, Germany.
Over the past few years, SMEs have come to represent the lifeblood of the UK economy thanks to their inherent agility and steadily growing numbers. Small businesses accounted for 99.3% of all private sector businesses at the start of 2016, and 99.9% were small or medium-sized, according to data from the Federation of Small Businesses. Because they are ‘challenger brands’ by nature, they also tend to lead the way when it comes to innovative trends, setting the tone for established companies to follow.
Paessler conducted a survey into SME attitudes towards the cloud in early 2017, which painted a clear and favourable picture for small businesses; 70% of SMEs are already using cloud technology or are set to do so in the near future. SMEs are thus “early adopters” once again, not only driving the economy forward, but also in leading the way in innovation by taking the plunge first, and in greater numbers.
According to the data, the biggest barriers to cloud adoption are fears over data security (86% believe it to be an obstacle), the cost of updating entire systems (75%), and a lack of internal knowledge (60%). For SMEs in particular, these obstacles are no mean feat: the smaller the enterprise, the more challenging funding issues become. Fees that could easily be absorbed by bigger firms require careful consideration from SMEs.
But businesses are already addressing the budget concern by favouring hybrid cloud set-ups; that is, an IT structure which mixes workloads and services in the cloud – ‘public cloud’ – with on-premise networks – ‘private cloud’. What’s more, cloud providers like Amazon and Microsoft grant access to round-the-clock teams devoted to your activity, bolstering security. It also frees you from the time constraints of your IT team, particularly important if they don’t tend to operate outside of working hours.
The trend to move applications and data to the cloud is not just smart marketing from Amazon, Microsoft and Google; it presents clear advantages to all businesses: cost, agility, manageability and security.
Here’s more insight on why SMEs are leading the way when it comes to cloud uptake:
It saves time and money
To use a personal example, for several years Paessler has been using cloud-based CDNs to deliver their website content and downloads of their IT monitoring product, which amounts to thousands of trial and update downloads every month. Using the cloud is cheaper than relying on a data centre, faster for customers, and includes free features that would otherwise require significant resources to create and maintain.
It increases efficiency
With the cloud, businesses can address all of their IT needs remotely, and no longer depend on existing legacy infrastructures or applications. This can effectively curb costs for office space by allowing employees to work from home, and lead to a more flexible and nimble workforce.
It allows you to upscale and diversify
Changing to a ‘cloud first’ strategy gives SMEs an opportunity to reassess their IT strategies, as they decide what is worth keeping. This is particularly crucial when, for instance, upscaling quickly. Cloud computing and BYOD (‘Bring Your Own Device’) are undoubtedly here to stay, and this also means business practices are set to shift towards more global, diverse and flexible working environments.
Ultimately, the cloud will take over most of what we experience as “Internet” and “Networks”. It has already revolutionised the way IT is procured, and therefore the way business is conducted.
Being on the frontline is key to securing a business’s longevity and ensuring greater productivity in the new global economy. With that in mind, it’s no wonder SMEs are at the forefront of this trend, and it wouldn’t be surprising to see more and more small businesses move their services to the cloud in large numbers going forward.
What implications does that have for the data centre?
By Paul McEvatt, Senior Manager, Cyber Security Strategy, Fujitsu.
You know you’re facing an area of grave concern when the experts at the World Economic Forum signpost it as one of its top three most probable global risks of 2018. The threat of a cyber-attack has been put up there with extreme weather events and natural disasters as one of the events most likely to cause problems on a worldwide scale this year.
In what will be a decisive year in the eternal battle between the cyber-security industry and malignant actors, what will be the key trends? And what implications will they have for organisations and the data centre?
In the crosshairs
Cyber security has stormed its way onto the political agenda recently, as allegations of election tampering, breaches of government agencies and departments, and industrial sabotage dominated the headlines. 2018 will see investigations into the US rumble on and potentially damaging evidence emerge.
With the infamous compromise of the Democratic National Committee (DNC) being attributed to Russia by Crowdstrike, cyber-attacks have entered the political lexicon as a method of disruption and subversion.
However, malignant political actors need not only attack the institutional bodies that make up the nation state to achieve these aims. As these organisations wise up to the threat of cyber-attacks, so it will become likely that attacks against commercial entities to support political objectives will increase.
As the key infrastructure that underpins all of our digital lives, the data centre, and moreover, the data contained within the data centre, offers an appetising target to any hacker looking to cause havoc on a wide scale. All sectors should expect to face a continued threat of cyber-attack this year; it is now more critical than ever for businesses to have a robust incident response plan and react accordingly.
Back to basics
Last year’s Petya and WannaCry outbreaks exploited an SMB vulnerability for propagation that had been known about for months before the attacks. All this vulnerability required was patching.
The seemingly basic nature of this incursion belies the fact that while in a perfect world we would patch whenever necessary, business reality dictates that this isn’t always desirable. For example, patching a critical vulnerability in a financial system on the day before the end of the financial year might not be an attractive prospect, for fear of breaking the system.
Patching data centre servers is a complex challenge, with those hosting critical platforms often presenting a wide range of difficulties for senior management, particularly where downtime and SLAs take precedence.
Fortunately, using Cyber Threat Intelligence (CTI) as an early warning mechanism can provide guidance for customers on which vulnerabilities are most liable to exploitation and should be prioritised. CTI can be defined in many different ways, and at its simplest it can refer to a threat feed; however, to encompass the full impact of cyber-attacks in this age of savvy attackers, it should express the severity of a vulnerability not only as a technical risk, but also in financial, business, and indeed human terms.
Effective CTI cuts through the complexity of data centre patch management by providing strategic direction, indicating where basic defences are most needed. An example of effective CTI in a back-to-basics approach is the provision of a threat advisory addressing the vulnerability months before ransomware variants began to propagate. Ransomware authors will continue to be innovative and CISOs will worry about the next global strain; however, understanding the impact of a vulnerability and the need to patch it can be equally important in protecting an organisation’s network and data.
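The prioritisation logic the article describes, weighing technical risk against exploitation activity and business exposure, can be sketched as a simple scoring function. The weights and fields below are hypothetical, not any vendor's scoring scheme.

```python
# Toy CTI-driven patch prioritisation. Weights are illustrative only.
def priority(vuln):
    score = vuln["cvss"]           # technical severity, 0-10
    if vuln["exploited_in_wild"]:
        score += 5                 # intelligence says it's actively used
    if vuln["asset_critical"]:
        score += 3                 # business impact of the affected host
    return score

def patch_order(vulns):
    return sorted(vulns, key=priority, reverse=True)

vulns = [
    {"id": "V1", "cvss": 9.8, "exploited_in_wild": False, "asset_critical": False},
    {"id": "V2", "cvss": 7.0, "exploited_in_wild": True, "asset_critical": True},
]
# V2, despite its lower CVSS score, jumps the queue because it is being
# exploited in the wild on a critical asset.
```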
Automate and Innovate
Data volumes are not only growing at a relentless pace, but the devices and nodes through which this data is collected and processed are increasingly becoming connected. The growth of the Internet of Things will accelerate this trend. This expanded internet is blurring the lines between network perimeters and creating more data to manage, providing more angles for cyber-attack.
With more territory to cover and a more sophisticated foe to counter, security monitoring needs to innovate in order to keep up with the range of attacks facing modern day businesses. Data centre Security Operations Centres (SOCs) need to be proactive in creating an advanced security monitoring ecosystem, and traditional technologies that use a manual approach are simply inadequate.
With an array of tools at their disposal, security professionals can look to combine automated monitoring services and the computational power of advanced analytics with their own capacity for creativity and lateral thinking.
This blended approach will come to the fore as future SOCs use artificial intelligence and machine learning to automate elements of security monitoring, freeing up valuable time for analysts to apply their skills to the most high-value problems. Moreover, these technologies can augment humans’ analytical capabilities by providing them with a superior overview of the threat landscape, as incidents can be automatically enriched.
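Automatic enrichment of an incident can be sketched very simply: before an analyst ever sees an alert, its indicators are looked up against threat intelligence and an escalation flag is set. The feed contents and field names here are hypothetical (the IP is from the documentation-only 203.0.113.0/24 range).

```python
# Hypothetical in-memory threat-intelligence feed.
THREAT_FEED = {
    "203.0.113.7": {"actor": "known-botnet", "confidence": "high"},
}

def enrich(alert):
    """Attach threat intel to an alert and flag it for escalation if matched."""
    intel = THREAT_FEED.get(alert["src_ip"])
    enriched = dict(alert)
    enriched["intel"] = intel or {"actor": "unknown", "confidence": "none"}
    enriched["escalate"] = bool(intel)
    return enriched
```

In a real SOC the feed would be a live service and the lookup one of many enrichment steps, but this is the shape of the automation.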
As businesses accept the inevitability of some kind of cyber-attack, there will be a renewed focus in 2018 not just on how to prevent an incident, but also on how to respond when one occurs. In a world swarming with cyber-threats, damage limitation is vital. Moreover, impending GDPR legislation will focus energy on this area, as it mandates that a notifiable breach be reported to the Information Commissioner’s Office (ICO) within 72 hours.
Quantifying the speed with which they respond will be vital for organisations looking to optimise their damage prevention strategies, and 2018 will see many adopting and using Mean Time To Respond (MTTR) as the key metric. Alarmingly, a FireEye study looking at EMEIA organisations found that the average Mean Time to Dwell (MTTD) - the time between compromise and detection - was 489 days.
For organisations, the longer the MTTD, the more time malicious actors have to do damage across a wider breadth of the customers using their servers. This makes it a key metric from a commercial perspective, as a means to minimise the disruption caused by the inevitable cyber-attack.
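Both figures are straightforward to compute once incident timestamps are logged. A minimal sketch in Python, using entirely hypothetical incident records (compromise, detection and resolution times):

```python
from datetime import datetime
from statistics import mean

def mean_days(pairs):
    """Average whole-day gap between (start, end) timestamp pairs."""
    return mean((end - start).days for start, end in pairs)

# Hypothetical incident log: (compromised, detected, resolved).
incidents = [
    (datetime(2017, 1, 1), datetime(2017, 5, 11), datetime(2017, 5, 14)),
    (datetime(2017, 3, 2), datetime(2018, 2, 1), datetime(2018, 2, 10)),
]

mttd = mean_days([(c, d) for c, d, _ in incidents])   # dwell time
mttr = mean_days([(d, r) for _, d, r in incidents])   # response time
print(f"MTTD: {mttd} days, MTTR: {mttr} days")
```

Tracking these two numbers over time shows whether detection and response are actually improving, rather than relying on anecdote.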
With Amazon's acquisition of threat-hunting vendor Sqrrl, and Google's parent company Alphabet launching Chronicle, a new independent cyber-security business, it's clear organisations are really starting to take cyber-security seriously. As data centres and hosting providers continue to face attacks by the most enterprising and ambitious of hackers, they'll need to be innovative and dedicate both technological resources and brainpower to protecting themselves.
How different industries can utilise the diversity of hybrid cloud. By David Fearne, Technical Director at Arrow ECS.
Hybrid cloud is continuing to grow. According to IDC, hybrid cloud architectures will be adopted by more than 80 per cent of enterprise IT organisations by the end of 2017. And it's no surprise it's on the rise: a hybrid cloud utilises the best of both public and private clouds and presents a unique third option.
No two hybrid cloud setups are the same. It's a combination that allows organisations to build a custom solution that best fits the business's needs. Just as hybrid cars switch between electricity and petrol, hybrid cloud brings two technologies together to complement each other, allowing IT teams to use on-premise systems for some tasks and cloud for others. Hybrid cloud solutions enable workloads to exist on either a vendor-run public cloud or a customer-run private cloud, which means businesses are able to harness the control of a private cloud as well as the flexibility of public cloud services.
As hybrid cloud covers such a diverse range of options it will be used differently across industries. So how can enterprises take advantage in various ways? First, let’s take a look at the different benefits.
Flexible and scalable
Hybrid cloud enables a modular approach to service delivery, giving organisations the chance to explore integrating a variety of pre-existing cloud applications, platform-as-a-service (PaaS) offerings or cloud computing environments to find the optimum cloud solution; providing maximum flexibility and allowing businesses to go to market more quickly.
Hybrid cloud solutions can also be scaled up and down depending on organisational workloads. This is particularly valuable for dynamic workloads; when work surges, a hybrid environment allows for quick scalability in order to meet these needs. When the demand drops, workloads can then be scaled back to avoid over-provisioning and keep costs under control.
There's a perception that cloud isn't as secure as on-premises infrastructure. However, with a hybrid cloud architecture, confidential information can be stored on a private cloud while the public cloud is still leveraged. Security policies can also be applied consistently across the infrastructure, ensuring that wherever workloads reside they have the appropriate level of protection.
Quick and cost effective
Hybrid cloud can be fast to scale out and does not require all of the capital investment in infrastructure that a private cloud does. Because the cloud resource available in a public infrastructure is practically unlimited, it's no longer a case of trying to determine the maximum load; a hybrid solution allows for allocation and reallocation of resources to meet changing workload needs as and when required.
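As a thumbnail of that allocation logic, here is a deliberately simplified Python sketch of a "burst to public" placement rule; the workload names and capacity figures are invented for illustration:

```python
# Hypothetical placement rule: keep workloads on the private cloud until
# its capacity is exhausted, then burst the remainder to the public cloud.
def place_workloads(demands, private_capacity):
    placements, used = [], 0
    for name, size in demands:
        if used + size <= private_capacity:
            placements.append((name, "private"))
            used += size
        else:
            placements.append((name, "public"))  # burst beyond capacity
    return placements

# 100 units of private capacity; the demand spike bursts out to public.
demand = [("web", 40), ("db", 50), ("batch-spike", 30)]
print(place_workloads(demand, private_capacity=100))
```

Real schedulers weigh cost, latency and data-residency constraints as well, but the principle is the same: steady load stays on owned capacity, peaks spill over.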
Hybrid cloud does require certain architectural considerations to be taken into account during its design: latency, differing network architectures between clouds, API integrations to SaaS platforms, etc. These are no longer seen to be the blocker to adoption they used to be, partly due to the enhancement of the solutions that enable hybrid cloud, but also due to the perceived business benefits of hybrid.
Hybrid cloud provides value to companies of all sizes, offering the opportunity to innovate an existing business service, migrate existing solutions to a cloud environment to reduce cost and gain agility, or implement new business models and processes at higher velocity and lower cost.
Start-ups can forego the time and expense of setting up a traditional data centre while smaller companies can leverage cloud services to bring sophisticated enterprise-level security, backup, and redundancy capabilities to their solutions – capabilities they might not have been able to afford with their limited IT resources. Meanwhile, larger and more established enterprises can move workloads into the cloud in order to free up IT resources for more innovative, strategic projects rather than the operation and maintenance of infrastructure.
So is your business ready to go hybrid?
Although there are many diverse uses and applications of hybrid cloud, transitioning from a traditional IT setup to a hybrid model isn't straightforward. Hybrid cloud is all about mixing deployments to create a flexible system, which is inherently complicated.
Depending on the current IT structure and specific business goals, organisations might need external help setting up and customising the new hybrid cloud solution - and once it’s up and running, support with maintenance and development. Recognising the need to adopt cloud technologies alongside legacy infrastructure as a means of working smarter is one thing, but selecting the right technology and approach will be how a significant boost in productivity, innovation and the bottom line is generated.
The changing cloud landscape
By Susan Bowen, VP & GM, EMEA, Cogeco Peer 1.
Emergence and development of the Cloud
The concept of cloud computing has its roots in the 1970s, when the sharing of server resources was first invented; virtualisation was later pioneered by the likes of XenServer and VMware. The model developed further in the 1990s due to the need to deliver enterprise applications that ran "on the web" and didn't require complex installs. The idea was that features could be rolled out faster through back-end upgrades to all users of the service at the same time. Salesforce, Oracle and NetSuite were the initial pioneers who saw success in this, and the market is now dominated by Amazon. The uncertainty this emerging model brought was all around the fear of where the data is stored, who has access to it, how business continuity could be managed and guaranteed, and whether these platforms would even survive long enough to be trusted and relied upon to run the business.
Improvements in networking technology have led to wider adoption and trust of cloud computing. The first wave, virtualising the server estate, has largely been adopted by every enterprise, which led to a move (prevalent in 2012) whereby ambitious CIOs started to move everything out onto the public cloud. Furthermore, the cloud business model has evolved into three categories, each with varying entry costs and advantages: Infrastructure-as-a-Service, Software-as-a-Service and Platform-as-a-Service.
2017 “Cloud Hype”
When a new technology comes along it's natural to think it will solve everything, until you find out it doesn't; in fact, what 'new' boils down to is something complementary to what you already have. However, a major challenge during this "hype phase" was around setting expectations: the false perception that moving to the cloud would be the cheaper solution. This led to poor design choices due to a lack of understanding and investment in crafting a cloud adoption strategy.
Several enterprises that adopted an 'all in with the cloud' mantra have subsequently pulled back on it, and whilst many organisations we talk to have ambitions to move significant parts of their estate to the cloud, few have hit these goals yet; private cloud, colocation and hosted services remain firmly part of their mix.
The public cloud is great for spiky web applications, but for steady-state line-of-business applications that need to integrate with all your other systems, you are probably looking at something in-house or hosted. We hear numerous comments about the cost of SaaS, which has been used quite cynically by some of the big four vendors to increase revenue, and also that lock-in is a huge issue with IaaS.
Cloud as a business tool
As "cloud" was demystified and better understood, the value proposition for IT and business leaders shifted from price-point discussions to business outcome discussions, i.e., what do our customers expect from us, and how do we leverage the power of the cloud to innovate and exceed those expectations? As mentioned above, we are seeing enterprises looking at it application by application: what works best where, what are its performance or network requirements, what is our appetite for risk, is this cloud vendor a competitor, or is it likely to become one? The agility and flexibility of the cloud definitely make it a powerful business tool, whereby control, visibility and management are crucial to harness its potential.
Cloud is being leveraged to provide innovation in an instant – having capabilities available when required, instead of the traditional model of spending capital to purchase infrastructure and waiting for it to be built prior to using it to service customers. Cloud can be private or public, with private better suited to predictable workloads, and public to those with peaks and troughs and a business model built around consumption economics. Ask yourself this: how long does your IT department take to provision and provide a VM from when the request was created by your development or business team? Run the metrics and you'll see the business case for cloud.
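Running those metrics can be as simple as subtracting timestamps in the service-desk log. A hypothetical Python example (the ticket data is invented) that computes the median VM provisioning lead time:

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket log: when a VM was requested vs. when it was delivered.
requests = [
    (datetime(2017, 6, 1, 9, 0),  datetime(2017, 6, 8, 17, 0)),
    (datetime(2017, 6, 5, 14, 0), datetime(2017, 6, 19, 10, 0)),
    (datetime(2017, 7, 3, 11, 0), datetime(2017, 7, 7, 16, 0)),
]

lead_times_days = [(done - opened).total_seconds() / 86400
                   for opened, done in requests]
print(f"Median VM lead time: {median(lead_times_days):.1f} days")
# A public-cloud VM typically provisions in minutes; the gap is the business case.
```

If the median comes back in days rather than minutes, the consumption-economics argument largely makes itself.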
Cloud in 2018? The predictions for the next 5-10 years
Firstly, the cloud wars are over: the future is hybrid and multi-cloud. Hybrid solutions should mean workloads automatically move to the most optimised and cost-effective environment, based on performance needs, security, data residency, application workload characteristics, end-user demand and traffic. This could be the public cloud, the private cloud, on-premises infrastructure, or a mix of all three.
An application architecture refactoring is required to make this happen: application architecture will shift from the classic three-tier model to event-driven designs. Cloud providers are therefore pushing for these features to be widely adopted for cloud-native applications. As enterprises adopt hyper-scale public clouds, cloud vendors are adapting to reach back into on-premises datacentres.
Cloud vendors partnering with VMware, and Microsoft launching Azure Stack, are examples of this reach-back, enabling on-premises workloads to burst into the public cloud. This trend will continue over the next 5-10 years as the consumption nature of the cloud grows in uptake.
DCS talks to Paul Johnson, Data Center Segment Leader, ABB about the company’s involvement in the data centre space, based on the three-pillar strategy: Intelligent Grid Connection, Elastic Critical Infrastructure and Deep Component Visibility.
1. Please can you provide a brief overview of ABB’s involvement in the data centre industry to date?
We all rely on a data center somewhere in the world. Data centers are a fundamental part of every facet of life, and to make data centers work they need reliable, efficient and safe power.
ABB provides the solutions to deliver and manage power at scale for the data centers of today and tomorrow. Our pedigree and industrial heritage in delivering mission critical applications enables us to provide innovative, scalable, secure power and automation solutions for the sector. Our data center solutions and integrated systems are designed for heavy-duty applications and we offer technologies for cloud, colocation, telecommunications and financial services customers.
ABB is known the world over, and a key focus for us is to grow our profile in the data center segment through our digital offering: our automation, and the ways managers and operators can take advantage of, and harness, the data they collect.
2. In terms of ABB’s data centre technology portfolio, where is your DC focus? Intelligent grid connection? Electrification solutions? Industrial automation and DCIM? Construction components? Safety and security solutions? And ABB Ability?
ABB’s focus is based around its three-pillar strategy: Intelligent Grid Connection, Elastic Critical Infrastructure and Deep Component Visibility. We provide all the elements of critical power from the high voltage grid connection right down to miniature circuit breakers at the IT servers.
We do not adopt a one-solution-fits-all approach. One of ABB's core strengths is that we start by talking and "walking" the customers through the complete data center to understand what they need, from the grid connection level through to the rack.
It is essential that today’s data centers and our major system solutions are designed with elasticity in mind, from conception to design, enabling the infrastructure to shrink and grow with the load. This delivers a more efficient system for the customer and prevents the need to carry any unused capacity. Our technology is completely scalable, secure and technically smart.
This smart ‘digitalization’ of the power train means operators can do more with less because better control leads to better utilization, with maximum uptime and informed maintenance decisions.
ABB is a market leader in providing technology that is inherently safe. Elements like arc flash detection, ultra-fast earthing, safe touch panel designs and lightning protection help data center operators ensure their employees and equipment are safe and there is no interruption in the flow of data.
As dependence on data centers grows, so too does the need for protection against cyber security threats within the data center itself. At ABB, we know that no single solution can keep increasingly interconnected systems secure. Our defense-in-depth approach is built around multiple security layers, which detect and deter threats across the full spectrum of data center solutions. Again, smart digitalization will be essential to proactively monitor and control the system.
3. What is the Ability Electrical Distribution Control System?
ABB Ability™ EDCS monitors power usage and power flow in the electrical system. It can be used to control generation from alternate sources, monitor day to day trends in energy usage and identify potential energy savings.
All of this information is accessed via a secure cloud platform, which allows more than one site to be managed at the same time. Although not a full-scale PMS in terms of a traditional data center application, it can provide a solution for small data centers, especially multi-site applications.
4. What is “ABB Ability Data Center Automation”?
Many data center operators demand more and more intelligent and connected solutions, ensuring uptime, predictive maintenance and optimized utilization.
ABB Ability™ for Data Centres is our unified, cross-industry digital capability, from device to edge to cloud. It combines deep domain expertise with unmatched experience in connectivity with 70m digitally enabled devices, 70,000 digital control systems and 6,000 enterprise software solutions.
The platform is very flexible and scalable as it derives from our 40 years’ experience in digital technology, industrial process and automation knowledge.
It gives the possibility of converging both IT (with partners) and OT through a single pane of glass to give a full site overview across multiple locations and, importantly, delivers greater control, allowing customers to know more, do more and do better, together.
With features such as advanced power analytics and intelligent alarm and event handling, data center managers and operators can have access to industry-level best-practice benchmarking and visibility of cross-industry data, combined with greater transparency into operations from device to enterprise level. This improves operational performance, efficiency and productivity through enhanced uptime, speed and yield.
5. And the ABB transformers?
ABB is the world's largest transformer manufacturer. The ABB transformer business has some interesting products for the data center environment, specifically our TVRT (Transient Voltage Resistant Transformer). This transformer has surge protection integrated into the winding design, allowing transients from vacuum circuit breakers to be dealt with far more effectively than by the external surge arrestor systems fitted to mitigate failure of traditional cast resin transformers.
In addition, ABB offers a complete range of power and distribution transformers designed to grant the reliability, durability and efficiency required in utility, industrial and commercial applications. ABB is a major transformer manufacturer throughout the world and offers both liquid-filled and dry-type transformers as well as services for complete life-cycle support, including replacement parts and components.
6. And the ABB Digital Substation?
The Digital Substation brings many advantages. It is based around the international standard IEC 61850 and allows us to remove much of the traditional copper wire interfacing. This increases safety, as we can detect pilot wire and communication failures that were not detectable with a hard-wired solution, and brings the additional benefit of reduced installation and commissioning times.
7. And how does the ABB Data Center offering help your customers address some of the current buzz issues, such as Cloud?
The ABB offering focuses on providing security by increasing reliability and data analytics to help operators and data center managers make informed decisions. But we have also focused on design scalability and offering an elastic infrastructure to give data centers a more pay as you grow offering.
At present, we are seeing a trend amongst customers who work with hyperscale-type solutions. They are requesting more industrial, pre-engineered solutions such as e-houses and modular build substations, which are constructed and tested in a controlled environment and reduce onsite activities.
This is exactly where ABB comes into play with our industrial background working with utilities, pharmaceutical and oil and gas companies.
8. And the focus on Edge computing?
We are paying more and more attention to Edge and we believe it will be essential to support initiatives like Industry 4.0 and the IoT era. Most recently we announced a collaboration between ABB, HPE and Rittal, to deliver a secure edge data center offering. This will be an off-the-shelf ready IT data center solution for industrial customers enabling real-time insight and action. It is specifically designed to run in rough and remote industrial environments, bringing enterprise-grade IT capacity closer to the point of data collection and machine control.
This solution will generate actionable insights from vast amounts of industrial data to increase the efficiency and flexibility of customers' operations and create competitive advantage.
9. And the emerging use of BIM and VR?
BIM integration is a topic we see on a daily basis. The end goal would be to have true integration between the design model and the operational side of the data center. This would involve not only using BIM as the operation and maintenance manual for the site, but also integrating the automation platform. Coupled with VR, this would be a powerful tool for “hands off” operation. Long term, AI will also play a crucial role in this area.
10. Could you share a few thoughts on how you see data centres and their various infrastructure components developing over the next few years?
The increase in data and more connected devices will drive the requirement for larger storage capacity, but also lower latency edge systems. The sheer scale of storage required, the analytics carried out or the number of edge data centers will drive the requirements for standardization and greater simplicity.
11. How do you see ABB’s role in the data centre industry evolving over the next few years?
We will continue to play a key role in shaping the future of the industry and developing intelligent solutions that respond to our customers’ needs.
Intelligent grid connection, rapid provisioning of critical infrastructure capacity and integrated communications, sensing and intelligence will be key themes for the future data center.
The data center of tomorrow will also need to harness renewable power sources and contribute to the stability of power supply for its neighbors. It will need to match customer demand, which means it will only run the data center when required and make changes without downtime.
As such, ABB will continue to invest and develop solutions to help our customers scale with predictable cost and performance. We will continue to work with pre-engineered and factory tested solutions that make it easier and faster for our customers to commission and install at site.
12. More specifically, what new data centre-focused products and technologies can we expect to see from ABB during 2018?
We have several product focuses throughout 2018 to enhance the design and functionality of the next generation data center. New products will include:
Unigear Digital – MV switchgear which replaces traditional instrument transformers with sensors, delivering increased reliability and reducing complications in the design phase.
Smissline PowerBar 250 – An enhanced version of our plug-in MCB system now rated to 250A allowing flexible PDU designs.
TruONE ATS – New innovation in ATS design with fully integrated intelligent control and monitoring in a new switch design.
Ekip-UP – A retrofit kit to allow the advanced communication and protection features of Emax2 to be fitted to any breaker, new or old. This gives the ability to increase the life of older assets and enhance their connectivity and features.
13. Finally, what one or two pieces of advice would you give to data centre operators/end users as they seek to develop modern data centre facilities?
Pay attention to what we can learn from other industries, such as food and beverage, which have controls in place to take information from many sensors and use it to adjust processes and automatically monitor plant condition for optimal performance and minimal downtime. As an industry, the DC sector can do much more on automation and digital efficiency; there are still too many human errors involved in operations.
Current information security infrastructures that are designed to serve the needs of legacy datacenters have proven too rigid and static for the needs of digital businesses, which are increasingly adopting next-generation datacenters built around hyperconvergence, hybrid infrastructures and software-defined management.
By Liviu Arsene, Senior e-Threat Analyst at Bitdefender.
Consequently, as data centers increasingly focus on high performance, low cost and high availability, security needs to enable businesses to not only prevent the new wave of advanced threats and challenges that were previously non-existent, but also drive business performance and be fully integrated with digital data centers and infrastructures.
Since performance is at the heart of data center transformation, security needs to serve as an immune system that can adapt to new environments and fight off any hostile condition. With everything becoming software-defined, security needs to seamlessly integrate with virtual and physical workloads and infrastructures, without hindering performance, but at the same time boosting the overall security posture and visibility of the organisation.
Perimeters, segmentation and data protection can all be software-defined to fully integrate with hyperconverged or software-defined datacenters, allowing IT admins and SOCs to focus on security policies applicable across the entire infrastructure and on responding to security incidents that threaten the security posture of the organisation. Legacy security solutions are ill-equipped to fight the rapidly changing threat environment, and they're not designed to offer single-pane-of-glass visibility into the entire infrastructure – regardless of whether that infrastructure is physical, virtual or hybrid.
Software-defined security is all about security policies and infrastructure management across any type of infrastructure, ensuring full compliance with all virtualization technologies while consolidating security layers into lightweight security agents that can automatically "morph" based on the workload's specifications, function and location. These next-gen endpoint protection platforms should be built for much more than just satisfying the growth needs of digital business. They should also be built so as not to compromise the security or performance of SDDCs (software-defined data centers).
Hypervisor diversification is also a key factor that needs to be considered when deploying security controls, as security integration and management must be not just platform-agnostic, but also hypervisor-agnostic. However, some security layers designed to proactively defend against increased threat sophistication and diversification are, by their nature, tied to the underlying platform. For instance, hypervisor-based memory introspection technologies that scan the raw memory of virtual workloads for the memory-manipulation techniques associated with APTs (advanced persistent threats) and zero-day vulnerabilities are tightly integrated with specific hypervisors.
Visibility and Incident Response
While security technologies and controls are vital to the overall security posture of the organisation, visibility and incident response technologies are equally – sometimes even more – important in reducing response time in the event of a security breach. Since security is also about proactively and reactively investigating security events and building better incident-response plans, complex EDR solutions have emerged to fill the visibility gap.
Aimed at offering large organisations full visibility into indicators of compromise, their deployment as an extra endpoint agent sometimes increases performance overhead and can lead to alert fatigue, as IT and security teams are bombarded with both relevant and non-relevant potential IoCs (indicators of compromise). The optimum EDR technology, ensuring high visibility and rapid incident response, should be tightly integrated with the EPP solution, while surfacing only the relevant events in a single management console. Since datacenter transformation is all about visibility and seamless management, EPP and EDR should not only be tightly integrated, but should also share information with each other and help security and IT teams gain a complete security overview of their entire infrastructure.
Set to become a key security technology by 2020, according to Gartner, EDR is also going to give organisations better visibility into advanced threats. If properly designed to funnel only relevant and important security events, it can become a powerful weapon against the sophisticated and targeted threats gunning for organisations.
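To make the "funnel only relevant events" idea concrete, here is an illustrative Python sketch of a triage step that enriches raw events against a threat feed and surfaces only those worth an analyst's time. The IP addresses, severity scale and threshold are all invented for the example:

```python
# Hypothetical alert-triage sketch: enrich events with a threat-intel lookup
# and surface only the ones worth an analyst's time, limiting alert fatigue.
KNOWN_BAD = {"203.0.113.7", "198.51.100.9"}   # example threat-intel feed

def triage(events, min_severity=7):
    surfaced = []
    for event in events:
        enriched = dict(event, known_bad=event["src_ip"] in KNOWN_BAD)
        if enriched["known_bad"] or enriched["severity"] >= min_severity:
            surfaced.append(enriched)
    return surfaced

events = [
    {"src_ip": "203.0.113.7", "severity": 3},   # low severity, but known bad
    {"src_ip": "192.0.2.1",   "severity": 9},   # unknown source, high severity
    {"src_ip": "192.0.2.2",   "severity": 2},   # noise: filtered out
]
for alert in triage(events):
    print(alert)
```

Production EDR enrichment draws on far richer context (process trees, file hashes, behavioural baselines), but the filtering principle is the same: suppress the noise, keep the signal.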
Security Should Be an Enabler
Digital businesses aiming to adopt SDDCs, hyperconverged infrastructures and the hybrid cloud should carefully consider their security options and make sure that the main benefits of those infrastructures are not hindered or overshadowed by ill-equipped security solutions. Relying on a security solution that was built from the ground up to support the digitalisation needs of companies, without compromising security, should help any IT or security manager gain complete visibility into their entire infrastructure and maintain overall performance, while having a potent security stack that can protect against advanced and sophisticated threats.
TCN Data Hotels is now well prepared for the future, thanks to a new Vertiv UPS system.
TCN Data Hotels runs two data centres in the north of the Netherlands, each with a capacity of approximately 50 megawatts (MW). The company has been in the data centre and colocation market since 2001 and has developed long lasting relationships with national and international clients. Not only does TCN manage its own data centres, but it also shares its knowledge and experience by developing facilities for third parties, ranging from 1 MW to over 100 MW.
In recent years, the demand for state-of-the-art data centres has increased by more than five hundred percent and the market is still growing. TCN Data Hotels taps into this market using its extensive knowledge of data centre design and business management. This results in extremely reliable data centres with an optimised total cost of ownership. TCN Data Hotel Groningen is the only provider in the Netherlands to have reached 99.9999 percent availability over the past eighteen years. The TCN Data Hotels teams provide security, services and support 24 hours a day, 7 days a week. By providing internal training programmes and allowing staff to grow alongside the company, TCN ensures the highest levels of expertise and commitment.
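To put 99.9999 percent ("six nines") availability in perspective, the arithmetic works out to roughly half a minute of downtime per year, and each extra nine cuts the budget tenfold. A quick Python check:

```python
# "Six nines" (99.9999 percent) availability allows roughly 31.5 seconds
# of downtime per year; each additional nine shrinks the budget tenfold.
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_budget(availability_pct):
    """Permitted downtime in seconds per year for a given availability."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999, 99.9999):
    print(f"{pct}% -> {downtime_budget(pct):,.1f} s/year")
```

At that level even a single brief switchover failure consumes years' worth of the downtime budget, which is why UPS reliability matters so much to a provider like TCN.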
TCN Data Hotels sets itself apart through low costs, economies of scale, energy efficiency, high availability, a secure and dedicated environment and good physical and virtual accessibility, anywhere, anytime.
For eighteen years, TCN Data Hotels has been fully satisfied with the Vertiv UPS systems providing its uninterruptible power supply. An uninterruptible power supply is vital in order to be able to offer customers the best service possible. "We serve a wide range of clients, mainly government organisations, educational bodies and internet businesses," says Kees Loer, who is responsible for engineering at TCN Data Hotels. "There's a reason why we call ourselves a ‘data hotel’. Our clients rent the space and we provide fireproofing, electricity and cooling." TCN Data Hotels raises the bar for uptime as it is the only company in the Netherlands to achieve six-nines availability in the past eighteen years. In all these years Vertiv UPS systems have never let TCN down. "But after almost two decades it was time to start thinking about replacing them," says Loer. "The principle of the UPS hasn't changed much in that time, but of course we could benefit from the latest developments in technology."
TCN Data Hotels opened a tender to replace their UPS systems and invited various suppliers to submit bids. Loer explains: "Reliability, excellent service and support, knowledge of what we do and energy efficiency were important criteria for us. In principle, these factors were more important than the cost. Reliability is essentially our core business because that's what we deliver to our customers. All our equipment must, therefore, be of the highest quality so that we can guarantee the best availability for our clients."
After thorough market research, TCN Data Hotels chose Vertiv Liebert® EXL S1 systems. These UPS systems stood out for their transformer-free topology, with up to 97 percent efficiency in double conversion mode. They are extremely compact and also reduce installation costs. Lower cooling requirements and a variety of other factors allow clients to optimise the PUE of their data centres by using these systems.
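UPS efficiency feeds directly into PUE, which is total facility power divided by IT power. A rough Python illustration with invented load figures shows why a few percentage points of conversion efficiency matter:

```python
# Rough sketch of how UPS efficiency feeds into PUE (illustrative numbers only).
def pue(it_load_kw, ups_efficiency, cooling_kw, other_kw):
    ups_input = it_load_kw / ups_efficiency          # power drawn through the UPS
    total = ups_input + cooling_kw + other_kw        # total facility power
    return total / it_load_kw

# At 1,000 kW of IT load, a 97% efficient UPS wastes roughly 33 kW less
# than a 94% one, before counting the cooling needed to remove that heat.
print(f"PUE at 94% UPS efficiency: {pue(1000, 0.94, 350, 80):.3f}")
print(f"PUE at 97% UPS efficiency: {pue(1000, 0.97, 350, 80):.3f}")
```

The saving compounds: every kilowatt of UPS loss is also a kilowatt of heat the cooling plant must remove, which is why "every kilowatt counts".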
Loer comments: "We were convinced of the quality and reliability of these systems also because we saw how they performed during stress tests at the Customer Experience Centre in Bologna, Italy. Moreover, they are incredibly efficient even at partial load, which is the most common situation in a data centre. Energy efficiency is important for us as, after all, every kilowatt counts." For Loer, the compact size of the UPS is also an advantage, as the new system is smaller than the old one, meaning that there is no need to make any major changes to the data centre and additional floor space is now available. The replacement will take place in phases, with the first aisle being replaced in 2017 and the others to follow. TCN Data Hotels teams are working closely with Vertiv to completely replace the system without any downtime.
Loer expects that the new UPS system will allow TCN Data Hotels to further improve energy efficiency. "We've been working on this for several years, and we will continue to do so. We're now well-prepared for the future as the new systems offer state-of-the-art technology, which is beneficial for reliability and availability. The smaller footprint of the Liebert EXL S1 system is also an advantage. The systems fit into the space we already have and we only need to make a few changes to the existing cabling."
Advances in health care and improvements to standards of living have led to a significant rise in our population. This in turn has led to the growth of our cities – with people heavily concentrated in urban areas. With this trend only set to increase, the traditional way in which our cities have been run, relying heavily on fossil fuels and other depleting resources, is no longer a viable option. Creating smart cities with the implementation of technology and digital systems will be key to future progress.
By Dennis O’Sullivan, EMEA Data Centre Segment Manager, Eaton.
A vision of the future
Many countries around the world have already started to use technology to create sustainable cities. From the US to Asia to Europe, renewable energy and technology are coming together as smart and efficient urban centres are built.
As one of the greenest cities in Europe, Amsterdam provides the perfect example of how cities can use technology to create and design a carbon-neutral space. Currently, 58% of Amsterdam residents make their journeys by bike. The Dutch capital has also put in place a Smart Mobility plan that includes initiatives fuelled by real-time data, allowing people to find the shortest route or ride-sharing options. Furthermore, the business district of Zuidas has been built strategically next to Schiphol Airport. Only six minutes away by train, this improved connectivity demonstrates what the sustainable infrastructure of the future could look like.
It is however Asia which is leading the way in the development of the pioneering infrastructures that will become the building blocks of our cities. Home to over 5.5 million people, Singapore has implemented a number of innovative solutions to the problem of accelerating urbanisation. From water management to eco-friendly transportation systems, the country and its citizens have made a clear commitment to green living. One of the most notable ways that Singapore is reducing its carbon footprint is with its Zero Energy Building. Solar power is used to run the building and enough is created that no further power source is required.
Solar, wind or storage?
Smart cities will require more power as digital systems are connected to every aspect of city life. This transition to a smart city society will see a huge increase in the amount of data that is transmitted between devices.
As we look to move to a low carbon economy, a greater adoption of renewables onto the grid will be critical. So the obvious choice for powering these green cities would be clean and carbon-neutral energy. However, the electricity generated from solar and wind is typically hard to store and is often generated at times when it may not be needed. Faced with this challenge, data centres may be the key to shaping change in the energy market.
Today, data centres consume approximately 3% of the global electricity supply, and that consumption is set to double every four years. When it comes to managing and running a data centre, energy use is one of the largest single cost items. The problem is that renewable generation often exceeds demand at off-peak times. To manage this ever-changing power flux, industrial-grade storage could be employed to hold unused electricity, which could then be fed back into the grid at peak times when demand is higher. Not only would this enable smart cities to function as planned, but it could also secure a nation’s energy supply, making it both consistent and sustainable.
The falling cost of battery storage creates a timely opportunity for data centres to invest in this technology and become the power sources of the future. Having the capacity to store energy transforms the data centre into an energy hub that can provide electricity for nearby homes, heat offices and light up commercial buildings.
In addition to energy storage, careful data centre management will be critical. Having the latest data centre technology will ensure the transition from an energy-devouring resource to a fundamental energy asset is as smooth as possible. Technology can also be implemented to ensure that the grid and energy cycles are managed correctly and can continue to be managed effectively if scaled up in future.
The huge quantity of data produced by the growing number of connected devices is prompting large companies to build data centres in rural locations. Innovative new data centres are already being constructed in parts of Europe. Sweden, for example, is home to the first 100 per cent hydro-powered data centre. The centre is situated next to the Lule älv river and is fed directly with power from the neighbouring hydropower station. Clearly this level of inventiveness is a step in the right direction – but this is only the beginning.
Building smart cities will be key to a sustainable future. But without energy storage, smart cities would remain a purely hypothetical concept. Converting data centres into energy stores will be the pivotal point in creating smart and energy efficient urban centres.
Today's IT environments are more varied and complex than ever before. As a result, data centre performance issues, whether they relate to applications, compute, network, storage, virtualisation, web or the cloud, are becoming increasingly difficult to resolve.
By Kong Yang, Head Geek™, SolarWinds.
Troubleshooting is one of the most important skills for IT professionals, and yet it is becoming more complex and challenging. So how can IT professionals overcome this complexity?
Troubleshooting in today’s IT world
In IT, troubleshooting is the act of pinpointing the root cause of an issue within the data centre. Identifying the source of the problem means that IT professionals can understand the underlying cause and effect of the incident. With this information, IT professionals can then work to ensure that similar issues do not arise in the future to further impact data centre performance.
In traditional IT environments, where servers are hosted on-premises, IT professionals have abided by the following eight steps when troubleshooting:
Today, however, technologies such as cloud, virtualisation, hybrid IT, and hyper-converged infrastructure have fundamentally transformed IT environments. As a result, troubleshooting across these distributed systems is now more critical, and far more intricate, than ever before.
Although the eight steps remain relevant within new IT environments, IT professionals now lack the visibility and time to perform them all for each and every issue. Working alongside cloud service providers (CSPs) means that IT professionals have difficulty accessing areas of infrastructure that are now hosted off-premises. And with much more of an IT professional’s time committed to learning these new environments, issues are instead being remedied as fast as possible, without the root-cause analysis necessary to prevent future problems from arising.
Finding the value in correlation
With troubleshooting becoming far more complex, IT professionals can find value in correlation as an additional method to help identify the root cause of an issue.
Correlation is the act of exploring certain connected variables within the data centre, to see if they have any impact on the cause of a performance issue or incident. IT professionals can use correlation to track and compare a variety of key metrics, such as network performance counters and application performance counters, over a period of time to pinpoint the cause of an issue, and then remediate it. For example, correlating network latency and bandwidth data with virtual machine compute utilisation and application-specific data can help identify the cause of an issue relating to distributed application performance.
With the correct method and the right tools, correlation can be used to make troubleshooting a far more efficient and effective process. This approach will not guarantee that you immediately arrive at the root cause of an issue, but it will make the process of getting there much quicker. Here are some tips to help ensure that your correlating approach is as accurate as possible:
Troubleshooting to meet today’s requirements
Troubleshooting may no longer be as simple as it once was, but that does not necessarily mean that it has lost any of its effectiveness. It just needs to be slightly adapted.
By implementing the best practices outlined above, correlation can become an integral part of the troubleshooting process within distributed environments. This approach will go a long way towards ensuring that IT professionals have the time and efficiency to successfully maintain system uptime, and avoid unnecessary hurdles.