The recent Gatwick/Vodafone IT crash – for which I’ve heard conflicting views as to what and who exactly caused the problem (!) – put disaster recovery firmly back in the spotlight. In simple terms, disaster recovery is a glorified insurance policy and, as such, individual companies have to choose what level of cover they a) want and b) can afford, in the same way that we all choose whether to take out home and contents, life, health, car, redundancy, pet and various other insurance policies. Only the very wealthy – whether companies or individuals – can probably afford every type of insurance policy out there, or the very best, most resilient disaster recovery plan. So, choices have to be made as to the type and level of cover that is essential.
In the case of Gatwick, it could be argued that, despite the negative fallout from the IT failure, the airport is unlikely to lose much in the way of business – after all, the UK only has so many international airports, and very few which offer the quantity of flights and destinations which Gatwick does. So, in many cases, customers cannot choose to fly from another airport, and for the carriers, the capacity which Gatwick offers is just not available at virtually any other UK airport.
All of which leads me to suggest that, if the passenger information display boards fail and are replaced by written details, does it really matter?!
The alternative is that Gatwick spends millions (which it seems it chose not to originally) on ensuring the complete resilience of its IT infrastructure, such that there is no possibility of a single point of failure existing anywhere whatsoever.
In conclusion, as I’ve written before, there is a type and size of organisation which, although I am sure it does everything it can within reason to ensure a high level of customer service, doesn’t actually have to worry that much if customers feel let down, as these customers have very few, if any, alternatives.
Several years ago, I remember reading an interview with the head of one of the budget airlines where he, more or less, stated, ‘if you’re paying such a low price to fly to Nice and back, can you really complain that much if your luggage doesn’t make it?’! And he’s right. In some industries, where competition is high, customer service, underpinned by reliable, resilient IT, is massively important. In others, where true competition is thin on the ground, good customer service is a nice to have, but not an essential.
So, whisper it quietly, when it comes to (re-)assessing your disaster recovery plan, take a minute to understand what might go wrong, what impact this will have on your customers, and then decide how much it’s worth spending on what kind of a solution – and you might just find that you don’t need to spend that much money at all…
Survey unveils huge differences between knowing how to be digitally effective versus how UK businesses are approaching digital development.
New research has highlighted that as businesses strive to meet the needs of today’s digitally empowered consumer there is a gap between businesses knowing what drives digital effectiveness and current digital practices.
The findings were part of a larger study exploring the opinions and digital practices of UK businesses. It also compared the views of the top performing businesses with those of their mainstream counterparts.
In the report, it was quite a different story when the survey asked the same respondents about their actual digital practices.
While 97% believe that user-centricity leads to better outcomes, overall only 65% said that they put user needs at the heart of their development. Comparing top performing businesses with the mainstream, three-quarters of top performers say they put user needs at the heart of their development, but only half of the mainstream group say the same.
It was a similar story comparing ‘belief’ and ‘practice’ around cross-disciplinary teams: just over 70% of all respondents have implemented this approach, despite nearly all saying it is necessary for success. Again, top performing businesses are further ahead in adopting cross-disciplinary teams than their mainstream counterparts (81% versus 66%).
While 96% of the company respondents agree team learning and reflection is a necessity for digital transformation, half (52%) of organisations don’t take time out to learn and reflect. A mere 40% of mainstream businesses said that their teams take time out to learn and reflect on what they are doing.
And looking at practices around long-term versus short-term goals: while a massive 92% say businesses that concentrate on the long term are likely to be more successful, in practice 53% admit to focusing on short-term targets. Only 35% of the mainstream group said that they focus more on long-term goals than short-term targets.
Agility is also cited as a best practice: 88% of all companies think agile approaches are more likely to be successful than traditional ones, yet overall only 58% adopt agile methods, a figure which drops to less than half (46%) among mainstream companies. Likewise, only 58% overall focus on outcomes in development and are happy to change requirements if needed. Among top performing businesses, 70% adopt agile processes and three-quarters say they focus on outcomes in development and are happy to change requirements if needed.
Other key findings in the survey include:
The survey was commissioned by digital agency Code Computerlove. CEO of Code Computerlove, Tony Foggett says, “While our survey was primarily designed to show levels of awareness around Product Thinking, a mindset and approach to digital product development, it was striking to see the difference between what companies think will help them drive digital effectiveness and what is happening within their businesses.
“There is clear consensus that digital approaches need to be customer-centric, agile and data-driven, all principles that fall into the Product Thinking mindset that we implement. But all businesses – whether they are outperforming their competition or not – seem a way off truly embracing these techniques.
“The survey also highlighted that while most companies agree what the basic principles of a digitally transformed business are, it is the EXECUTION that makes the difference between the best performers and the rest. The gap between thinking and doing is much smaller for top-performing companies than it is for their mainstream counterparts.
“So what’s stopping businesses from applying a Product Thinking approach as a means to get ahead? One theory is the degree to which they can, and have, changed their business culture. As businesses move from the traditional ways of working to the more agile/adaptable approaches required for the 21st century they come up against deeply ingrained structural and cultural barriers.
“While agreement with the principles of a digital-first organisation is all but unanimous, the majority of businesses still have cultures that can best be described as retrograde.”
Foggett added, “Ultimately ‘Product Thinking’ is a cultural model that pertains not only to a mindset (a collective drive for effectiveness by continually growing value for the customer and company) but also the methodologies, roles and organisational design required to see the approach through.
“While it’s impossible to prove that it’s the implementation of these Product Thinking ideas that is responsible for ‘success’ in the top performing businesses, the correlation is extremely strong. In organisational culture, management approach and development philosophy, every area of business practice we looked at showed that the firms who acted on the product philosophy were more likely to be successful.”
Based on interviews with 150 IT decision-makers across the UK – as part of an EMEA-wide study – the research gleaned insights into organisations’ approaches to mobility and digital transformation; the drivers and adoption of accessories; the impact of these within the workplace; and their approach to accessories moving forward.
It found a huge 94 per cent of UK organisations are either planning, about to start or currently undergoing some form of digital transformation. A further 5 per cent claim to have already ‘completed’ digital transformation, which includes employers upgrading the devices provided to their workforce to enable greater levels of productivity, collaboration and agility, as well as improving talent attraction and retention.
While nearly every organisation across the globe is trying to digitally transform, many projects are stalling due to ‘digital deadlock’ – a term coined by IDC, which describes the blockers restricting employers from managing change within their business. Ensuring individuals are educated, engaged and know how to correctly deploy technologies is fundamental to the success of any digital transformation project, according to IDC.
Digital transformation projects are stalling because organisations are failing to manage change and the impact such change will have on their employees. Nathan Budd, Senior Consulting Manager at IDC added, "For organisations serious about delivering transformed working environments, agility, productivity and innovation, it’s time to invest in the right tools for the job. This requires a fundamental shift in the way in which leaders introduce new technology, the way they define customer experience, as well as the way in which they engage employees and stakeholders.”
Tackling the productivity challenge
With organisations always looking for new ways to boost employee productivity levels – 58 per cent admitted to changing their working environment in an effort to do so – it is not surprising workplace accessories are becoming a key productivity driver. In fact, many found that inappropriate, or a lack of, accessories significantly harmed employee productivity.
There is also a stark contrast in the value that businesses that have completed digital transformation projects place on the role of accessories – such as premium bags and cases, docking stations, privacy screens and cable locks – in driving productivity, compared to those that have not yet undergone digital transformation.
Marcus Harvey, Regional Director of Commercial Business EMEA at Targus said, “The findings of this study showcase a clear relationship between getting the right workplace tools in place and improved productivity and employee engagement.
“Yet, getting to this point requires leadership and team consultation. Despite technology being the driver behind digital transformation, people are the true agents of change, so making sure they are engaged and bought into the vision behind such moves is critical to improving the working environment,” he added.
Alongside this, helpdesk data shows employees reporting higher levels of inquiries or complaints regarding power requirements, peripherals, missing accessories, and devices damaged through a lack of appropriate cases.
However, while more than three-quarters (77 per cent) of IT managers admitted to receiving complaints relating to missing or unavailable accessories, it is notable that fewer issues are reported to IT than to line managers. These findings suggest IT departments, which may be responsible for accessory deployments, can be somewhat detached from the experiences of those using them.
Facilitating collaboration and new ways of working
Three in five (61 per cent) organisations claimed to be making changes to the accessories they offer in order to facilitate greater collaboration, while more than half (59 per cent) are seeking to adapt to new technology changes.
As an increasing number of employees are now required to work remotely – three in five (61 per cent) claimed their staff have roles that involve some form of travel, and around a quarter (23 per cent) manage more flexible, non-deskbound roles – mobility has changed the way staff achieve results. With many individuals now relying on accessories to recreate, when working remotely or from home, the same working experience they would get in the office, it is essential that employers provide the right tools and technologies to make this possible.
“Working outside the office has never been easier and the very concept of work is ever changing,” Harvey added. “Businesses must ensure their accessories support innovation and productivity wherever their teams are working – not only enhancing the quality of work but also attracting and retaining the best talent.”
Providing the best tools for the best people
With technology transcending geographical location, there have never been more opportunities for talented workers to jump ship and move onto alternative employers. Understanding the importance of retaining valuable members of staff, a third (33 per cent) of organisations are changing their working environments to retain talent, while the same number of respondents see the benefits of mobile working environments for talent retention.
A recent article by Steve Gillaspy of Intel outlined many of the challenges faced by those responsible for designing, operating, and sustaining the IT and physical support infrastructure found in today's data centers. This paper targets four of the five macro trends discussed by Gillaspy, how they influence the decision making processes of data center managers, and the role that power infrastructure plays in mitigating the effects of the following trends.
Lastline Inc has published the results of a survey conducted at Infosecurity Europe 2018, which suggests that 45 percent of infosec professionals reuse passwords across multiple user accounts – a basic piece of online hygiene that the infosec community has been attempting to educate the general public about for the best part of a decade.
The research also suggested that 20 percent of security professionals surveyed had used unprotected public WiFi in the past, and 47 percent would be cautious about buying the latest gadgets due to security concerns. “The fact that elements of the security community are not listening to their own advice around security best practices and setting a good example is somewhat worrying,” said Andy Norton, director of threat intelligence at Lastline.
“Breaches are a fact of life for both businesses and individuals now, and reusing passwords across multiple accounts makes it much easier for malicious actors to compromise additional accounts, including access to corporate data, to steal confidential or personal information. The attendees at Infosecurity Europe should be significantly more aware of these issues than the average consumer.”
The survey also identified a shift in the security community’s attitudes to cryptocurrency. As in the study conducted at RSA, 20 percent of survey respondents suggested they would take their salary in cryptocurrency, with a whopping 92 percent of individuals suggesting that they have used cryptocurrency to purchase gift cards.
“Cryptocurrencies are emerging from the murky waters they have occupied since conception, and are getting closer and closer to financial legitimacy,” continued Norton. “In the next few years we are likely to see some of the more mainstream currencies such as Bitcoin being accepted by large online retailers, and generally moving away from their current association with instability and criminality.”
New research has revealed that the term ‘cloud’ could be obsolete as soon as 2025. The cloud has become so deeply embedded in business performance that the word itself may soon no longer be required, with a quarter (26 per cent) of IT decision makers in the UK believing that we won’t be talking about ‘cloud’ by the end of 2025.
Commissioned by Citrix and carried out by Censuswide, the research quizzed 750 IT decision makers in companies with 250 or more employees across the UK to pinpoint the current state of cloud adoption and the future of cloud in the enterprise. A separate survey of 1,000 young people aged 12-15 was run in conjunction with this research to identify the next generation’s perspective on cloud, outlining the future workforce’s awareness of and interactions with cloud technology.
The research offers a snapshot of current cloud strategies in the UK and the extent to which the term ‘cloud’ is already dying out as it becomes firmly embedded in business processes.
2025: The cloud dissipates
A quarter (26 per cent) of IT decision makers in the UK believe the term ‘cloud’ will be obsolete by 2025. Of those who don’t see a future for the term, more than half (56 per cent) believe that cloud technology will be so embedded in the enterprise that it will no longer be seen as a separate term. One third (33 per cent) believe that employees will refer to cloud-native apps, such as Salesforce, specifically – with no consideration of where the data is hosted in future.
The possible extinction of the term ‘cloud’ is also mirrored in the next generation of workers due to start entering the workforce in 2021. Three in ten (30 per cent) 12-15 year olds in the UK don’t know what the term ‘cloud’ means while one third (33 per cent) never use the term outside of ICT classes at school. While they may not regularly use the term, the benefits of the cloud are already deeply embedded in teenagers’ lives. When asked what the cloud meant to them, 83 per cent of 12-15 year olds in the UK recognised that it was where they stored their photos and music while two fifths (42 per cent) confirmed that they used the cloud to share data, such as photos, music and documents for schoolwork, with friends.
Current state of cloud adoption
Nearly two fifths (38 per cent) of large businesses in the UK currently store more than half of their data in the cloud. Yet, almost three in five (59 per cent) still also access and manage data on premises. The majority (89 per cent) of large UK organisations agreed that cloud is important to their business. For most large UK companies (87 per cent), improving productivity is a key benefit of cloud adoption.
Further education about the benefits of the cloud is required in the enterprise, particularly at board level. Over half (52 per cent) of UK IT decision makers thought middle management within their organisation had a good understanding of cloud, while the figure stood at 39 per cent for board members. As cloud adoption continues within the business, this lack of awareness at board level could become indicative of the fact that the term ‘cloud’ is dying out, with neither management nor the C-suite now needing to know the ‘ins and outs’ of the technology.
Despite this relatively low level of understanding, businesses are serious about the cloud – 91 per cent have implemented a cloud strategy or plan to put one in place imminently. However, these strategies are in their infancy. Only 37 per cent said this plan was “incredibly detailed” and aligned to business objectives.
When it comes to public and hybrid cloud, security concerns still exist. Three in 10 (31 per cent) UK IT decision makers are not confident that a public cloud set-up is able to handle their organisation’s data securely. This figure stands at 19 per cent for hybrid cloud set-ups.
Large organisations in the UK are most confident when it comes to private cloud – 88 per cent are quite confident or highly confident that this cloud set-up can handle its data securely. In light of this, it is not surprising that private cloud is the most prevalent model – used by 61 per cent of large UK businesses. Around one third of UK-based organisations also use public cloud (36 per cent) and one quarter (25 per cent) have implemented a hybrid cloud model.
Darren Fields, Regional Director, UK & Ireland at Citrix, said:
“Much like BYOD before it, this research indicates that cloud as a term may soon have had its day and be relegated to the buzzword graveyard. This has nothing to do with its relevance in the IT industry but everything to do with the evolution of technology and the ubiquity of cloud services to underpin future ways of working.
“Most IT budget-holders agree that cloud can improve productivity, lower costs, ensure security and optimise performance, as part of a digital transformation agenda. However, there is still more education required to effectively communicate the benefits of cloud services – and there’s still a gap to be bridged between boardrooms and IT decision-makers in relation to this.
“Arguably a level of mistrust and misunderstanding still holds back UK businesses. And it is clear a cultural and educational shake-up is needed for cloud and digital transformation to deliver on its potential. Once this awareness stems from IT to the board and beyond, there should be fewer barriers to hold cloud adoption back.”
85% of businesses agree IT is restricting their potential.
A lack of choice and flexibility in IT infrastructure is holding back UK businesses according to a new study by Cogeco Peer 1, a global provider of enterprise IT products and services, with 85% of respondents believing that their organisation would see faster business growth if its IT vendors were less restrictive.
The study, which questioned 150 IT decision-makers across several industries including financial services, retail, higher education, business services and media, found that CIOs and IT Directors are frustrated with the restrictions they encounter, manifesting as a lack of flexibility (51%) and reliability (50%); 40% of respondents also cited an overwhelming amount of choice from IT vendors as something their organisation finds restrictive. The vast majority (84%) of respondents stated that their organisation is not currently running the optimum IT system.
Almost seven in ten (69%) felt that their organisation’s growth/development has been restricted by its IT vendors’ contracts, with around two in ten (17%) reporting significant restrictions.
When an organisation adopts new technology, it is clear it doesn’t always go to plan. Most respondents (81%) said that an IT service or system they’ve adopted has not lived up to expectations and almost half (48%) stated this has happened on multiple occasions. The most common impacts of this have been reliability issues (65%), not getting the service required (57%) and higher costs (52%).
When it comes to upgrades, it is a similar story, with 75% of respondents reporting that an IT upgrade purchased by their organisation has not lived up to expectations, and 41% saying this has happened multiple times. Among the reasons given were the system not integrating well with existing systems (63%), the technology being too immature or unproven (45%), or the vendor not adding the expected value (24%).
Furthermore, business agility was highlighted as a key area where IT vendors could drastically improve the value of their service. Three in five (60%) respondents agreed that their organisation’s IT vendor could do more to help their business to be more agile, with just over a fifth (21%) stating that IT vendors do not help their business to be agile at all.
Susan Bowen, current VP & GM EMEA and future President of Cogeco Peer 1, said: “It is clear that the technology industry is key to helping businesses in every sector and specialism grow and reach their full potential.
“Agility and flexibility are key tenets of this, so businesses should seek the right partners and services which enable them to scale up and down to match seamlessly with their needs.
“Far from being restrictive, properly scalable solutions can allow businesses to focus on what they do best, rather than being bogged down in their system requirements.”
There is plenty of scope for IT providers to adapt to their customers’ changing needs, with the survey concluding that almost all respondents (91%) felt that there are areas within their own organisations that are too complicated, the most common being integration (55%), security (41%) and data protection (37%).
The overlooked element of DevOps.
Many words have been written about the need for automation, for DevOps, for Continuous Delivery, and for all the other buzzwords that people use when they talk about evolving IT and managing change. Yet research suggests that few have paid enough attention to why change fails – and that's a failure to engage the people, rather than the technology.
In some respects, IT infrastructure has changed rapidly over the last five years – in particular, virtualisation and Cloud have become widely accepted, almost standard approaches. Yet significant numbers of organisations are still struggling to get to grips with how these are changing their data centres. Another major change beginning to impact data centres is the adoption of DevOps. But while DevOps and Continuous Delivery garner considerable media attention, the results of a survey (https://freeformdynamics.com/it-risk-management/managing-software-exposure/) recently carried out by Freeform Dynamics show they are still relatively young in their widespread adoption in the enterprise (Figure 1).
As DevOps usage grows and organisations seek greater flexibility from their IT systems, it brings with it the need to automate a considerable number of processes that were formerly labour intensive. These include provisioning infrastructure resources to services as they are deployed, re-provisioning them potentially far more frequently than in the past and, perhaps the most overlooked element of all, de-provisioning them after just a few days or weeks of use. Of course, all of these processes must be managed with a strong focus on security.
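That often-forgotten de-provisioning step can be automated by attaching a time-to-live to every resource when it is provisioned. The Python sketch below is a minimal illustration of the idea; the resource names, dates and lifetimes are entirely invented:

```python
import datetime

class ProvisionedResource:
    """Tracks one provisioned resource and when it should be reclaimed."""
    def __init__(self, name, created, ttl_days):
        self.name = name
        self.expires = created + datetime.timedelta(days=ttl_days)

def reclaim_expired(resources, today):
    """De-provision anything past its time-to-live; return (kept, reclaimed)."""
    kept, reclaimed = [], []
    for r in resources:
        (reclaimed if today >= r.expires else kept).append(r.name)
    return kept, reclaimed

# Illustrative pool: a short-lived build agent and a longer-lived replica.
created = datetime.date(2018, 1, 1)
pool = [ProvisionedResource("build-agent", created, ttl_days=7),
        ProvisionedResource("db-replica", created, ttl_days=30)]
kept, reclaimed = reclaim_expired(pool, today=datetime.date(2018, 1, 10))
```

In a real environment the reclaim step would call the provisioning platform’s own API rather than just return names, but the discipline is the same: nothing gets created without a planned end of life.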
These new requirements are already presenting challenges in the data centre, as shown in the results of another survey (https://freeformdynamics.com/software-delivery/it-ops-as-a-digital-business-enabler/), where the flexibility required of IT resources is highlighting the weakness in many existing monitoring tools, especially when the impact of Cloud usage is considered (Figure 2).
With so much change happening, how can data centre managers and IT professionals ensure everything runs smoothly? Clearly attention, and investment, are required to bring monitoring up to scratch in the dynamic new world. The same can be said for the management and security tools used to control the data centre’s IT infrastructure.
But most importantly of all, it is people and process matters that require immediate attention. In many, if not most, organisations with large IT teams, it has long been recognised that the days of narrow specialist teams looking after particular technology silos were coming to an end, in order to allow staff to handle broader workloads. But the results of the survey behind Figure 1 tell us that getting different parts of IT working together well is a challenge for many (Figure 3).
The survey returns show that a clear majority of IT teams or groups have experienced trust issues with other teams. This is despite almost every respondent acknowledging that it is critically important for different groups to work together to meet existing challenges, especially in terms of security. There are many possible reasons for the lack of trust, sometimes it might simply be down to history or personal issues. But it is also very likely that many organisations do not have good processes in place to enable people to work effectively.
It could be that Operations and Developers rarely speak or meet, that Operations and Security teams only get in touch with each other when something has ‘gone wrong’, or simply that Security professionals and Developers only communicate in terms of commands. In the dim and distant past when IT systems were relatively static, communications may not have needed to be open all of the time, but even then, there was a need to believe that what one group told another could be trusted as a base from which to act. In the far more dynamic environments that are being built today, good communications and trust are even more crucial if serious problems are to be avoided.
In essence, “feedback” must be at the heart of any dynamic system, and the DevOps approach to creating, modifying and running software and systems is a case in point. Operations needs to understand the type of infrastructure that developers require in order to deploy their systems, and has to put in place the monitoring tools to keep it functioning effectively and to highlight any issues that need to be addressed. Data Centre Ops staff must then be able to feed the results of monitoring and managing the systems they run back to developers, allowing them to fix bugs and optimise code that isn’t running well or that is consuming more physical resources than expected.
These people skills and processes are often overlooked but are becoming increasingly important. The benefits can be difficult to measure exactly, but the consequences of getting them wrong will become visible almost immediately. Until these people and process issues are tackled, problems will continue (Figure 4).
The Bottom Line
Complexity in the data centre continues to increase. Keeping everything operational needs not just good technology but processes that have been modernised to handle the challenges of today, not those of a decade ago. Communication, both verbal and electronic, is essential to improving IT service delivery and avoiding failures and outages. But improving how Ops speaks with Devs, how Devs speak with Ops and how Security speaks with everyone can also allow each team to have a positive influence. People and communications are easy to overlook. Don’t.
At present, data centres (DCs) are among the major energy consumers and sources of CO2 emissions globally.
By Professor Habin Lee, Brunel Business School, Brunel University London
The GREENDC project (http://www.greendc.eu/) addresses this growing challenge by developing and exploiting a novel approach to forecasting energy demands. The project brings together five leading academic and industrial partners with the overall aim of reducing energy consumption and CO2 emissions in specific national DCs. It implements a total of 163 person-months of staff and knowledge exchanges between industry and academic partners. More specifically, knowledge of data centre operations is transferred from industry to academic partners, whereas simulation-based optimisation for best practice in energy demand control is transmitted from academia to industry through the knowledge transfer scheme.
In recent years, the use of information and communications technologies (ICT), comprising communication devices and applications, DCs, internet infrastructure, mobile devices, computer and network hardware and software and so on, has increased rapidly. Internet service providers such as Amazon, Google and Yahoo, representing the largest stakeholders in the IT sector, have constructed a large number of geographically distributed internet data centres (IDCs) to satisfy growing demand and provide reliable, low-latency services to customers. These IDCs contain large numbers of servers, large-scale storage units, networking equipment and the infrastructure to distribute power and provide cooling. The number of IDCs owned by the leading IT companies is increasing drastically every year, and as a result the number of servers needed has reached an astonishing level. According to the European Commission’s JRC (Joint Research Centre) report, IDCs are expected to consume more than 100 TWh by 2020.
Due to the large amount of energy involved and the related cost, IDCs can make a significant contribution to energy efficiency through reduced energy consumption and better power management of IT ecosystems. This is why most researchers focus on reducing the power consumption of IDCs. Those efforts include designing innovative DC architectures that minimise the loss of cool air, contain the heat from IT devices and protect against outside heat. Computer scientists have also developed energy-efficient workload algorithms that minimise server overloads, and so reduce energy consumption and heat generation.
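One common form such a workload algorithm takes is consolidation: packing workloads onto as few servers as possible so that the remainder can be powered down. The Python sketch below uses a simple first-fit-decreasing heuristic as an illustration of the idea, not any specific published algorithm; the demand figures are invented:

```python
def consolidate(workloads, server_capacity):
    """First-fit-decreasing placement: pack workloads onto as few
    servers as possible so idle servers can be powered down.
    Returns the number of servers that must stay on."""
    servers = []  # each entry is the remaining capacity of one server
    for w in sorted(workloads, reverse=True):
        for i, free in enumerate(servers):
            if w <= free:
                servers[i] -= w  # fits on an already-running server
                break
        else:
            servers.append(server_capacity - w)  # power on a new server
    return len(servers)

# Illustrative CPU demands, each as a percentage of one server
demand = [60, 30, 45, 10, 80, 25]
active = consolidate(demand, server_capacity=100)
```

Here 250 units of demand fit onto three servers of capacity 100, so any further machines can be idled; real schedulers add constraints (memory, affinity, migration cost) on top of this basic packing step.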
However, efforts to reduce energy consumption via efficient DC operation are still scarce. For example, DC managers are yet to find answers to questions such as: what is the optimal temperature for a DC that minimises energy consumption without affecting the performance of IT devices or breaching service-level agreements? How many servers or virtual machines need to be powered on for the next 24 hours or week, given expected workloads? What are the optimal schedules for servers and VMs handling workloads that vary over time? And what are the best operating schedules for cooling devices within the DC to achieve maximum cooling effect?
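To make the second of those questions concrete, a minimal sketch of a capacity-scheduling calculation is shown below. All numbers (per-server capacity, headroom margin) and function names are illustrative assumptions, not values or code from the GREENDC project:

```python
# Hypothetical sketch: given an hourly workload forecast, estimate how many
# servers should be powered on each hour while keeping spare capacity so
# that transient spikes do not breach the service-level agreement.
import math

def servers_needed(forecast_requests_per_hour, per_server_capacity=1000, headroom=0.2):
    """Return the number of servers to power on for each forecast hour."""
    schedule = []
    for demand in forecast_requests_per_hour:
        required = demand * (1 + headroom) / per_server_capacity
        schedule.append(max(1, math.ceil(required)))  # always keep one server up
    return schedule

# A 24-hour forecast with a daytime peak: quiet nights need 1 server,
# the busy daytime hours need 5.
forecast = [400] * 8 + [3500] * 10 + [900] * 6
print(servers_needed(forecast))
```

Even this toy version shows the trade-off the project is studying: a larger headroom protects the service-level agreement but keeps more servers, and hence more energy, in use.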
The GREENDC project tries to find answers to those questions for DC managers via a multi-disciplinary study. It takes a holistic view by considering the system as a whole, i.e. servers, cooling system, backup power and electrical distribution. In particular, it develops a decision support system (DSS) that integrates workload and energy forecasting, generation of optimal operating schedules for cooling and IT devices, and simulation for impact analysis of DC operation strategies.
The GREENDC DSS adopts a layered architecture to guarantee the maximum level of independence between components in different layers. This allows the DSS to be easily customised for the different requirements of various types of DCs in different regions. There are four layers: data, mathematical model, business logic and user interface.
The data layer contains components that collect energy, workload and meteorological data from target DCs. Collected data is processed by a normalisation component that converts the different data formats into the standard GREENDC format before storage in a data warehouse. The normalised data is consumed by the upper layers. The mathematical model layer contains utility components providing the forecasting and optimisation functionality of the DSS, used mainly by the business logic layer. The business logic layer provides the main services of the DSS by drawing on the components in the lower layers; those services include monitoring, estimation, optimisation and simulation of energy and workload data. Its components also service requests from the user interface layer.
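The four-layer separation can be sketched in miniature as follows. The class and method names, and the toy persistence forecast, are hypothetical illustrations of the pattern, not the actual GREENDC implementation:

```python
# Illustrative sketch of the layered DSS structure described above.
class DataLayer:
    """Collects and normalises energy/workload data into one standard format."""
    def normalised_readings(self):
        # In the real system this would read from the data warehouse;
        # here we fabricate 24 hourly readings.
        return [{"hour": h, "kwh": 100 + 5 * h} for h in range(24)]

class MathModelLayer:
    """Utility components: forecasting and optimisation."""
    def forecast_next_day(self, readings):
        # Naive persistence forecast: tomorrow looks like today.
        return [r["kwh"] for r in readings]

class BusinessLogicLayer:
    """Main services, built only on the layers below."""
    def __init__(self, data, models):
        self.data, self.models = data, models

    def energy_estimate(self):
        readings = self.data.normalised_readings()
        return sum(self.models.forecast_next_day(readings))

# The user interface layer would call only BusinessLogicLayer, never the
# lower layers directly - which is what lets each layer be swapped out to
# suit a different DC's requirements.
dss = BusinessLogicLayer(DataLayer(), MathModelLayer())
print(round(dss.energy_estimate()))
```

Because each layer depends only on the one beneath it, a DC in a different region could replace just the data layer (a different monitoring stack) or just the model layer (a different forecasting method) without touching the rest.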
As shown in Figure 1, the GREENDC DSS provides a dashboard-style user interface to DC managers, making the system easy to use and consistent with the majority of energy management tools.
The GREENDC project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 734273.
The GREENDC project is implemented by five expert partners: Brunel University London (UK), Gebze Technical University (Turkey), Turksat (Turkey), LKKE (UK) and David Holding (Bulgaria). In particular, the GREENDC DSS will be tested through two field trials, at Turksat and David Holding respectively. Turksat operates one of the largest data centres in Turkey, providing eGovernment services, and also hosts a large number of IT servers for Turkish municipalities for public service provision. The GREENDC DSS will be tested using real data obtained from Turksat’s data centre.
Professor Habin Lee is chair in data analytics and operations management at Brunel Business School, Brunel University London. His research interests include energy data analytics, sustainable logistics and supply chain management, and cooperation mechanisms for operations management. He is currently coordinating the GREENDC project (Jan 2017 – Dec 2020).
By Steve Hone
DCA CEO and Cofounder, The DCA
The summer edition of the DCA Journal focuses on research and development. It has been proven time and time again that research leads to increased knowledge, which in turn helps us to develop and innovate. There is no doubt that investment is needed to realise the benefits and competitive advantage this can deliver, both to individual businesses and to the sector as a whole. That continued investment was plain to see again in Manchester at the annual DCA conference in July.
The unique workshop format of the Data Centre Transformation conference gave both DCA members and stakeholders the opportunity to come together and openly discuss the key issues affecting the health and sustainability of our growing sector.
The quality of content and healthy debate which took place in all sessions was testament to just how well run the workshops were. I would like to say a big thank you to all the chairs, workshop sponsors, the organising committee and Angel Business Communications all of whom worked hard to ensure the sessions were interactive, lively and educational.
The workshop topics covered subject matter from across the entire DC sector; however, research and development continued to feature strongly in many of the sessions, which is not surprising given the speed of change we must contend with as demand for digital services continues to grow.
All the delegates I spoke to gained a huge amount from the day, and it was clear that the insight gained from speaking with fellow colleagues together with the knowledge and experience shared during the workshops and keynotes was invaluable.
Although some of us will be slipping away to recharge our batteries, there is still time to submit articles for the DCA Journal. Every month the DCA provides members with the opportunity to contribute thought-leadership content to major publications.
On the morning of 25th September, the DCA is hosting a series of Special Interest Group (SIG) meetings, followed by the DCA Annual Members Meeting in the afternoon, at Imperial College London. All are welcome! More details can be found in the DCA event calendar (www.dca-global.org/events) should you wish to register your attendance.
Please forward all articles to Amanda McFarlane (email@example.com) and please do call if you have any questions 0845 873 4587.
The need to deploy new IT resources quickly and cost-effectively, whether as upgrades to existing facilities or in newly built installations, is a continuing challenge for today’s data-centre operators. Among the trends developing to address it are convergence, hyperconvergence and prefabrication, all of which are assisted by an increasing focus on modular construction across all the product items necessary in a data centre.
The modular approach enables products from different vendors, and those performing different IT functions, to be racked and stacked according to compatible industry standards and deployed with the minimum of integration effort.
Modularity is not just confined to the component level. Larger data centres, including colocation and hyperscale facilities, will often deploy greater volumes of IT using groups, or even roomfuls, of racks at a time. Increasingly these are deployed as pods, or standardised units of IT racks in a row, or pair of rows, that share common infrastructure elements including UPS, power distribution units, network routers and cooling solutions in the form of air-handling or containment systems.
An iteration of the pod approach is the IT Pod Frame, which further streamlines and simplifies the task of deploying large amounts of IT quickly and cost-effectively. A Pod Frame is a free-standing support structure that acts as a mounting unit for pod-level deployments and as a docking point for the IT racks. It addresses some of the challenges of installing IT equipment, especially the provision of supporting services that typically requires modifications to the building housing the data room.
A Pod Frame, for instance, greatly reduces the need to mount or install ducting for power lines and network cables on the ceiling, under a raised floor, or directly on the racks themselves. Analytical studies show that using a Pod Frame can produce significant savings1, both in capital expenditure, in some cases up to 15%, and in deployment time.
Air-containment systems can be assembled directly onto an IT Pod Frame. If deploying a pod without such a frame, panels in the air-containment system have to be unscrewed and pulled away before a rack can be removed. Use of an IT Pod Frame therefore makes the task of inserting and removing racks faster, easier and less prone to error.
In the case of a colocation facility, where the hosting company tends not to own its tenants’ IT, the frame allows all of the cooling infrastructure to be installed before the rack components arrive. It also enables tenants to rack and stack their IT gear before delivery and then slot it into place with the minimum of integration effort.
IT Pod Frames have overhead supports built into the frame, or the option to add such supports later, which hold power and network cabling, busway systems or cooling ducts. This capability eliminates most, if not all, of the construction required to build such facilities into the fabric of the building itself, greatly reducing the time taken to provide the necessary supporting infrastructure for IT equipment.
They also allow greater flexibility in the choice between a hard or raised floor for a data centre: because ducting for cables and cooling can be mounted on the frame, a raised floor is not necessary. If, however, a raised floor is preferred for distributing cold air, then the fact that network and power cables can be mounted on the frame, rather than obstructing the cooling ducts, makes underfloor cooling more efficient. It also removes the need for the building cutouts and brush strips that are necessary when running cables under the floor, saving both time and construction costs.
An example of an IT Pod Frame is Schneider Electric's new HyperPod solution, which is designed to offer flexibility to data centre operators. Its base frame is a freestanding steel structure that is easy to assemble and available in two different heights, whilst its aisle length is adjustable and can support multi-pod configurations.
It comes with mounting attachments to allow air containment to be assembled directly on to the frame, and has simple bi-parting doors, dropout roof options and rack height adapters to maintain containment for partially filled pods.
Several options are available for distributing power to racks inside the IT pod, including integrating panel boards, hanging busway or row-based power distribution units (PDUs). The HyperPod can also be used in hot or cold aisle cooling configurations and has an optional horizontal duct riser to allow a horizontal duct to be mounted on top of the pod. Vertical ducts can also be accommodated.
Analytical studies based on standard Schneider Electric reference designs provide an overview of the time and cost savings achievable using a Pod Frame. Taking the example of a 1.3MW IT load distributed across nine IT pods, each containing 24 racks, a comparison was made between rolling out the racks using an IT Pod Frame and a traditional deployment.
Capital expenditure was reduced by 15% when the IT Pod Frame was used. These savings were achieved in a number of ways. Ceiling construction costs were cut by eliminating the need for a grid system to supply cabling to individual pods: all that was needed was a main data cabling trunk line down the centre of the room, with the IT Pod Frame used to distribute cables to the individual racks.
A shorter raised floor with no cutouts was possible with the IT Pod Frame. The power cables were distributed overhead on cantilevers attached to the frame, so no cables were needed under the floor. Further cost savings were achieved by using low-cost power panels attached directly to the frame instead of the traditional approach of PDUs or remote power panels (RPPs) located on the data centre floor. This not only saved material and labour costs but also reduced the footprint, freeing up data centre space for more productive use.
The time to deployment using an IT Pod Frame was 21% less when compared with traditional methods. This was mainly achieved through the reduced requirement for building work, namely ceiling grid installations, under-floor cutouts and the installation of under-floor power cables. Assembly of the air containment system was also much faster using a Pod Frame due to the components being assembled directly on to the frame.
In conclusion, using an IT Pod Frame such as Schneider Electric's HyperPod can produce significant cost savings when rolling out new IT resource in a data centre.
Building on the modular approach to assembly commonly found in modern data centre designs, the Frame simplifies the provision of power, cooling and networking infrastructure, thereby reducing materials and labour costs and making the deployment process far quicker, easier and less prone to human error.
White Paper #263, ‘Analysis of How Data Center Pod Frames Reduce Cost and Accelerate IT Rack Deployments’, can be downloaded by visiting http://www.apc.com/uk/en/prod_docs/results.cfm?DocType=White+Paper&query_type=99&keyword=&wpnum=263
1 Statistics from Schneider Electric White Paper #263, ‘Analysis of How Data Center Pod Frames Reduce Cost and Accelerate IT Rack Deployments’.
By Robbert Hoeffnagel, European Representative for Open Compute Project (OCP)
The modern, often so-called “digital” enterprise depends on a constant flow of new business propositions to keep up with the competition. These initiatives are increasingly based on a never-ending stream of technical innovations. Don’t expect traditional hardware suppliers to be able to keep up with the speed a modern enterprise likes and, to be honest, needs. Only open source hardware projects like the Open Compute Project (OCP), where hundreds of companies – vendors, system integrators and users alike – work together, can come up with the smart innovations that a modern data centre needs. Why? Here are three reasons why open source is the future of data centre hardware.
But first: what exactly is open source hardware? This is what we can learn from Wikipedia:
Open source hardware (OSH) consists of physical artifacts of technology designed and offered by the open design movement. Both free and open-source software (FOSS) and open source hardware are created by this open source culture movement and apply a like concept to a variety of components. It is sometimes, thus, referred to as FOSH (free and open source hardware). The term usually means that information about the hardware is easily discerned so that others can make it – coupling it closely to the maker movement. Hardware design (i.e. mechanical drawings, schematics, bills of material, PCB layout data, HDL source code and integrated circuit layout data), in addition to the software that drives the hardware, are all released under free/libre terms. The original sharer gains feedback and potentially improvements on the design from the FOSH community.
Although open source hardware may not be as well-known as open source software and its most successful project – Linux – we can hardly call the open hardware movement new. In 1997 Bruce Perens was already working on a project he called the Open Hardware Certification Program. And popular open source hardware projects like Arduino aren’t exactly brand-new either.
While the OHCP program was not in itself very successful, it did give birth to the idea of an open hardware movement consisting of a variety of sometimes competing companies and other parties that were able to work closely together and come up with tools and methods to help guarantee that their products were interoperable. The Arduino board – originally designed by the Interaction Design Institute Ivrea in Italy – has proven that an open hardware product can be very disruptive and can even become one of the major building blocks of a completely new trend: in this case, the maker movement.
Open source in the data centre
In other words: open source hardware has already proven to be very important. But can it also be important to the data centre?
That is an interesting question, because on the one hand we see a lot of innovation and change happening in the data centre – mostly at the IT layer – but on the other hand there is also a sometimes very traditional approach. When we look at how enterprise data centres and traditional colocation facilities have been built, and in many cases are still being built, we do not see a lot of innovation. Yes, we try to squeeze more hardware capacity into a rack, but the way we design the overall data centre infrastructure is often very much the same as we did it 5, 10 or even 20 years ago.
Facility and IT
No doubt one of the stumbling blocks here is the infamous gap between facility and IT. It is hardly an exaggeration to state that the facility guys come up with a building, put in power, cooling and some form of structured cabling, and order a certain number of racks. Then they call the IT department and tell them that there is new data centre capacity available, and could they please fill it up with the IT hardware they need to run their applications.
That approach worked well as long as organizations felt IT was just another service supporting the business. No one really gave any thought to squeezing the last bit of performance out of those systems. Nor did they try to reduce the energy bill, because that was not where IT could make a difference or impact.
Things started to change with the hyperscalers and other large-scale users of data centres and IT hardware. All of a sudden, IT was no longer a support service, and thus no longer a cost centre. To them, digital processes, and consequently IT, are the business.
These companies were investing billions of dollars in data centre capacity, and soon they realized that the off-the-shelf hardware – and software as well, by the way – they would buy from the same vendors as everyone else simply wasn’t good enough. It was never optimized for their specific tasks, it used much more energy than necessary, it required a lot of IT personnel to keep it running – the list goes on.
Do your own R&D
Maybe they were not the first to realize that traditional IT hardware was not good enough for the services they were building. But because of their scale they were able to do their own R&D and innovate where the typical manager of an enterprise data centre or colo cannot. So these hyperscalers decided to design their own hardware. And with a lot of success.
But then an idea came up: what would happen if they published those hardware designs? And invited other companies to help improve these designs and maybe even use them in their own product development and innovation projects? Would that help to increase the speed of innovation in the data centre space?
That, in a nutshell, is how one of the most successful open source hardware projects – the Open Compute Project – was born. And the answer to that question about the speed of innovation is: absolutely, yes, very much so.
Three reasons why
So why is that? Why is an open source hardware movement like the Open Compute Project (OCP) able to increase the much-needed speed of innovation in the data centre?
I think there are three reasons:
OCP Summit in Amsterdam
One of the remarkable characteristics of an open source project like OCP is the incredible speed at which developments take place. Trying to keep up with all the technological innovations and other developments can sometimes be a challenge.
That is why the Open Compute Project is organizing its first regional summit, in Amsterdam on October 1-2. These two days will be packed with presentations by both users and vendors, and quite a few OCP projects will present updates on the R&D and innovations they are working on. Besides that, there will be an exhibition floor where you can meet the many companies that have developed products and services based on, or inspired by, OCP designs.
More info is available here: http://opencompute.org/events/regional-summit/.
By Cindy Rose, Chief Executive of Microsoft UK
It is easy to take technology for granted. People can work from anywhere, collaborate like never before and share ideas in new and exciting ways. An entire generation is growing up with the world literally at their fingertips; every piece of information ever discovered is just a click away on their mobile phone.
Spencer Fowers, senior member of technical staff for Microsoft’s special projects research group, prepares Project Natick’s Northern Isles datacenter for deployment off the coast of the Orkney Islands in Scotland. The datacenter is secured to a ballast-filled triangular base that rests on the seafloor. Photo by Scott Eklund/Red Box Pictures.
As our reliance on technology increases, so do the demands we place on it. This is why I am excited by the news today that a Microsoft research project is pushing the boundaries of what can be achieved, to ensure that we empower every person and organisation to achieve more. I am lucky to be able to work with some of the world’s brightest minds, and be surrounded by cutting-edge technology. However, I’m still amazed by what our staff’s passion and creativity can lead to.
Microsoft’s Project Natick team gathers on a barge tied up to a dock in Scotland’s Orkney Islands in preparation to deploy the Northern Isles datacenter on the seafloor. Pictured from left to right are Mike Shepperd, senior R&D engineer; Sam Ogden, senior software engineer; Spencer Fowers, senior member of technical staff; Eric Peterson, researcher; and Ben Cutler, project manager. Photo by Scott Eklund/Red Box Pictures.
Project Natick is one such example. Microsoft is exploring the idea that data centres – essentially the backbone of the internet – can be based on the sea floor. Phase 2 of this research project has just begun in the Orkney Islands, where a more eco-friendly data centre was lowered into the water. The shipping-container-sized prototype, which will be left in the sea for a set period of time before being recovered, can hold data and process information for up to five years without maintenance. Despite being as powerful as several thousand high-end consumer PCs, the data centre uses minimal energy, as it’s naturally cooled.
Windmills are part of the landscape in the Orkney Islands, where renewable energy technologies generate 100 percent of the electricity supplied to the islands’ 10,000 residents. A cable from the Orkney Island grid also supplies electricity to Microsoft’s Northern Isles datacenter deployed off the coast, where experimental tidal turbines and wave energy converters generate electricity from the movement of seawater. Photo by Scott Eklund/Red Box Pictures.
It is powered by renewable energy from the European Marine Energy Centre’s tidal turbines and wave energy converters, which generate electricity from the movement of the sea. Creating solutions that are sustainable is critical for Microsoft, and Project Natick is a step towards our vision of data centres with their own sustainable power supply. It builds on environmental promises Microsoft has made, including a $50m pledge to use AI to help protect the planet.
Almost half of the world’s population lives near large bodies of water. Having data centres closer to billions of people using the internet will ensure faster and smoother web browsing, video streaming and gaming, while businesses can enjoy AI-driven technologies.
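As a rough illustration of why proximity matters, signal propagation alone puts a floor on latency. The sketch below assumes signals travel at roughly two-thirds of the speed of light in optical fibre and ignores routing, queuing and processing delays, which dominate in practice; the numbers are illustrative, not Microsoft figures:

```python
# Round-trip propagation delay as a function of distance to the data centre.
# Light in fibre covers roughly 200 km per millisecond (~2/3 of c).
SPEED_IN_FIBRE_KM_PER_MS = 200

def round_trip_ms(distance_km):
    """Best-case round-trip time in milliseconds, propagation only."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for km in (50, 500, 5000):
    print(f"{km:>5} km away: ~{round_trip_ms(km):.1f} ms round trip")
```

Even in this best case, a data centre 5,000 km away adds tens of milliseconds to every round trip, which is why siting capacity near coastal population centres can noticeably improve streaming and gaming.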
Project Natick’s Northern Isles datacenter is partially submerged and cradled by winches and cranes between the pontoons of an industrial catamaran-like gantry barge. At the deployment site, a cable containing fiber optic and power wiring was attached to the Microsoft datacenter, and then the datacenter and cable were lowered foot by foot 117 feet to the seafloor. Photo by Scott Eklund/Red Box Pictures.
I often hear of exciting research projects taking place at our headquarters in Redmond and other locations in the US, so I’m delighted this venture is taking place in the UK. It sends a message that Microsoft understands this country is at the cutting-edge of technology, a leader in cloud computing, artificial intelligence and machine learning. It’s a view I see reflected in every chief executive, consumer and politician I meet; the UK is ready for the Fourth Industrial Revolution and the benefits that it will bring.
The support from the Scottish government for Project Natick reflects this. Paul Wheelhouse, Energy Minister, said: “With our supportive policy environment, skilled supply chain, and our renewable energy resources and expertise, Scotland is the ideal place to invest in projects such as this. This development is, clearly, especially welcome news also for the local economy in Orkney and a boost to the low carbon cluster there. It helps to strengthen Scotland’s position as a champion of the new ideas and innovation that will shape the future.”
Engineers slide racks of Microsoft servers and associated cooling system infrastructure into Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. The datacenter has about the same dimensions as a 40-foot long ISO shipping container seen on ships, trains and trucks. Photo by Frank Betermin.
I’m proud that some of the first milestones achieved by Project Natick will occur in UK waters, and hope that the work being done in the Orkney Islands will be replicated in similar data centres in other locations in the future.
Peter Lee, corporate vice-president of Microsoft AI and Research, said Project Natick’s demands are “crazy”, but these are the lengths our company is going to in order to make potentially revolutionary ideas a reality.
Only by demanding more of ourselves as a technology company will we meet the demands of our customers.
Managed services providers (MSPs) must see the wider picture in order to provide the strategic vision that resonates with customers. That strategic partner status is a vital part of the relationship but needs nurturing as part of that all-encompassing vision.
It is clear from research by Gartner and others that MSPs globally face a multi-faceted struggle to grow their businesses. On the one hand, there is competition from public cloud players such as AWS, Google and Microsoft, which set pricing levels and have reach but are unable to customise or tailor their offerings to specific verticals. The bulk of MSPs are much smaller and more specialist in the technologies and markets they cover, but need scale to build their profitability and have limited resources.
There is a clear move to consolidate among larger players, with Gartner saying that, compared to 2017, the entry criteria have become much harder and more stringent. The focus has squarely shifted to hyperscale infrastructure providers, it says, and this has resulted in it dropping more than 14 vendors from its top players list. According to Gartner, there are no visionaries and challengers left in the market; only a handful of leaders and niche players are driving the momentum.
At the other end of the market, among smaller players, the pace of competition has stepped up and they are feeling a major pressure to differentiate, either on skills, markets covered, geographical coverage or in customer relations.
This is a common feature of the MSP market on a global scale. The answer is always to build and then demonstrate expertise and understanding in the marketplace. As Gartner's research director Mark Paine told April’s European Managed Services Summit: “The key to a successful and differentiated business is to give customers what they want by helping them (the customer) buy”.
One way to win more customers is by showing them their place in the future, according to Jim Bowes, CEO and founder of digital agency Manifesto, and Robert Belgrave, chief executive of digital agency hosting specialist Wirehive. The two experts will be covering the marketing aspects at the UK Managed Services & Hosting Summit in London on September 29th. Agenda here: http://www.mshsummit.com/agenda.php
They draw on a wider experience, arguing that the managed services and hosting industry isn’t the only one having to undergo rapid adjustments due to technological advances and changing customer expectations. Customers are experiencing the same disorientation, and need help figuring out how their IT infrastructure needs to evolve over the next five to ten years. Which means it’s time to ditch the old marketing models built on email lists and dry whitepapers. It’s time to get agile, personalised and creative, they will say.
A key part of the event will also be hearing from the experiences of MSPs themselves and looking at established winning ideas. MSPs already confirmed will relate their stories on business-building including how they use managed security services, how MSPs can position security without terrifying the customer, and how they work with customers to keep their lights on.
Now in its eighth year, the UK Managed Services & Hosting Summit will bring together leading hardware and software vendors, hosting providers, telecommunications companies, mobile operators and web services providers involved in managed services and hosting, with managed service providers (MSPs), resellers, integrators and service providers migrating to, or developing, their own managed services portfolios and sales of hosted solutions.
It is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships aimed at supporting sales. Building on the success of previous managed services and hosting events, the summit will feature a high-level conference programme exploring the impact of new business models and the changing role of information technology within modern businesses.
You can find further information at: www.mshsummit.com
When data centre superpower Next Generation Data (NGD) began the 100,000 square foot, ground-floor expansion of its high-security data centre in Newport, it relied on trusted supply chain partners Stulz UK and Transtherm Cooling Industries to deliver a substantial package of temperature management and plant cooling technology.
Renowned for its industry leading 16-week build-out programmes, NGD, and funding partner Infravia Capital Partners, demand not only quality systems in the construction of their sites, but product design and logistical flexibility from their supply chain partners, in order to maintain such strict project timescales.
With an ultimate capacity of over 22,000 racks and 750,000 square feet, NGD’s Tier 3+ data centre is the biggest data centre in Europe and serves some of the world’s leading companies, including global telecommunications provider BT and computer manufacturer IBM.
Known for providing large organisations with bespoke data halls constructed to the highest standards, NGD’s South Wales campus is one of the most efficient data centres in Europe, with impressively low Power Usage Effectiveness (PUE) ratings – the data-centre-specific metric that compares total facility power consumption with the power delivered to the IT equipment itself.
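For readers unfamiliar with the metric, PUE is simply a ratio, with 1.0 as the theoretical ideal (every watt entering the facility reaches the IT load). The figures below are illustrative examples, not NGD’s actual numbers:

```python
# PUE = total facility power / IT equipment power.
# The difference between the two is the overhead spent on cooling,
# power distribution losses, lighting and so on.
def pue(total_facility_kw, it_equipment_kw):
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,200 kW in total to serve a 1,000 kW IT load:
print(round(pue(1200, 1000), 2))  # prints 1.2
```

A lower PUE means less of the electricity bill is spent on overhead rather than computing, which is why efficient cooling choices such as those described here matter so much at campus scale.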
Having secured additional contracts with a number of Fortune 500 companies worth £125 million over the next five years, NGD has developed the capacity to respond rapidly in delivering the private and shared campus space required to fulfil the exacting needs of its world-class customers.
For this particular expansion project, NGD specified 114 data centre specific GE Hybrid Cooling Systems from leading manufacturer Stulz UK, plus a combination of 26 high performance horizontal and VEE air blast coolers and pump sets from industrial cooling technology specialist, Transtherm, to manage the inside air temperature of the new campus expansion.
Phil Smith, NGD’s Construction Director, comments on the company’s industry-leading build-out timescales:
“Responding to global market opportunities is an important part of data centre best practice standards and our 16-week build-out programme allows us to lead from the helm when it comes to meeting demand, on time.
“Completing a build of such scale and complexity within just four months requires more than 500 construction workers to be permanently on site. To keep things moving at the right pace, suppliers are required to adjust the design and build of their products in accordance with the build schedule and deliver them just in time for installation, to prevent costly delays to NGD and our local contracting firms. The solution provided by Stulz and Transtherm is a great example of how data centres can work with trusted, reliable and dedicated supply chain partners.”
A three-part delivery solution
As long-term suppliers to NGD, both Stulz UK and Transtherm understood the importance of just-in-time deliveries so that the new air conditioning system did not impact the build speed on site. With a usual lead time of eight weeks for its GE Hybrid technology, Stulz UK set about devising suitable production alterations which would enable it to deliver its equipment within NGD’s rapid build programme.
Mark Vojkovic, Sales Manager for Stulz UK, explains:
“Specified for installation into the floor of the new campus expansion, we altered the manufacturing process of our GE hybrid units to enable us to deliver the technology in two halves. First to be delivered were the fan bases, which were installed onto their stands during the earlier stages of the build, just in time for the construction of the suspended floor. Later in the build programme, between weeks 10 and 12, Stulz UK delivered the upper coil sections of the air conditioning units and Transtherm delivered, installed and commissioned their equipment on the outside of the building.”
Tim Bound, Director for Transtherm added:
“Supplying a data centre superpower like NGD requires a reliable and creative supply chain solution which can not only work in tandem to deliver the most efficient product packages, but also communicate effectively to deliver products from multiple manufacturing sites ‘just-in-time’ in order to maintain their industry leading build-out times. It’s vital on projects of this size that manufacturing partners can see the bigger picture and adjust their own project parameters to suit.
“In this instance, NGD had 500 construction workers on site each day, working to an industry leading deadline. It was imperative that Stulz UK and Transtherm were appreciative of the on-site complexities so that we could deliver and install our plant with minimal disruption.
“This project is a real testament to how Stulz UK and Transtherm can combine their technologies, engineering know-how and logistical capacity to deliver a substantial project, within potentially restrictive time and installation constraints.”
The technology in focus
The Stulz GE system utilises outdoor air for free-cooling in cooler months when the outside ambient air temperature is below 20°C, with indirect transfer via glycol water solution maintaining the vapour seal integrity of the data centre.
The indoor unit has two cooling components: a direct expansion (DX) cooling coil and a free cooling coil. In warmer months, when the external ambient temperature is above 20°C, the system operates as a water-cooled DX system and the refrigeration compressor rejects heat into the water via a plate heat exchanger (PHX) condenser. The water is pumped to the Transtherm air blast cooler where it is cooled, and the heat rejected to air.
In cooler months, below 20°C external ambient temperature, the system automatically switches to free-cooling mode, in which the dry cooler fans run and cool the water to approximately 5°C above ambient temperature before it is pumped through the free cooling coil. In these cooler months, dependent on water temperature and/or heat-load demands, the water can be used in “mixed mode”. In this mode the water is directed through both proportionally controlled valves, enabling proportional free cooling and water-cooled DX cooling to work together.
Crucially, 25% ethylene glycol is added to the water purely as an antifreeze, to prevent the dry cooler from freezing when the outdoor ambient temperature is below zero.
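The mode switching described above can be sketched as a simple control rule. This is an illustrative simplification, not Stulz’s actual control logic; in particular, the exact condition under which mixed mode engages is an assumption here (free cooling alone cannot reach the required water temperature):

```python
def select_cooling_mode(ambient_c: float, water_setpoint_c: float) -> str:
    """Pick a cooling mode from ambient temperature and required water temp.

    Simplified sketch of the behaviour described in the text:
    - at or above 20 degC ambient, water-cooled DX only;
    - below 20 degC, free cooling if the dry coolers (which yield water
      roughly 5 degC above ambient) can meet the setpoint on their own;
    - otherwise mixed mode, blending free cooling with DX top-up
      (assumed engagement condition, not from the source).
    """
    FREE_COOLING_LIMIT_C = 20.0
    if ambient_c >= FREE_COOLING_LIMIT_C:
        return "dx"            # compressor heat rejected via the PHX condenser
    achievable_water_c = ambient_c + 5.0   # dry coolers approach ambient + 5 degC
    if achievable_water_c <= water_setpoint_c:
        return "free_cooling"  # dry cooler fans alone cool the water
    return "mixed"             # proportional valves blend both circuits
```

For a 15°C water setpoint, a 25°C day forces DX mode, an 8°C day allows full free cooling (8 + 5 = 13°C), and a 12°C day falls into mixed mode (12 + 5 = 17°C, above the setpoint).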
A partnership approach
Stulz UK has specified Transtherm’s leading air blast cooling technology as part of its packaged air-conditioning solution for around 10 years.
Mark Vojkovic continues: “Transtherm’s air blast coolers and pump sets complete our data centre offering by fulfilling our requirement for outside plant which is efficient, reliable and manufactured to the highest standard. Their engineering knowledge is unparalleled and the attention to detail and design flexibility they apply to every project matches the service we always aspire to give our customers.”
Situated around the periphery of the building and on its gantries, Transtherm’s 26 VEE air blast coolers are fitted with ErP Directive-ready fans and deliver significant noise reduction when compared to other market-leading alternatives, in accordance with BS EN 13487:2003.
Tim Bound concludes:
“Stulz UK and Transtherm are proud to be part of NGD’s continued expansion plans as they move at speed to meet market opportunities within their sector. This most recent expansion showcases the many benefits of established supply chain relationships, especially on campus build-outs which need to deliver unrivalled quality on the tightest deadlines.”
Since it was founded in 1947, the STULZ company has evolved into one of the world’s leading suppliers of air conditioning technology. With the manufacture of precision air conditioning units and chillers, the sale of air conditioning and humidifying systems and service and facility management, this division of the STULZ Group achieved sales of around 420 million euros in 2016. Since 1974 the Group has seen continual international expansion of its air conditioning business, specializing in air conditioning for data centers and telecommunications installations. STULZ employs 2,300 workers at ten production sites (two in Germany, in Italy, the U.S., two in China, Brazil, and India) and nineteen sales companies (in Germany, France, Italy, the United Kingdom, the Netherlands, Mexico, Austria, Belgium, New Zealand, Poland, Brazil, Spain, China, India, Indonesia, Singapore, South Africa, Australia, and the U.S.). The company also cooperates with sales and service partners in over 135 other countries, and therefore boasts an international network of air conditioning specialists. The STULZ Group employs around 6,700 people worldwide. Current annual sales are around 1,200 million euros.
For more information visit www.stulz.co.uk
For further press information please contact:
Debby Freeman on: +44 (0) 7778 923331
Every organisation faces an imperative to transform its operations and business model in order to compete in highly dynamic digital business environments. For many, this transformation must be underpinned by robust information systems that can adequately support future growth. However, many corporate networks are incapable of supporting additional capacity; indeed, IDG research suggests that upgrading existing wide-area networks (WANs) remains a priority for CIOs and IT professionals. By Paul Ruelas, Director of Product Management at Masergy.
More and more companies are embracing Software-Defined WAN (SD-WAN) and as the digital evolution of business exponentially accelerates, there have been profound impacts on innovation and the role of the CIO. As decision-makers look to improve both the efficiency and effectiveness of their overall IT environment, it’s important to explore changes in IT delivery and the benefits that SD-WAN can bring.
CIOs increasingly face complex connectivity challenges, including growing numbers of devices, locations and users. This can lead to mounting costs, as well as a need for improved security for employees, whether they are working in geographically dispersed offices or remotely. Combined, these challenges illustrate the need to move beyond legacy WAN administration and cost structures and to consider the potential benefits of adopting a managed SD-WAN service.
With many employees adopting new working practices, the workforce of today has become increasingly reliant on innovative new applications that utilise more data and require robust application response and service levels. As new applications and workloads continue to evolve and drive digital business they also continue to strain the WAN. It would be unfeasible for every workload or application to be treated as a high-priority on the network. This leaves network managers/administrators and IT staff carefully balancing service levels and costs.
As overall IT budgets flatline, CIOs are increasingly expected to do more with less, all without dropping the ball on existing infrastructure and investments that must be seen through to the end of their contracts. Harnessing managed services partners with agile and scalable solution platforms is key to focusing scarce IT resources on the things that really differentiate the company. Additionally, SD-WAN makes technology more accessible, instantly programmable, and linkable through powerful integration tools. Hybrid networking and software-defined platforms help CIOs build a blended approach, augmenting legacy systems with modern capabilities. SD-WAN provides far greater agility, as well as the ability to match network capabilities to each application’s needs. This can be done by introducing features such as application-based routing, which builds intelligence into the network to understand the various applications and their particular bandwidth requirements - one of several benefits for CIOs looking to meet evolving business needs and support digital transformation efforts.
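Application-based routing of this kind can be illustrated with a toy policy table. This is a hypothetical sketch, not Masergy’s implementation: the path characteristics, application classes and thresholds below are all invented for illustration.

```python
# Hypothetical WAN paths with current latency and spare capacity.
PATHS = [
    {"name": "mpls",      "latency_ms": 20, "headroom_mbps": 50},
    {"name": "broadband", "latency_ms": 45, "headroom_mbps": 400},
    {"name": "lte",       "latency_ms": 80, "headroom_mbps": 30},
]

# Per-application requirements: what each traffic class can tolerate.
APP_POLICY = {
    "voip":   {"max_latency_ms": 30,  "min_mbps": 1},
    "backup": {"max_latency_ms": 200, "min_mbps": 100},
    "email":  {"max_latency_ms": 150, "min_mbps": 5},
}

def route(app: str) -> str:
    """Return the first path that satisfies the application's policy."""
    req = APP_POLICY[app]
    for path in PATHS:
        if (path["latency_ms"] <= req["max_latency_ms"]
                and path["headroom_mbps"] >= req["min_mbps"]):
            return path["name"]
    return "best_effort"  # no path meets policy; fall back

print(route("voip"))    # mpls - latency-sensitive traffic takes the premium link
print(route("backup"))  # broadband - bulk transfer needs bandwidth, not low latency
```

The point of the sketch is that the network, not the administrator, matches each application class to the path that meets its requirements.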
Meeting key challenges
The pressures on the corporate WAN are increasing dramatically as organisations deploy more demanding applications and workloads on their digital transformation journey. IT professionals recognise that the existing WAN cannot meet these requirements so are looking to solutions such as SD-WAN because it offers the agility, control, and efficiency essential for delivering a platform that can support a dynamic organisation.
With the rapid rate of change, CIOs are taking the brunt of the impact. To succeed, they should continue to focus on their core competencies. Technologies like SD-WAN can help to build responsiveness and agility into the business’s infrastructure. For instance, SDN can be used to quickly and easily create experimental environments for testing disruptive new technologies, driving down time-to-value and time-to-market.
Energy costs, reliability and air conditioning are all important criteria for choosing an external data center. But today’s key considerations also include efficient process implementation and, above all, fast and secure access to global cloud and application providers. Private, public and hybrid cloud solutions demand a hacker-protected connection with fast response times and colocation offers the ideal platform for this, enabling providers to act as a digital marketplace for the implementation of IT concepts. By Volker Ludwig, Senior Vice President Sales at e-shelter in Frankfurt.
Due to the increasing digitisation of our lives, the amount of data being stored in data centers will continue to increase dramatically. According to IDC's 2017 study, Big Data and the Internet of Things will reach a data volume of approximately 163 zettabytes (ZB) by 2025. For comparison: In 2016, only one-tenth of this data volume (16ZB) was produced. In addition, the same IDC study predicted that this generation of data will increasingly shift to the enterprise sector. But many enterprise data centers will not grow to the same extent. Data centers must evolve and grow in line with this data growth.
From colocation to the marketplace
Colocation today is about much more than just putting racks together - the classic colocation model is changing. Activities range from optimising the physical footprint to providing redundant power supplies and connectivity services. The increasing pressure on companies to provide their users with modern applications, some of which run independently in public clouds, presents CIOs with completely new challenges. In many cases an unplanned multi-cloud approach has already developed in order to meet users’ needs, but this can be complex and uneconomical, especially when departments go rogue and start independently purchasing services from third-party providers without the IT team being involved.
Data centers (DCs) now have the opportunity to move into a new role. They can offer companies an ecosystem in which to choose the optimal solution, just like in a marketplace. The colocation DC takes on the central role of advising the customer: for example, customers may want access to various cloud providers - such as AWS, Microsoft, Google, Alibaba or SoftLayer - enabling them to tailor their multi- or hybrid-cloud strategy flexibly, inexpensively and across multiple providers. This variety also lets them implement platforms in an environment that ensures the compliance of operating systems and reduces operational risk through a secure infrastructure and service level agreements.
Scalability and redundancy
Outsourcing parts of the DC provides more scope for growth. Colocation vendors are continually expanding their DC capacity in distributed locations to meet global data growth and allow users to tailor their capacity to their current needs. For customers, growth options are essential. This ensures that they can flexibly book capacity based on their specific business needs and, if necessary, terminate it or distribute it across any number of DCs in the provider’s network.
This enables a wide range of redundancy concepts with outsourced server capacities sent to mirrored data centers with separate paths and supply networks. Connections can be made via any carrier and even over several network providers with the necessary service level agreements and users can also choose between different service providers.
Flexibility and choice in a protected environment
In the modern colocation center, users have access to numerous partners with different expertise on site, which they can access flexibly: from various carriers and cloud providers, to system integrators. A technology change from pure colocation to the use of cloud solutions is made easy for users, who can easily access interesting solutions from a wide range of services. The Colocation DC in its function as a marketplace also gives users access to a large selection of potential partners offering comparable services. This increased competitive pressure in turn has a positive effect on quality and affordability.
Technically, the spatial proximity aids the realisation of hybrid cloud solutions. Since the applications are only separated by one cross-connect, latencies are minimal. This allows a seamless transition from a traditional or private cloud application on a colocation-hosted server to a provider’s public cloud application in the same data center. In this way, peak application loads can be absorbed by the public cloud, bringing enterprise IT teams a high degree of flexibility and agility.
Partner networks can be leveraged to offer customers a variety of solutions and services for planning, implementing, migrating and managing their hybrid cloud environments. Concrete solutions can be used to manage the hybrid cloud environment, dynamically managing resources and applications. And with the increasing use of public cloud offerings, bandwidth requirements and low latency can be realised through direct access to the cloud solutions.
The future is getting more diverse
In colocation DCs, customers and cloud providers operate a wide variety of applications, each with very specific requirements. Developments such as Industry 4.0 and the Internet of Things are raising the importance of issues such as real-time capability and distributed data centers.
With the growing flood of data impacting our world, the diverse range of applications from the cloud, and the evolution of innovative applications - such as those enabled by 5G - companies will increasingly outsource data center capacity, including a greater number of applications.
Specialist colocation data centers need to take on a more advisory role, consulting customers on the planning, management and maintenance of infrastructure and advising on the selection of suitable partners, to help their customers grow and thrive.
IT enterprises are seeking to outsource non-strategic functions while enabling more flexible, scalable, and cost-efficient infrastructure. Although the public cloud is grabbing the majority of media attention, other market offerings are quietly transforming IT operations as well. By Chris Adams, President and Chief Operating Officer at Park Place Technologies.
Data Centre-as-a-Service (DCaaS) is among the largely under-the-radar changes in the Infrastructure & Operations field. DCaaS refers to offsite data centre facilities leased to clients. Customers access physical space and security, rack installations, bandwidth, power and cooling, and in many cases storage, server, networking and other hardware from the DCaaS provider. The DCaaS concept has its roots in data centre colocation, but the term is generally applied to the most recent incarnation, designed to support the on-demand resource-provisioning requirements of agile business.
Whereas colocation was primarily sold as an answer to capacity problems—when expansion of an on-premises data centre was not practical for real estate, technology, cost, or timing reasons—DCaaS has attracted businesses of all sizes for a wide variety of strategic purposes. IT leaders stuck on the horns of a dilemma, between public cloud and on-premises data centres, may find a lot to like in DCaaS.
When DCaaS wins over IaaS/PaaS
In the Anything-as-a-Service (XaaS) sphere, Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) have received their fair share of coverage. Inherently cloud-based, these technologies have helped enterprises shift the IT burden off internal data centres, staff, and the capital expense (CapEx) sheet.
The problem, however, is that IaaS and PaaS aren’t appropriate for all enterprise technology needs. As Gartner has pointed out, most companies are pursuing a hybrid IT model, one in which public cloud combines with privately owned and/or controlled assets. The latter approach remains the go-to for many applications involving sensitive data, strict compliance requirements, or business-differentiating solutions. In these cases, the shared tenancy and black box technology of the public cloud frequently cannot provide the desired ability to directly select and “own” the environment.
The rapid uptake and broad applicability of the public cloud notwithstanding, the industry is finding that bare metal and IT-provisioned hardware assets still have their place—often at the centre of the enterprise’s core business needs.
The many strategic applications of DCaaS
At the most basic level, DCaaS offers a degree of control similar to that of on-premises data centres, without the costly and time-consuming build-outs. Early colocation solutions thus focused on the leased space as easy-to-startup capacity, but DCaaS has uncovered a wide range of unfilled strategic needs it can answer.
IT organisations are looking to DCaaS to advance a wide range of strategic goals.
With so many strategic advantages to DCaaS, it’s a wonder more IT organisations haven’t shifted in this direction. The reason? The forces of complexity work against all but the best planned DCaaS implementations.
Enter the chimera
The fact is, DCaaS cannot usually replace other XaaS services, whether those include salesforce.com for the business development team, Google machine learning tools for the marketing gurus, or virtual machines from Amazon Web Services for the DevOps folks. Most DCaaS use cases will combine with public cloud and on-premises IT within a complex enterprise IT ecosystem.
The move to hybrid IT—using the right environment for the business need—has created a multi-headed beast. Whereas the Chimera of Greek mythology combined lion, goat, and snake into a fire-breathing monster, today’s enterprise IT mixes traditional, cloud, and colocation options into a new species of IT that can be difficult for data centre managers and CIOs to get a handle on.
There are ways to synergistically align DCaaS with other elements of the overarching IT architecture. This is an area where I spend a lot of time assisting clients. Most often, they have an on-premises data centre (or several), which they may be keeping, consolidating, or dismantling. They may be accustomed to colocation or totally new to everything about DCaaS. And more than likely, they’ve got various applications—often hundreds of them—running in the cloud.
We help clients to address the three M’s of their DCaaS implementation.
Third party maintenance is just one example of how to find simplicity in a hybrid environment. As enterprises spread their assets and applications over a more diverse landscape, unifying any possible functions under the auspices of a high-quality outsourcing provider will be ever-more important in maximising uptime, relieving internal staff, enabling focus on strategic objectives, and completing the digital transformation on which business success relies.
How VIRTUS Data Centres helped Imperial College on their digital journey.
By Darren Watkins, managing director for VIRTUS.
The idea of digital disruption isn’t new. Virtual businesses - we now have taxi companies without taxis and hotel chains with no hotels - have fundamentally changed the competitive landscape. And as consumers, we can do everything online - shop, make friends, go to school and even find love.
However, whilst commercial businesses are nearly all digitally savvy, it could be argued that organisations in the education sector are lagging behind. The structure and processes of many institutions have remained largely unaltered: we still have debates about the legitimacy of mobile learning, and inflexible exam systems are seen as the ultimate measures of success. But there are some trailblazers who have brought their universities up to speed. Imperial College is one such institution, seizing the opportunity to use technology to dramatically enhance teaching and learning and deliver value to digitally savvy students.
Imperial College’s challenge
Imperial College London is a world-class university with a mission to benefit society through excellence in science, engineering, medicine and business. Delivering learning in fields that are powered by technology means that tech is, of course, firmly embedded in Imperial College’s DNA. In addition to the specialist technology used by teachers and students in their field of study, institution wide systems (such as virtual learning environments) help the college to cater for a diverse, changing and tech-led student base.
However, this commitment to digitally powered pedagogy means that Imperial College London is reliant on technology for its employees and students to function on a daily basis. It’s no exaggeration to say that ensuring its technology provision is resilient and future-proofed is make or break for the university.
Paul Jennings, head of ICT service operations at Imperial College London, tells us that when he joined in 2014, the College had underinvested in a vital piece of infrastructure - its data centre. “Firstly, both of our data centre facilities were situated on the same campus,” he says. “This gave us an obvious single point of failure”.
For Imperial this was a valid, and pressing, concern. The university had already suffered a number of power outages, cooling and UPS failures, and even water damage from building work to its on-premises data centre - incidents which didn’t just cause a temporary loss of service but had significant knock-on effects on productivity and research.
So, with Imperial’s core IT and Research capability at stake, these incidents proved that the data centre really did sit at the heart of their organisation: vitally important to ensuring research, teaching and learning - and even relationships with suppliers and customers - could run smoothly.
Towards a solution
Looking to overhaul its data centre strategy, Imperial faced the perennial build vs buy dilemma. Did it make more sense to invest in an in-house data centre, outsource infrastructure requirements - or even retrofit a previous system?
For Imperial, the idea of outsourcing to a colocation provider gave the best protection against increasing data centre complexity, cost and risk. But perhaps most crucially the colocation option addresses reliability concerns, helping increase availability and improve disaster recovery as well as support business continuity strategies. Simply, when outsourcing with multiple connectivity options, the potential for carrier failure is reduced, protecting critical applications and infrastructure performance. Additionally - if disaster does strike - it’s these companies’ business to get you up and running again as quickly as possible. And, with resilience such a crucial tenet of success, the expertise of dedicated providers was compelling.
Lastly was the issue of power. In fields like medical research, huge computing power is required. Computing models that can accelerate deep learning to the point where results are delivered quickly enough to be useful require High Performance Computing (HPC) strategies - which must be supported by super-reliable, superfast, infrastructure that is only available from the latest next-generation of data centre providers.
So, to enhance availability, resilience and security and, importantly, to provide expansion for future growth, Imperial chose VIRTUS Data Centres to power its Data Centre Relocation project. The three-year programme will move the college’s data centre provision to new state-of-the-art facilities in Slough. The Slough facility, LONDON4, offers scalable services designed to support high-density computing, along with the flexibility, resilience, scalability and reliability that Imperial needs. It also provides simple and resilient access to public clouds - an ideal location for the hybrid cloud solution that Imperial is moving towards, with fast and reliable access.
The VIRTUS LONDON4 facility is part of a shared research institution framework agreement, brokered through Jisc, which ensured value for money. By using a data centre contracted through a higher education framework, Imperial College is co-located with 23 university and research tenants including The Francis Crick Institute, University College London, the University of Bristol and King’s College, affording opportunities for collaborative working. The data centre also has a direct connection to the JANET network to provide fast, low-cost connectivity for research collaboration.
Importantly, the partnership with VIRTUS has allowed Imperial to adopt an innovative approach to designing its architecture, built to include a multi zoned network - which separates its data centres on the network to vastly improve resilience.
Implementation and adoption
But, even with a digitally progressive team, IT overhauls can be daunting. A move away from on-premises infrastructure could have represented a loss of control for staff, and it was important that the team at Imperial recognised this was a cultural, not just a technical, challenge.
Crucial to the success of this project was ensuring that faculty knew the move was not an outsourcing of IT - that it was, more simply, “Imperial’s data centre, in a more resilient and secure facility”, as Paul Jennings puts it - and stressing the benefits to the institution and its students, as well as the positive impact on the research community. Crucially, the team at Imperial was able to link the project back to pedagogical goals and stress its role in providing a tech-first approach to delivering education, with quality and uptime assurances.
It’s this pedagogical goal which takes us back to the start of this article. Virtual businesses, operating out of a back bedroom, but with acres of digital infrastructure, are making the headlines. So, it’s crucial for any organisation to think about how any physical space they do have is used. And for Imperial, it was important that its Kensington location wasn’t used to store IT - effectively making it redundant space. Instead, by relocating its data centre provision, this space could be repurposed for teaching, research and learning - bringing immediate and significant benefits to students and staff alike.
The role of a chief data officer (CDO) has changed significantly over the past decade. Ten years ago, the job title was, more typically, data processing manager or head of data processing, and the job itself tended not to be recognised as a driver of added value. By Ken Mortensen, Data Protection Officer, Global Trust & Privacy, InterSystems.
Today, few senior decision-makers within technology businesses go more than a day without running up against privacy and security. The world has become so digitally interconnected that the role of the CDO has shifted from being about meeting operational requirements to performing the difficult balancing act between ensuring compliance and helping deliver the organisation’s strategic goals.
In line with this, it is crucial that CDOs avoid simply vetoing projects or putting a brake on them over compliance concerns. Instead, when dealing with any data request, they need first to understand the goals or objectives of the person making the request, and then try to facilitate it in a way that drives the operational goal while protecting privacy and safeguarding security.
It could be that a direct approach does not work, simply because the law prevents it, but there may still be another way to achieve the same result. For example, most global laws stipulate that certain types of information cannot be used without the express consent of the individual. Yet, there may be related types of information that can be used under the right circumstances.
In the healthcare space, it is not unusual to receive requests for records showing a patient’s date of birth for use in specific kinds of project. Typically, in such a case the CDO will ask the obvious question: “Why do you need this piece of information?” to which the answer will generally be: “Well we need to know how old somebody is.”
In reality, the date someone was born does not tell you that; it tells you how to calculate how old they are. The data you are really looking for is not the individual’s date of birth but their age. And in this context, age is less critical and less sensitive than date of birth.
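In code terms, the data minimisation step is simply deriving the age once and sharing only that. A minimal sketch (the function name is illustrative, not from any particular system):

```python
from datetime import date

def age_in_years(date_of_birth: date, on: date) -> int:
    """Derive age so that only the less sensitive value needs sharing."""
    # Has this year's birthday already passed on the reference date?
    had_birthday = (on.month, on.day) >= (date_of_birth.month, date_of_birth.day)
    return on.year - date_of_birth.year - (0 if had_birthday else 1)

# The project receives only the derived age, never the birth date itself:
print(age_in_years(date(1980, 6, 15), on=date(2018, 9, 1)))  # 38
```

The sensitive identifier stays with the data team; the requester gets exactly the attribute their project actually needs.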
This is a perfect example of how a CDO can help to answer the key question - is there another way to get to the objective on which we are focused, in this case how old someone is, without exposing sensitive data and potentially breaching compliance requirements?
Another common example is where someone from the business asks for the names and addresses of everyone in a given area of a town with a view to understanding how many people are buying a product from this area. Again, by looking at the objective rather than just the request, the CDO can discover that the person does not actually need (or truly want) any personally identifiable information. They don’t need to know who these individuals are, but simply how many of them bought the product.
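The same pattern applies here: answer the business question with an aggregate, so no personally identifiable information ever leaves the data team. A hypothetical sketch with invented records:

```python
# Hypothetical purchase records already held by the business.
purchases = [
    {"name": "A. Smith", "postcode_area": "SW1", "product": "widget"},
    {"name": "B. Jones", "postcode_area": "SW1", "product": "widget"},
    {"name": "C. Brown", "postcode_area": "N1",  "product": "widget"},
]

def buyers_in_area(records, area: str, product: str) -> int:
    """Return only a count - names and addresses stay with the data team."""
    return sum(1 for r in records
               if r["postcode_area"] == area and r["product"] == product)

print(buyers_in_area(purchases, "SW1", "widget"))  # 2
```

The requester learns how many people in the area bought the product, which is what they actually wanted, without receiving a single name or address.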
In line with this, the role of the CDO should never be simply to block progress, but rather to help people within the business understand what they are looking to achieve and whether there is another way of doing it that better meets compliance needs: defining the objectives in terms of the goals rather than the inputs.
The negotiating and consultancy element of this is important because many people within the organisation immediately focus on the data that they think they can access rather than looking first at their end goal and what they require to achieve it. That’s where the CDO can help de-conflict ostensibly opposing needs for compliance and business benefit - and ultimately still get to the same result.
Of course, there will inevitably be certain objectives that simply cannot be fulfilled for compliance reasons. But for the most part, if organisations are acting ethically and within the law, there will be a way for them to achieve their business goals with respect to data. It is key that CDOs look to facilitate this; they don’t want to be regarded as an impediment. After all, no CDO that is perceived to be preventing the organisation from achieving its goals will ever be successful.
At the end of the day, the CDO needs to understand what the organisation is trying to do and give it the ability to do so in a way that is not only compliant but also builds trust – not only between the organisation and its customers and consumers, but also in the CDO’s own role within the organisation. That way, they can get buy-in and be seen as part of the solution rather than an inhibitor of it.
We all experienced the infamous 3G to 4G mobile network transition. Although the International Telecommunication Union Radiocommunication Sector (ITU-R) decided upon the specifications and standards for 4G back in 2008, 4G networks took a very long time to materialize. We have seen a number of different names appear for networks that promise 4G data speeds, many of which deliver very different results to consumers. By Steven Carlini, Sr Director, Data Center Global Solutions, Schneider Electric.
Winston Churchill said it best: “Those who fail to learn from history are doomed to repeat it.”
The problem with creating standards is twofold. Firstly, the standards aren’t strictly enforceable as the ITU-R has no control over carrier implementations. Secondly, the transition from an old standard to a new one doesn’t happen overnight. There’s a long period where early networks don’t necessarily match up with what consumers expect. Although most advanced 4G LTE markets are past this stage now, these network types are still developing in some countries and the issue is bound to rear its head again as we move towards 5G.
The “buzz” around 5G hit a peak in 2016, in my opinion. People claimed sub 1ms latency would make possible applications like immersive holograms, kinesthetic communication, and haptic technology. Imagine a world where you can interact with – touch and feel – a computer-generated image. You could spar with Muhammad Ali in his prime! Other possibilities include remote controlled surgery. Imagine being in Australia while a heart specialist in Boston is operating on you – true lifesaving technology!
So, when will we see 5G?
The ITU-R report on IMT-2020 radio interface technical performance was discussed by ITU-R Study Group 5 at its November 2017 meeting, and it is the working specification on which the industry will base its efforts. The International Telecommunication Union (ITU) has decided that the total download capacity for a single IoT-enabled 5G mobile cell must be at least 20Gbps; in contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to a million connected devices per square kilometer, and will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible. Let me be clear: the 5G standard (or collection of standards) is not yet officially defined, and telecom companies are falling over each other to gain 5G leadership.
The kinds of speeds that 5G’s promoters tout require high-density concentrations of cells as well as data. In the United States, AT&T plans to be the first U.S. carrier to have 5G available in 12 cities by the end of this year. Asian telcos seem to have the inside track on getting 5G off the ground and into the air: cities such as Hong Kong, Singapore, Seoul, Shanghai, and Tokyo could see large-scale adoption of 5G starting as early as 2020. In Europe, the United Kingdom and Ireland seem to be the best bets to get the next generation of mobile online.
However, many of these rollouts will offer maximum transmission speeds of only up to 15Gbps at a latency of less than five milliseconds. And that’s the best case. These speeds are not going to enable real-life holograms. We can only hope that the industry has learned its lesson and will not disappoint consumers, creating a backlash that could delay the 5G rollout or stretch its launch out over two or three more decades.
Educate yourself on 5G and other helpful resources
Making smarter choices for the future means being as informed as you possibly can be. For data center owners or colocation providers, that means educating yourself on the trends and terms in the industry that will impact how you enable your digital demands. Check out the ITU-R website to stay on top of the industry’s dialogue on 5G. Read the blog, How Connectivity Will Control Everything We Know, to learn which technology developments are shaping the industry. Or watch the video, Resiliency on the Edge of Your Network, to understand the importance of a resilient, secure edge infrastructure for digital consumer demand.
In the GDPR era, where cyber security compliance is now mandatory for businesses to avoid hefty fines, organisations across the UK need to re-examine their principles of cyber hygiene to strengthen essential security systems and keep their employee and customer data safe. By Katie Curtin-Mestre, VP of Product and Content Marketing, CyberArk.
In particular, organisations should start looking at actionable steps they can take to “clean up” current weaknesses and potential vulnerabilities. This is particularly important in the wake of massive attacks such as WannaCry. In fact, post-attack studies show that WannaCry’s impact could largely have been prevented if basic security best practices, such as securing privileged accounts, had been applied.
With this in mind, international institutions such as the US National Institute of Standards and Technology (NIST) have started to recommend securing privileged accounts, credentials and secrets as one of the most effective, preventative steps an organisation can take to bolster its security programme. The institute’s latest CyberSecurity Framework further offers useful best practices UK businesses can incorporate in their security strategy.
Many of the CyberSecurity Framework’s refinements centre on cyber hygiene. As the framework explains, attackers continually look for new ways to exploit an organisation’s vulnerabilities, so a “set it and forget it” approach is sure to fail, especially when it comes to privileged access. This is because a company’s sensitive applications and systems can change as the company grows or changes direction. For example, if your organisation has secured privileged access for Windows built-in accounts on systems with access to sensitive data, you then need to go one step further and commence work on the next set of systems that will deliver the most risk reduction for the time and effort required.
Since the enterprise infrastructure is ever-changing, it’s important to look for new infrastructure in the cloud and new SaaS applications that could have access to sensitive business data. To have the strongest defence against attackers, organisations need to ensure their privileged access security program is up to date and continues to protect their most critical infrastructure, applications, customer data, intellectual property and other vital assets.
The most common types of attack
To help establish and maintain a strong privileged access security programme, organisations can implement actionable processes to achieve the highest level of protection against common attacks on privileged accounts, credentials and secrets. This type of cyber hygiene program is usually most effective against the following attacks:
Implementing a privileged access security cyber hygiene program
Often, significant data breaches - including those at many large organisations - result from some of the most common attacks involving privileged access, and each example provides valuable insights into how attackers operate and exploit an organisation’s vulnerabilities.
To proactively reduce the risk posed to privileged accounts by attackers and implement strong hygiene practices, organisations typically need to:
As cyber attackers become increasingly creative with their tactics, now is the time for UK businesses to go back to basics and do a cyber hygiene check-up. Establishing privileged account management is a good starting point to help organisations reduce risk, satisfy security and meet regulatory objectives. Ultimately, it comes back to monitoring and reviewing who has access to the most sensitive data to ensure it is safe and secure.
Linguamatics is a natural language processing (NLP) text mining company whose software is used to scan items such as journal articles and electronic health records, particularly within the science and healthcare sectors. Founded in 2001, Linguamatics have become known worldwide for helping medical organisations design and improve therapies for disease and patient care.
Linguamatics’ software runs as both an on-premise and a software-as-a-service solution. Due to the compute and storage requirements of this technology, the company needed to host the SaaS solution in a separate location.
Linguamatics wanted to use custom hardware and colocate, rather than use a fully cloud-based approach. They needed a top end data centre that was close to HQ in Cambridge, as well as a colocation partner that could provide great connectivity, uptime, and easy communication.
After evaluating a number of colocation providers, Hyve was the obvious choice.
“We did an extensive review of the market to make an informed decision about where best to host our services, and Hyve undoubtedly came out on top,” explained Roger Hale, Chief Operating Officer at Linguamatics. “By offering one of the world’s best-connected and most secure data centres, it was an obvious choice to run our systems from Global Switch 2, and Hyve were able to on-board us effortlessly.”
Due to the specialised nature of the company’s architecture, it made sense for Hyve to provide the hardware to support Linguamatics services while they manage and run their own systems.
“Our backend is compute and I/O intensive, and therefore we prefer to operate our own IT. Hyve had the technical expertise and flexibility to allow us to do that, all while benefitting from their great price and quality of service.”
Linguamatics are now free to run their business and IT infrastructure in the way that suits them without worrying about availability. “We had experienced downtime with a previous vendor, and since working with Hyve from 2015 we’ve hardly ever had to ring the support team. It’s great to know that their team are always there if we need them, but that they don’t intrude with our business unnecessarily,” commented Hale.
“There is much to be said for a seamless technology service that just works, and works well. Hyve are a great example of a modern tech service provider that understands exactly what its clients need, and then delivers.”
Some of the most damaging data breaches in recent years were not the result of unstoppable cyber attackers with advanced techniques, but rather the consequence of poor internal practices. One of the most widespread problems is a lack of control over how data is collected, stored and secured. Varonis’s recent 2018 Global Data Risk Report analysed more than six billion files held by 130 organisations and found widespread issues that can lead to serious security incidents. By Matt Lock, Director of Sales Engineers, UK, Varonis.
Data past its sell-by date
Our analysis found most organisations struggled with handling old data that was no longer actively used in any operations. An average of 54 percent of all data on the networks we assessed was stale, which represents a security risk, particularly when it includes critical information about employees, customers, projects and clients.
Hanging on to data for years after it was last useful not only creates additional storage and management expenses, but also creates a major, and totally unnecessary, security risk. More data means more damage can be wrought by an intruder or malicious insider, and also exposes your company to liability from regulations such as PCI DSS and the GDPR.
To address this problem, companies need to implement new policies on data collection and storage which follow the principles of privacy by design. Both the amount of sensitive data that is collected and the length of time it is stored should be kept to the absolute minimum. Data access should also follow the least privilege approach, with access restricted based on the data’s relevance to an employee’s role.
Organisations currently struggling with years of old data clogging their systems should start by assessing where there is stale data that is no longer being used, and all such files should be either deleted or archived – starting with those that fall under regulations.
Access all areas
Alongside struggling to keep track of the data itself, the majority of organisations have a difficult time managing who can access files within the organisation. Of the organisations we analysed, 58 percent had more than 100,000 folders open to all employees and 41 percent of companies had at least 1,000 sensitive files open to all employees. In total, 21 percent of all the folders in our investigation were open to everyone.
This is usually the result of an organisation having no policy in place for creating new files and folders, with most defaulting to global access groups set to Everyone, Domain Users, or Authenticated Users.
Having sensitive files with no access controls makes it incredibly easy for an intruder on the network to hoover up large amounts of key data such as intellectual property and customer data. The threat of a malicious insider is also increased as even a junior employee will be able to read, copy and delete important data.
The most effective way to regain control of how data is accessed is to run a full audit of all servers. The aim should be to identify any data containers such as folders, mailboxes and SharePoint sites, that have global access groups applied to their ACLs (access control lists). All global access groups should then be replaced with tightly managed security groups that include only appropriate users that have been cleared to access the data. On an ongoing basis, all new files and folders should be given appropriate controls using the least privilege approach.
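The first pass of such an audit can be sketched as follows (the folder paths, group names and ACL representation are illustrative, not a real directory API):

```python
# Flag any data container whose ACL includes a global access group.
GLOBAL_GROUPS = {"Everyone", "Domain Users", "Authenticated Users"}

# Illustrative folder-to-ACL mapping gathered by a server audit.
folders = {
    "/finance/payroll": {"Everyone", "payroll-admins"},
    "/hr/contracts": {"hr-team"},
    "/shared/templates": {"Authenticated Users"},
}

flagged = {path: acl & GLOBAL_GROUPS
           for path, acl in folders.items() if acl & GLOBAL_GROUPS}

for path, groups in sorted(flagged.items()):
    print(f"{path}: replace {sorted(groups)} with a managed security group")
```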
The haunted server
As well as struggling to control live users in their systems, most organisations also have a major problem with ghosts. These are old accounts that should be inactive, but still retain their full capabilities and access rights. Our analysis found that an average of 34 percent of all user accounts in an organisation were ghosts. These old accounts have usually been entirely forgotten by IT administrators, and their activity is no longer monitored. This makes them a dangerous tool in the hands of an external attacker, as they can be compromised and used to navigate the network undetected. Having ghosts in the system also introduces the risk of a former employee logging back in to access sensitive files. These could be used to gain favour with a new employer, or even sold or leaked online if the employee left on particularly bad terms.
This risk can be taken care of by ensuring that policies are in place to delete or disable all accounts once the user leaves the organisation. Existing ghosts can be identified by using behavioural analysis to understand what constitutes normal user behaviour, making it easy to spot accounts that are inactive or showing unusual activity.
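A simple version of that inactivity check might look like this (the account records and the 90-day idle window are illustrative assumptions):

```python
from datetime import datetime, timedelta

def find_ghosts(accounts, now, max_idle_days=90):
    """Return enabled accounts with no login inside the idle window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts
            if a["enabled"] and (a["last_login"] is None or a["last_login"] < cutoff)]

# Illustrative records; last_login of None means the account never logged in.
accounts = [
    {"user": "alice", "last_login": datetime(2024, 5, 1), "enabled": True},
    {"user": "old_contractor", "last_login": datetime(2022, 1, 9), "enabled": True},
    {"user": "svc_backup", "last_login": None, "enabled": True},
]

print(find_ghosts(accounts, now=datetime(2024, 6, 1)))  # ['old_contractor', 'svc_backup']
```

In practice the behavioural baseline would come from log analysis rather than a fixed threshold, but the principle of comparing each account against expected activity is the same.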
Without proper policies and processes in place, managing how data is stored and accessed can quickly spiral out of control. However, by conducting some thorough auditing and implementing strict new practices, even the most lawless server can be reined back in and kept safe from both malicious insiders and network intruders.
The number of companies believing in the efficacy of managed security services is growing year on year – the figure is now up to 60%, compared to 56% last year. Moreover, 42% of SMBs used managed security services in the last year, as did 51% of enterprises. These companies have already woken up to the fact that using a managed service provider is in many instances an obvious solution to a company’s security requirements. But the stats also show us that not every company feels the same way. Yet. By Russ Madley, Head of SMB, Kaspersky Lab UK and Ireland.
Worryingly, figures from Kaspersky Lab’s latest annual IT Security Risks survey show that only 25% of businesses are planning to use MSPs in 2018 to tackle their IT security challenges, indicating that some companies still need to be convinced of the benefits of working with managed security service providers.
Naturally, businesses need to weigh up the pros and cons of using an MSP, to establish whether this solution is effective – and appropriate – for their organisation. The onus is therefore on the MSPs themselves to convince businesses that they can offer them the right solutions to their looming security challenges. And to do that, they need to understand the challenges businesses are up against.
What’s making businesses wary of MSPs?
When looking at what is turning businesses off using MSPs, four main factors come into play. The first is the pricing of MSP services, which 29% of enterprises feel is too high. The 39% of SMBs who agreed – likely the companies that require these services the most – may struggle to find or justify the cost of securing this support. Cost also comes into play when considering that, of the respondents choosing not to use an MSP, some don’t feel that they will ever become a target.
19% of enterprises and 25% of SMBs feel confident that they won’t be the victim of a cyberattack. They say ignorance is bliss – but a hacked customer database, costly downtime and customer losses are all far from blissful. This attitude is more likely in smaller, or newer, businesses that feel they’ll be overlooked by criminals in favour of bigger targets. But it is these organisations that would feel the risk of loss most. Owners and managers emotionally tied to their company, and those without the deep pockets of their bigger competitors, may be risking their business outright.
For companies who are only too aware of what a cyberattack could mean for their business – and customers – the cost becomes easier to swallow, but there’s a lack of trust when it comes to giving these external providers access to their systems in order to safeguard them properly. 32% of SMBs and 35% of enterprises don’t want their security to rely on a third party – perhaps unsurprising, given the access that these external companies have to be given in order to do their jobs. Personal data, trade secrets and payment information are all at stake, so a business needs to be able to explicitly trust the provider before handing over the keys to its kingdom. MSPs must be able to show customers that they themselves are secure too.
Some organisations are not feeling the industry-wide skills shortage as keenly as others, and already have adequate in-house resources to bridge the IT security gaps in their business – 31% of SMBs and 37% of enterprises feel that they already have sufficient resources to cover this need. However, this isn’t a permanent solution; as IT security professionals become more highly-sought, they could be tempted away to higher-profile firms by lucrative offers – whereas an ongoing MSP contract with strict SLAs ensures that a business’s security needs are covered at all times.
What MSPs can bring to the table
Despite all of this, there are plenty of reasons why businesses should be using MSPs to fulfil their IT security requirements.
The first – and perhaps biggest – benefit to companies is the cost-cutting that a partnership can offer; 54% of enterprises view MSPs as a way of cutting their security-related costs, and 51% of SMBs feel the same. For companies already spending extensively on full-time staff, rigorous protection measures and staff training/awareness, using an MSP could result in a reduction in up-front IT security spend. This, of course, makes it easier for businesses to invest in security services – a vital consideration when the costs associated with suffering a breach can severely hamper a company’s bottom line and affect its forecasts.
For companies worried about their IT security strength, an MSP Service Level Agreement (SLA) can provide a safety net to guarantee a minimum level of safety. 42% of SMBs and 43% of enterprises want someone that can account for their security; knowing that this vital component of their business is being handled by an external provider can offer business owners and decision-makers peace of mind.
The most obvious advantage of using an MSP, however, is the fact that it can bridge a company’s IT security gap. 25% of enterprises and 28% of SMBs admitted that they don’t have the internal resources and expertise to provide a sufficient level of security. Not having the right knowledge or staff leaves businesses vulnerable, and sourcing this expertise from a third-party supplier can ensure that an organisation’s needs are adequately covered.
Weighing it all up…
MSPs need to help businesses weigh up the benefits and costs by understanding that prospective customers may feel concerns, and demonstrating ways to overcome these. Vendors too have a role to play, and must help MSPs to best position themselves to help customers resolve their IT Security issues.
Kaspersky Lab’s own security portfolio for MSPs includes flexible, powerful tools to monitor and manage protection across a customer’s entire infrastructure, with increased visibility for both MSPs and their customers where required. Combined with technology training and our technical support, this helps MSPs become reliable security advisers and the first line of support for their customers.
With uptime and resilience at the forefront of their minds, many data centre managers still have misgivings over demand side response and using UPS batteries to store energy. But is it now time to see beyond these perceived risks and fully embrace the potential rewards on offer? By Leo Craig, General Manager of Riello UPS.
Surveys suggest 77% of mission-critical enterprises such as data centres are keen to get involved with demand side response (DSR). But there’s one major caveat. They’ll only do so if it doesn’t impact their core activities.
Such a ‘have your cake and eat it’ viewpoint is understandable. On one hand, the data centre sector is facing heightened demands for storage capacity, which will place even greater burdens on our National Grid. Alternative energy models such as DSR, offering the promise of a more flexible and reliable electricity network, are obviously welcome.
But on the other hand when your top priority is 100% uptime, there’s an in-built hesitancy to use the invaluable insurance provided by your uninterruptible power supply (UPS) for anything that has potential to weaken that must-have resilience.
Data centre managers have every right to be risk-averse, especially when an energy-related failure can cost them nearly a fifth (17%) of their annual turnover. But isn’t continuing with the status quo – namely an increasingly unreliable electricity network that makes power problems far more likely – just as big a gamble?
Most mission-critical businesses will naturally adopt a wait-and-see approach with new technologies and processes such as demand side response. They want conclusive proof of the benefits and reassurance over any potential drawbacks.
For those of us who believe in the benefits of harnessing UPS power to tap into DSR, we need to highlight how the move to lithium-ion (Li-Ion) batteries offers the best of both worlds. Because these batteries require less than half the space, you can install twice as many – one half providing battery backup, the other used for DSR.
From Lead-Acid To Lithium-Ion
Sealed lead-acid (SLA) batteries, also known as valve-regulated lead-acid (VRLA), have long been the go-to option for data centre operators. They’re reliable enough and relatively inexpensive, particularly in terms of their initial purchase cost.
But they’re far from faultless. They need controlled temperatures of 20-25°C to run at peak performance, meaning either a space-eating dedicated battery room or lots of energy-intensive air conditioning in your server room. And as impedance increases over time, reducing their power capacity, they’ll frequently need replacing.
Compared to SLA, Li-Ion batteries provide the same power density in less than half the space. They can operate safely at much higher temperatures (up to 40°C), recharge much faster, and offer up to 50 times the cycle life.
Over the lifespan of a Li-Ion battery block (which can be 10 to 15 years), lead-acid cells would in all probability require replacing two or even three times. So even though the initial cost of lithium-ion is higher than SLA, the total cost of ownership (TCO) over a 10-year period could be 10-40% less.
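The arithmetic behind that TCO claim can be sketched with placeholder figures (every price below is an illustrative assumption, not vendor data):

```python
# 10-year TCO comparison; all figures here are placeholders for illustration.
YEARS = 10

sla_purchase = 10_000          # initial SLA battery string
sla_replacements = 2           # replaced twice within 10 years, per the article
sla_cooling_per_year = 1_500   # tight 20-25°C temperature control

li_purchase = 25_000           # higher up-front cost for Li-Ion
li_replacements = 0            # a single block can last 10 to 15 years
li_cooling_per_year = 500      # tolerates up to 40°C, so less air conditioning

sla_tco = sla_purchase * (1 + sla_replacements) + sla_cooling_per_year * YEARS
li_tco = li_purchase * (1 + li_replacements) + li_cooling_per_year * YEARS

print(f"SLA 10-year TCO:    {sla_tco:,}")   # 45,000
print(f"Li-Ion 10-year TCO: {li_tco:,}")    # 30,000
print(f"Li-Ion saving: {100 * (sla_tco - li_tco) / sla_tco:.0f}%")  # 33%
```

With these placeholder numbers the saving lands at roughly a third, inside the 10-40% range cited above; real figures depend on battery sizing, energy prices and cooling design.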
Needless to say, this doesn’t mean every new data centre UPS system will automatically use Li-Ion batteries. But in such an increasingly competitive industry, it can offer an all-important edge.
With commercial property costs high and likely to remain so, the potential floorspace savings provided by Li-Ion, not to mention reduced air conditioning bills, will help keep an operator’s running costs down. And that’s before we throw the extra economic, environmental, and energy benefits of DSR into the mix.
Powering A Brighter Future
Even the most cautious calculations indicate there’s more than 4 gigawatts of capacity currently sitting in UPS batteries throughout the UK – enough to power 3 million typical homes. And that’s with us only having scratched the surface of battery storage’s potential.
A Li-Ion-equipped, smart grid-ready UPS can pump energy back into the electricity network, provide frequency stabilisation, and perform peak shaving, delivering genuine economic and environmental wins that benefit the individual business, the National Grid, and the wider public.
It’s up to us to convince not just data centres, but other facilities with on-site backup power systems such as hospitals or utilities, to join this DSR revolution so we all reap the rewards.
Learn More About Li-Ion UPS Batteries
We have a close relationship with leading battery manufacturer GS Yuasa. This has seen us partner on numerous projects using their lithium-ion cells.
Indeed, one recent initiative combined our Multi Sentry UPS with Li-Ion battery technology to create a ‘virtual power plant’ that provides the electricity to run demand side response aggregator Kiwi Power’s London office.
Yuasa Senior Technical Coordinator Peter Stevenson tackles some of the key questions data centre managers often ask about using lithium-ion batteries with uninterruptible power supplies.
What are the main advantages of using lithium-ion batteries with a data centre UPS?
Li-Ion batteries are about 25% of the volume and weight of an equivalent SLA battery, so that’s a big benefit in settings where space is limited. They’re far less sensitive to temperature fluctuations, so for example, Li-Ion batteries operating at 30°C will have a longer life than an SLA functioning at 20°C.
In many environments this means fresh air cooling systems can be used, rather than expensive air conditioning, which is a significant plus in terms of total cost of ownership. It’s also easier to predict the aging of lithium-ion, reducing the risk of rapid loss in performance that is sometimes seen with SLA.
Finally, Li-Ion batteries typically provide up to 50 times as many deep discharge cycles as SLA. While this isn’t hugely significant in a traditional UPS, it could become so if the rapid growth of solar and wind generation means the UPS’s role evolves from purely providing emergency power to acting as an energy storage system that can generate revenue on a daily basis.
Are there any disadvantages to using Li-Ion instead of SLA?
The main and obvious disadvantage is the initial cost. Even though massive investments in automated manufacturing processes have resulted in significant cost reductions for Li-Ion over the last 20 years, the basic raw materials are still expensive compared with lead-acid.
Recycling is an issue too. There’s still much work required to match the well-developed and cost-effective recycling of lead-acid cells.
Are Li-Ion batteries in a UPS safe? Aren’t they a fire risk as seen with all those Samsung Galaxy Note 7s bursting into flames?
Li-Ion batteries employ a wide range of chemistries and structures, so they can be optimised for different applications. The batteries used in mission-critical systems use much more robust chemistry and packaging than the ultra-light cells found in hand-held devices.
Is battery monitoring important with Li-Ion?
An advanced battery management system (BMS) is mandatory for high-voltage Li-Ion applications, because each cell must be individually monitored and controlled using electronic circuits to maintain balanced states of charge.
Similar BMS are optional with SLA batteries, and although preferable in large-scale applications, they would incur significant additional capital costs.
Why do you think data centres haven’t fully embraced the energy storage potential of lithium-ion batteries?
It’s more a case of ‘seeing is believing’. Examples of success in other industries will hopefully give increased confidence to the power protection sector in the next few years. The deployment of 20 megawatt grid-connected energy storage systems, which are now fairly commonplace using Li-Ion, should also provide encouragement.
From banking to the NHS, airlines to the MoD, the list of expensive IT project failures shows no sign of shrinking. And while some of these failures are underpinned by fundamental project management errors, in many cases it is a lack of effective consumer consultation during the process that results in systems that simply fail to meet expectations. To safeguard against major failure, organisations need to consider a fundamental change of mindset and embrace continuous delivery, insists Alex Wolff, Head of Digital Labs, Softwire.
The cost of an IT failure can be devastating. From lost customers to destroyed reputation and damaged revenue stream, organisations simply cannot afford to deliver a solution that fails to meet customer needs. Yet despite technology innovation and a shift towards agile and DevOps, problems still occur. So what has to change?
The great benefit of DevOps is the chance to get close to the consumer; to innovate; and to fail fast. It takes organisations away from long, drawn-out ‘in a box’ projects that risk everything on one release. But alone it clearly isn’t enough. Organisations need to embrace a completely different culture if consumer expectations are to be met successfully, again and again.
On its own, DevOps does little more than increase the transparency of a project: if the project isn’t working, the problem should become apparent far earlier in the process thanks to early release dates. Early warning does, of course, avoid a huge, expensive failure, and provides a chance to remediate and refocus development on core functionality. But it will not change the probability that customers will love what is being developed unless it is used as an enabler – something that underpins a process of releasing cheaply and more often.
To truly change the prospects of IT projects, development has to be driven by a far more frequent and interactive engagement with the end consumer – from weekly updates and feedback to cohort testing. And that requires not only a cultural shift in the development/consumer interaction model – from recruiting consumers to iterating, user prototyping and continual testing – but also a development model that can be achieved at a price point that enables very frequent releases.
Continuous delivery is about structuring a project to enable additional features to be added without incurring unacceptable cost – not just creating the software but also managing the cost of the release, including manual testing, deploying and updating guides, regression testing, and so on. If the cost of delivering a release is too high, the only option is to fall back on the batch release model, limiting the opportunity to leverage the consumer learning from each release and increasing the risk every time.
The extension of the continuous delivery process beyond coding to include testing is key: testing has to be largely automated to enable the desired fast and frequent releases. Where is the value of a simple, one-step release model if the process of integration testing and documentation testing requires multiple steps, often across multiple departments and business partners? Consideration of this aspect is fundamental to supporting the continuous delivery process – for example through the adoption of well-defined APIs that can support automated testing.
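The kind of automated regression check that keeps the cost of a release low can be sketched as follows. The ‘API’ here is a local stand-in function with an illustrative contract and hypothetical names; in practice these checks would run in a CI pipeline against a real, well-defined service endpoint.

```python
# Sketch of automated regression testing for a well-defined API contract.
# get_order_status is a hypothetical stand-in for a real service endpoint.

def get_order_status(order_id):
    """Stand-in API endpoint with a stable, documented response shape."""
    if not isinstance(order_id, int) or order_id <= 0:
        return {"error": "invalid order id"}
    return {"order_id": order_id, "status": "dispatched"}

def run_regression_suite():
    """Automated checks re-run on every release, replacing manual cycles."""
    results = []
    # Contract check: a valid request returns exactly the documented keys.
    results.append(set(get_order_status(42)) == {"order_id", "status"})
    # Contract check: invalid input is rejected, not silently accepted.
    results.append("error" in get_order_status(-1))
    return all(results)
```

Because the suite is cheap to run, it can gate every release automatically rather than falling back on a manual, multi-department test cycle.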
The adoption of continuous delivery is not binary – there are many steps on the road towards this approach, and each step will deliver ROI. The key is to begin to make the change; to start to embed the culture across the entire organisation, reduce the total cost to release, recruit consumers to the process and ensure learning is continuously caught and used to refine and improve what becomes by default a successful development.
The continuous diversification of exploitation methods and attack vectors – including email usage, web downloads, application vulnerabilities, BYOD devices and smartphones connected to Wi-Fi and social networks – is challenging core network security assumptions of the past and making prevention-centric strategies obsolete. It has simply become unrealistic for security teams to expect to completely avoid having attackers enter their corporate network. By Andrew Bushby, UK director at Fidelis Cybersecurity.
As a result, we are seeing a clear paradigm shift in how organisations deal with cybercrime, away from perimeter security alone and towards post-breach detection and resolution capabilities. This shift is largely positive, necessary, and in some ways unavoidable, but it also comes with its own challenges and potential pitfalls that security teams need to be wary of in order to ensure it actually bolsters their organisation’s security posture, rather than just increasing workload.
Post-breach detection as a time-waster
It is important to distinguish between what are now becoming outdated post-breach strategies and those that incorporate newer technologies, such as automated machine learning and deception. A major pitfall of some solutions is that they require security teams to manually handle alerts, which easily turns into a waste of time as they end up chasing false positives generated by all the layers of security intended to keep attackers out.
Forensic analysis of successful cyber-attacks shows that the critical window between infection – the first moments of attack – and detection, known as ‘dwell time’, is far too long, often measured in months. By the time an organisation learns it is under attack, let alone analyses the breach and assesses the risk, the attacker has likely already made off with valuable assets. Deploying a post-breach detection system that lacks the capacity to automatically sort through alerts also fosters ‘alert fatigue’, where security professionals start to ignore warnings and risk the ‘true alarms’ being missed. It goes without saying that this can have extremely costly repercussions for businesses, in terms of valuable assets being stolen or systems shutting down.
The promise of automation and deception technology
Fortunately, there are several tools security teams can use to circumvent the pitfalls that may accompany post-breach detection. A major focus when mitigating these weaknesses is to minimise the noise generated by the detection system. This is best accomplished by putting in place a system that automatically discriminates between alarms, and only presents skilled security operators with those that warrant their immediate attention and expertise.
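The triage logic described above can be sketched in miniature. The scores, threshold and alert fields below are illustrative assumptions, not any vendor’s schema – real systems combine far richer signals – but the principle of surfacing only high-fidelity alerts is the same.

```python
# Minimal sketch of automated alert triage: suppress low-fidelity alerts
# so analysts only see those that warrant attention. All fields and
# thresholds here are illustrative assumptions.

PRESENT_THRESHOLD = 0.8  # only surface alerts the system is confident in

def triage(alerts):
    """Split raw alerts into (for_analyst, auto_suppressed)."""
    surfaced, suppressed = [], []
    for alert in alerts:
        # Corroborated alerts (seen by multiple sensors) get a boost;
        # uncorroborated, low-score alerts are filed, not shown.
        score = alert["score"] + (0.2 if alert["corroborated"] else 0.0)
        (surfaced if score >= PRESENT_THRESHOLD else suppressed).append(alert)
    return surfaced, suppressed

raw = [
    {"id": 1, "score": 0.95, "corroborated": False},  # high fidelity
    {"id": 2, "score": 0.30, "corroborated": False},  # likely noise
    {"id": 3, "score": 0.65, "corroborated": True},   # boosted to 0.85
]
shown, filed = triage(raw)
```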
Sophisticated detection systems can also automatically provide instant visibility across the attack in a single pane of glass. In this way, focus can be shifted away from going through false alarms in the quest for real threats, towards almost exclusively analysing high-fidelity alerts related to actual breaches that defenders can take action against with confidence.
Another way to optimise the efficiency of post-breach detection is through intelligent deception techniques. When deployed effectively, deception works both as a means of detection and as a proactive defence that can confuse cybercriminals at the earliest stages of an attack. Deception techniques, such as decoys and honeypots, trick cybercriminals into making themselves visible early, allowing security teams to intervene before attackers are likely to have reached any really valuable assets. The speed this brings to the process also reduces the dwell time resulting from breaches, saving businesses not only time but also money.
In the end, the sophisticated attack methods currently used by cyber criminals are leaving security teams with few other options than adopting post-breach detection strategies. While this will mean that businesses overall will see their security postures improve, there are potential issues associated with post-breach detection that security professionals need to be aware of from the get-go, but also some helpful tools, such as intelligent deception, that can help solve those same issues. Businesses therefore need to make sure that they weigh their options carefully when implementing new post-breach detection strategies, and be ready to meet the challenges that are likely to arise from them in a timely and effective manner.
Against the background of recent news that cryptocurrency computing is swallowing up more energy than the populations of Iceland, Ireland and most African countries, we asked Alan Beresford, MD of EcoCooling, to share what he’s learned over the last two years working closely with the owners and operators of a new breed of data centres – ‘cryptocurrency mining farms’.
You’ve probably heard of Bitcoin, the first and most infamous cryptocurrency. If not for its underlying blockchain technology, then most likely through constant reporting of the currency’s incredible volatility.
What you may not have heard of, however, is Ethereum, the No. 2 cryptocurrency. Would you believe there are now around 1,380 other alternative cryptocurrencies (‘altcoins’)? All are based on variants of the blockchain cryptographic distributed ledger system, owned by no one (as opposed to the central ledger approach used and owned by the banks).
In simplistic terms, the transaction ledger – I’ll use Bitcoin as my example – is distributed across hundreds of thousands of computers called ‘miners’. Every transaction has to be validated and recorded on every instance of the ledger across all of these mining computers. This, together with the cryptography involved to keep everything secret and secure, results in an absolutely massive computing load!
In fact, global energy consumption for mining is estimated at 29 TWh (terawatt hours) over the last year! That figure equates to a constant load of around 3.3GW (gigawatts) running 24x7x365!
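The conversion from annual energy to constant load is simple arithmetic, and worth checking:

```python
# Back-of-envelope check: 29 TWh consumed over a year corresponds to a
# constant load of roughly 3.3 GW.
annual_energy_twh = 29
hours_per_year = 365 * 24                                     # 8760 hours
constant_load_gw = annual_energy_twh * 1000 / hours_per_year  # TWh -> GWh
# constant_load_gw is approximately 3.31 GW
```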
As an aside, because no entity owns or controls either the ledger or all the computers that crunch the data, the inventors came up with the idea that each of the 21 million possible Bitcoins would have to be ‘mined’ - albeit from ‘the Ether’ rather than being dug from the ground with pickaxes. Every so often, one of the owners of the hundreds of thousands of computers busy ‘mining’ will be awarded a Bitcoin – which is then said to have been ‘mined’. Just before Christmas 2017, one Bitcoin would have been worth US$20,000. At the time of writing that’s fallen to $10,000 - but that’s still well worth running computers and paying the electricity bill for!
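The ‘mining’ described above is, at heart, a proof-of-work lottery: computers hash candidate blocks with different nonces until one hash meets a difficulty target. A deliberately easy toy version (real Bitcoin uses double SHA-256 and a vastly harder target, so treat this purely as an illustration):

```python
# Toy proof-of-work sketch: search for a nonce whose SHA-256 hash of the
# block data starts with a given number of zeros. Difficulty here is
# trivially low compared with real Bitcoin mining.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Return (nonce, digest) where digest has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example transactions")
```

Because the only way to win is to try nonces by brute force, total hash rate (and hence electricity) scales directly with the mining competition.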
From bedroom to data centre
Cryptocurrency was devised to be ‘of the people’, and so many miners started out with a computer in their bedroom or garage.
But these are not standard PCs or even standard Windows/UNIX servers. Most either use ASIC (application specific integrated circuit) processors or the GPUs (graphics processing units) that were originally designed for gaming consoles and graphic cards. Indeed, as a result of the rise in crypto-mining there is now a world shortage of GPU chips for gaming consoles! So, even the smallest mining computer is a high capacity, energy-hungry unit.
Many home-miners have moved from working solo (with a very low probability of a very high reward) to joining ‘mining pools’, where all of the miners in the pool share out the ‘wins’ from mining. The result is that they all make a lower but regular income, in proportion to the processing power they supply to the pool.
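The pro-rata payout a pool member receives is straightforward to sketch. The reward figure and hash rates below are purely illustrative:

```python
# Sketch of how a mining pool shares a block reward in proportion to the
# hash power each member contributed. All figures are illustrative.

def share_reward(reward, hashrates):
    """Split `reward` pro rata to each miner's share of total hash power."""
    total = sum(hashrates.values())
    return {name: reward * rate / total for name, rate in hashrates.items()}

# e.g. a 6.25-coin reward across three members contributing 70/20/10 units
payouts = share_reward(6.25, {"alice": 70, "bob": 20, "carol": 10})
```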
The outcome of this has been that people have rapidly grown from one or two ‘miners’, to grabbing hold of unused warehouse and agricultural buildings and putting together ‘mining data centres’ with hundreds or thousands of mining computers. But these are often unlike any of the data centres you would recognise.
In some of these you will see conventional 19-inch racking, but many contain mining computers by the thousand on what is effectively warehouse shelving. And whereas we would normally expect a two-year lead time from design to ‘in-service’ for a conventional data centre, many of the projects we’ve been involved in have deployed 2MW or even 4MW of computer power in 16 weeks - start to finish!
That’s not to say all mining data centres are that basic. We’ve worked with a number of clients, particularly in the Nordics, who have built to low-tier data centre standards – but have had to be able to move in the 16 week get-ready timeframe demanded by the crypto-mining community.
Blockbase mining facility in Sweden, shown with miners installed and during construction
Until recently, something like 80-85% of mining had been undertaken in China. But the world is changing. The combination of the Chinese authorities’ perceived opposition to cryptocurrencies and rising Chinese electricity prices means that the West is becoming a more attractive location for miners.
Many of the western world’s larger crypto-mining facilities are in the Northern regions where there is cheap hydro power available. But crypto mining is happening everywhere, including throughout Europe and in the UK.
So, getting back to what we at EcoCooling have learnt over the last two years working in the sector:
ECT evaporative coolers cooling HPC equipment
We started to model the proportion of time our coolers would be in ‘evaporative-cooling’ mode and rapidly discovered it was low to zero in the Nordics. Customers soon realised that by deploying our ventilation-only products and employing free cooling (leaving out the evaporation unit) but – and this is essential – retaining all the intelligent air-mixing, they could save a lot of CapEx. This also drops the PUE to less than 1.05, meaning that the energy required for cooling falls to less than 5% of the compute load.
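The link between PUE and cooling overhead is worth making explicit. Since PUE is total facility power divided by IT power, a PUE of 1.05 means overhead (cooling and the rest) draws 5% of the compute load. A quick illustration with a hypothetical 1MW mining load:

```python
# What a PUE below 1.05 means in practice: cooling and other overhead
# consume less than 5% of the IT (compute) load. Figures are illustrative.

def overhead_fraction(pue):
    """Overhead power as a fraction of IT load, from facility PUE."""
    return pue - 1.0  # PUE = (IT + overhead) / IT

it_load_kw = 1000  # say, a 1 MW mining load
overhead_kw = overhead_fraction(1.05) * it_load_kw  # ~50 kW of overhead
```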
The low energy use has two benefits – low operating cost and lower power infrastructure capital cost - a massive boost for total cost of ownership. And, of course, reduced complexity means shorter installation times and reduced maintenance.
The cooling strategy for mining facilities is dependent upon the type of equipment and its location. Many of the hyperscale mining facilities are being built in the cold northern territories. In response to this, EcoCooling developed an arctic-grade fresh air cooler which requires no supplementary cooling at all for these climates.
It is only when crypto facilities are located further south that evaporative cooling will be needed. This normally eliminates any requirement for costly and complex refrigeration equipment.
One of the great benefits of EcoCooling as a company is our ability to innovate quickly. In the early days of our journey into the cooling of mining equipment, we could re-design our products (in concept at least) on the plane back and have a prototype ready to test onsite within a few days!
Of course, after a while designs stabilised, and we’ve ended up with a completely modular plug-and-play solution set that can scale from 35kW to multi-megawatts of cooling in pure ‘free’ ventilation, plus an equivalent modular set of combination ‘free-plus-evaporative’ cooling modules, totally aligned to the needs of the mining sector.
3 cooler group – 250kW plug-and-play, 144 ASIC solution for rapid deployment mining. Assembly time 2hrs
Almost accidentally, we now have a suite of ventilation-only products that can be deployed in not only crypto mining, but also conventional data centres in Europe, UK and other temperate climates!
EcoCooling CloudCooler free cooled Containerised mining solution – 162 ASIC design capacity, plug and play
To summarise, the crypto data centre sector is very different to the conventional data centre market.
Very low TCO is key, with miners aiming to build their data centres at less than £500k per MW compared to the ‘norm’ of £2m to £10m per MW.
Plug-and-play modular solutions have proved to be the answer to the need for rapid deployment - and we’ve found that, by developing remote-commissioning and remote-support, we have reduced the installation to a low-skill process, easy for local labour to undertake. For example, a 250kW module including cooling, containment, racks and power distribution can now be installed by 2 people in less than 2 hours!
Reliability is also key but, balanced against the low total cost requirement, this leads to a need for cooling solutions that are low energy, highly efficient, low maintenance and have a low infrastructure footprint.
We’ve discovered that intelligent direct ventilation (with high-efficiency EC fans) provides the majority of the solution set, but that the intelligence aspect is absolutely critical to delivering the reliability levels miners require. Being able to augment this with low-TCO evaporative cooling completes the required product set.
The disruptive technology of cryptocurrency has led to a “gale of creative economic destruction” in the approach to building data centres. Conversations regarding data centre construction are very different now!
This new approach will probably form the foundation to support the stratospheric growth in processing demands from the emerging technologies of blockchain, artificial intelligence, machine learning, IoT, virtual reality and augmented reality. This is the future!