Working across a range of business titles, it’s easy to forget that, while some industry sectors are fully on board with the digital world, plenty of others are only now beginning to discover what is not just possible, but what will shortly be compulsory if they are not to struggle as disruptors change the rules. Obviously, everyone in the IT space understands Cloud, IoT, AI, ML, AR and VR, DevOps, containers, 5G and the like.
However, when stepping from our Digitalisation World (DW) magazine and website back to the world of data centres, I’m only too aware that this industry sector, outside of the household name hyperscalers, has a little bit of catching up to do in two areas.
Firstly, the potential of many of these new ideas and technologies to help transform the data centre environment is fairly spectacular. As yet, no data centre colo or hosting organisation seems to have leveraged ‘digitalisation’ to put itself head and shoulders above the pack – and maybe it isn’t necessary right now, as there’s still very high demand for new data centre space, almost whatever the quality. However, at some stage, data centre providers will have to start using the new technologies to distinguish themselves, and it will be interesting to watch this race to the top.
Secondly, while no one quite knows how, when or where, there’s no doubt that IoT, VR and AR, AI and various other digital ideas will have a massive impact on the demands being placed on data centre infrastructure. The sheer quantity and size of the data sets that will need to be housed, accessed, stored, retrieved and archived is set to explode, and there will need to be much more granularity around data centre infrastructure quality of service. In crude terms, right now we have fast, medium-pace and slow data. In the mature digital world, we’re going to have many more levels of service, depending on the importance of the data at any particular moment.
Up until now, I think it’s fair to say that data centres have ‘bumbled’ along, evolving gradually as new ideas come along and make new demands on the infrastructure, but not at any great rate of change. Looking ahead, I can’t help feeling that we might just need to witness something of a data centre revolution if digital potential is to become reality.
Fasten your seatbelts and enjoy the (autonomous?) ride!
New Cognizant report finds 60% of senior IT executives claiming there are more cyber threats in their organisation than they can currently control.
A new report by Cognizant’s Center for the Future of Work, Securing the Digital Future, reveals that, in the pursuit of digital transformation, organisations have overlooked one critical factor that could put all their transformation efforts – and even share prices – into jeopardy: cybersecurity.
The research, which surveyed over 1,000 senior IT executives in 18 countries, found that only 9% of organisations have made cybersecurity a board-level priority. This is despite respondents acknowledging that digital is opening their businesses to more cybersecurity vulnerabilities than ever, with 60% of respondents saying there are more emerging cyber threats than they can currently control.
The report found that cybersecurity vulnerabilities stem from a range of sources, including not only technology itself, but also the design and execution of business processes and employees within the organisation. Respondents believe that migrating data to the cloud (74%), social media (66%) and careless employees (64%) pose the highest risk to businesses in the next 12 months, stating that these need to be addressed now to bolster their organisation’s security.
Rather worryingly though, over 60% of respondents believe they have inadequate resources (namely access to cybersecurity talent due to staffing budget issues) to address gaps in the business’s cyber defences. As a result of this shortage, unsurprisingly, almost a third (31%) also admit they only refresh their cybersecurity strategies on an annual basis, potentially leaving glaring gaps in their cyber defences.
Combined with fast-changing threats, this talent and budget shortage has many organisations looking to technologies, particularly artificial intelligence (AI)-driven automation, to improve their cybersecurity outlook. However, while technology can close the gap, it cannot solve the security shortfall alone.
Future-proofing digital operations
The study identified four critical elements that organisations can follow to bolster their cybersecurity strategies, allowing them to future-proof digital operations:
Euan Davis, European Lead for Cognizant’s Center for the Future of Work, said: “While not a silver bullet, the introduction of AI tools into cybersecurity platforms will spur organisations to rethink how they approach cybersecurity and reduce the burden left by talent shortages. Cybersecurity needs to be an ongoing endeavour, however, and failure to adapt processes and systems on a regular basis will leave an organisation open to further attacks.
“Leadership must take the initiative when it comes to ensuring this is embedded into the business’s DNA, or else face losing customers, reputation and revenue. Ultimately, any company that hopes to do business in the digital economy must make cyber defences a key part of their business strategy.”
Nearly half of employees (45%) have accidentally shared emails containing bank details, personal information, confidential text or an attachment with unintended recipients.
New research by data security company, Clearswift, has shown that 45% of employees have mistakenly shared emails containing key data with unintended recipients, including personal information (15%), bank details (9%), attachments (13%) and other confidential text (8%).
The research, which surveyed 600 senior business decision makers and 1,200 employees across the UK, US, Germany and Australia, also found that employees regularly receive these unintentional emails, as well as being guilty of sending them, highlighting an inbound and outbound opportunity for data leakages. 27% of employees claim to have received emails containing personal information in error from people outside of their company, with 26% also admitting to receiving attachments in error and 12% saying they had wrongly received personal bank details.
“With GDPR, the new tenet of shared responsibility makes the problem of receiving and sharing unauthorised information a serious issue. Email communication is a real pitfall for organisations trying to comply with the regulation”, said Dr Guy Bunker, SVP Products at Clearswift.
“Stray bank details and ‘hidden’ information in attachments, spreadsheets or reports can create a serious data loss risk. The occasional email going awry may seem innocuous, but when multiplied by the number of employees within a business, the risk becomes more severe and could lead to a firm falling foul of the new GDPR penalties of up to 4% of global turnover, or of regulations already in place, such as the Payment Card Industry Data Security Standard. If contravened, the latter can lead to a firm having its ability to process data removed, which could see some businesses grind to a halt.”
The research also found that upon receiving a misplaced email, 31% of employees said that they would read it, with 12% even admitting they would scroll through to read the entire email chain. 45% of employees did say that they would alert the sender to their mistake, giving them the opportunity to take some action; however, only 27% said they would delete the email from both their inbox and deleted items, leaving an element of uncertainty.
Less than half (45%) of employees were familiar with the agreed process or course of action to take upon receiving an email from someone in another company where they were not the intended recipient, and 22% admitted there was no formal process in place whatsoever in their organisation for such situations.
Bunker added, “To offset the inevitable risk associated with email communications, companies need a clear strategy, which encompasses people, processes and technology.”
“Instilling the values of being a ‘good data citizen’ can engender a sense of data consciousness in the workplace, ensuring that employees are aware of responsible disclosure, and with whom this responsibility sits upon receiving an email in error. However, a formally agreed process or course of action is also a must. There is no silver bullet, but technology can once again offer assurances to help mitigate risks. Adaptive Data Loss Prevention (DLP) technologies can automate the detection and protection of critical information contained in emails and attachments, removing only the information which breaks policy and leaving the rest to continue on to its destination.”
A study of two data centres found that utilising higher temperatures resulted in energy savings of between 41% and 64%, whilst driving significant improvements in PUE.
Energy efficiency is an issue that concerns all who are involved with the design and operation of data centres. The cooling function in general, and the operation of water chillers in particular, are large consumers of power and as such, require focused efforts to improve overall energy efficiency.
Water chillers account for between 60 and 85% of overall cooling-system energy consumption. Consequently, data centres are designed, where possible, to keep usage of chillers to a minimum and to maximise the amount of available “free cooling”, in which less power-hungry systems such as air coolers and cooling towers can keep the temperature of the IT space at a satisfactory level.
One approach to reducing water chiller energy consumption is to design the cooling system so that a higher chilled water (CHW) outlet temperature from the chillers can be tolerated while still providing sufficient cooling. In this way, chillers consume less energy by not having to work as hard, and the number of free cooling hours can be increased.
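To make the relationship concrete, here is a minimal Python sketch: free cooling can satisfy the load whenever ambient temperature plus the heat-exchanger approach temperature stays at or below the CHW setpoint. The synthetic climate data, 4°C approach temperature and setpoints are illustrative assumptions, not figures from the White Paper.

```python
# Sketch: higher CHW setpoints increase the number of free-cooling hours.
# All figures are assumptions for illustration, not White Paper data.
import random

def free_cooling_hours(ambient_temps_c, chw_setpoint_c, approach_c=4.0):
    """Count the hours in which free cooling alone can produce the setpoint."""
    return sum(1 for t in ambient_temps_c if t + approach_c <= chw_setpoint_c)

# Crude stand-in for a year of hourly ambient temperatures (degrees C) in a
# temperate climate; a real study would use measured weather data.
random.seed(42)
year = [random.gauss(11.0, 7.0) for _ in range(8760)]

for setpoint in (10, 15, 20, 25):
    hours = free_cooling_hours(year, setpoint)
    print(f"CHW setpoint {setpoint:>2} C -> {hours:5d} free-cooling hours "
          f"({100 * hours / len(year):.0f}% of the year)")
```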
As with any complex system, attention needs to be paid to all parts of the infrastructure, as changes in one area can have direct implications for another. A new White Paper from Schneider Electric, the global specialist in energy management and automation, examines the effect of operating at higher chilled water temperatures on overall cooling system efficiency.
White Paper #227, “How Higher Chilled Water Temperature Can Improve Data Center Cooling System Efficiency”, outlines the various strategies and techniques that can be deployed to permit satisfactory cooling at higher temperatures, whilst discussing the trade-offs that must be considered at each stage, comparing the overall effect of such strategies on two data centres operating in vastly different climates.
Among the trade-offs discussed are the need to install more air-handling units inside the IT space to offset the higher coolant temperatures, and the need for redesigned equipment, such as coils, to provide adequate cooling when the CHW temperature exceeds 20°C. The paper also advises the addition of adiabatic, or evaporative, cooling to further improve heat rejection efficiency. Each approach requires an additional capital investment, but results in lower long-term operating expenses due to the improved energy efficiency.
White Paper 227 details two real-world examples in differing climates; the first is in a temperate region (Frankfurt, Germany) and the second in a tropical monsoon climate (Miami, Florida). In each case, data was collected to assess the energy savings that were accrued by deploying higher CHW temperatures at various increments, whilst comparing the effect of deploying additional adiabatic cooling.
The study found that an increased capital expenditure of 13% in both cases resulted in energy savings of between 41% and 64%, with improvements in TCO between 12% and 16% over a three year period.
Another inherent benefit of reducing the amount of energy expended on cooling is the improvement in a data centre’s PUE (Power Usage Effectiveness) rating. As PUE is calculated by dividing the total power consumed by a data centre by the power consumed by its IT equipment alone, any reduction in energy expended on cooling will naturally reduce the PUE figure.
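Because the formula is simple, the effect is easy to verify with a quick sketch; the loads below are made-up values, not data from the Schneider Electric study.

```python
# PUE = total facility power / IT equipment power.
# Loads are illustrative assumptions, not figures from the study.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it_load = 1000.0        # kW drawn by IT equipment
cooling_before = 500.0  # kW spent on cooling at the lower CHW temperature
cooling_after = 250.0   # kW after raising CHW temperature (assumed halving)

print(f"PUE before: {pue(it_load, cooling_before):.2f}")  # 1.50
print(f"PUE after:  {pue(it_load, cooling_after):.2f}")   # 1.25
```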
The Schneider Electric study found that PUE for the two data centres examined was reduced by 14% in the case of Miami and 16% in the case of Frankfurt.
76% of organisations across Europe have increased IT and technology budgets this year.
Toshiba reveals that IT and technology budgets within European businesses will increase this year for more than three quarters (76%) of organisations. This rise in IT spend is directly linked to the number of remote workers within businesses, with those companies with higher numbers of remote workers indicating greater increases in their investment in new solutions and technologies.
The study of more than 1,000 senior IT decision makers from medium and large organisations, which was conducted in partnership with Walnut Unlimited, demonstrated that priorities for this increased investment are focused on data security (62%), cloud-based solutions (58%) and improving productivity (54%). When compared to similar IT decision maker research conducted by Toshiba back in 2016, data security has increased in terms of importance (54% in 2016), as has investment in cloud-based solutions, with 58% of organisations considering it a top priority today compared to 52% in 2016.
While all country markets surveyed (UK, France, Germany, Spain and Benelux) saw an increase in IT spend, Spain demonstrated the most significant change, with 86% of organisations indicating an increased IT and technology budget for the next twelve months. Similarly, businesses in the transport and logistics sector were the most likely to have an increased budget (89%), while only 52% of government and public sector organisations noted that there would be a larger spend on IT and technology.
Security concerns and evolving working patterns
Offering employees flexibility in their working patterns continues to be of paramount importance to organisations across Europe. The study reveals that 68% of respondents said at least a tenth of their employees work primarily while travelling or with no fixed location.
This increase in flexible working is a clear driver behind the top three investment priorities being data security, cloud-based solutions and improving productivity. When asked about priorities for improving productivity for this increasingly mobile workforce, almost half (47%) of respondents indicated that better employee training was critical, with 43% of respondents stating that more innovative use of digital tools was a priority.
Technology to support remote and frontline workers
To help ensure worker productivity, regardless of where employees are working from, there is a distinct shift in the solutions IT decision makers are rolling out across their organisations. At present, 61% of respondents indicated that they provide laptops for their remote teams and 55% offer business-provided smartphones. However, when asked what devices will be used most over the next three years, smartphones caught up with laptops (both at 38%) and businesses also indicated an appetite for newer technologies such as mobile edge computing devices (10%) and thin/zero client solutions (9%).
Larger enterprises (500+ employees) are set to lead the way when it comes to rolling out wearables in the workplace, with 24% predicting that a smart glasses solution will be rolled out for employees within the next 12 months. This is compared to just 16% of respondents from organisations with 100-499 employees. 82% overall predict that smart glasses will be used within their business in the next three years.
The drivers behind the uptake of enterprise smart glasses include the arrival of 5G, as referenced by 40% of respondents. Furthermore, 59% of those working in the manufacturing sector cited hands-free functionality as a key benefit of rolling out smart glasses to employees.
Maki Yamashita, Vice President, B2B PC, Toshiba Europe comments, “While the technologies available to employees are constantly evolving, it’s really interesting to see that the key challenges that IT decision makers are looking to address have remained relatively constant when compared to opinions in 2016. Organisations are continuing to balance how best to achieve the perfect blend of unhindered mobile productivity, while being protected by a robustly secure IT infrastructure. New solutions coming into the enterprise are helping to achieve this, but IT teams need to focus on the varying challenges and benefits for their individual sectors when determining how best to make these solutions work for their business.”
“Industry is precisely at that tipping point where the physical environment of computing gives way to the virtual world of cloud and its associated enabling technologies”, Apay Obang-Oyway, Director of Cloud & Software, UK&I at Ingram Micro comments.
He continued: “This is not just in bold initiatives here, and ingenious transitions there, not just from pioneers within certain verticals, and visionary disruptors – but for every organisation, everywhere, in every industry”.
Research that Ingram Micro conducted in mid-2017 showed that 71% of organisations believed digital transformation would be embedded within their organisation’s DNA by 2018. According to Apay, underpinning this entire revolution is the cloud; it is without doubt the single most transformative element in this radical rethinking of the way we work, live and play every day. “I suggest that cloud is no longer a journey – a view that implies new departures and sometimes extended good-byes to all that went before – it’s where we are, where we have arrived,” he commented.
“We have reached our destination. We can consider ourselves in a place where agility, scalability and speed are no longer qualities we discuss with awe, expectation, and anticipation. They are just part of the fabric of what we are, the infrastructure of what we do, the platform from which we spring off and up into achievements that not many years ago were no more than dreams and aspirations,” Apay added.
He believes that humankind will look back on 2017 in particular as a year that “brought such acceleration to change, that we’ll very soon be looking back on it like one of those epoch-defining moments where we all remember exactly what we were doing when they happened.”
Apay continued: “There are four key elements of the Fourth Industrial Revolution that fuse together to ignite infinite possibilities we can view as a cohesive change in how we do things. I for one do not believe that the word ‘revolution’ is in any way an overclaim for this period we’re embarking upon. The exciting and fundamental factor in this revolution is being driven by the channel, and Ingram Micro continues to play an essential role in enabling partners to realise these opportunities.”
According to Apay, the four elements are:
The Internet of Things
There is an overwhelming amount of data being produced by a world that has over 8.3 billion connected devices; the average UK household alone contains 8.3 of them.
We can do one of two things with the resultant data: ignore it (or merely store it to satisfy regulators), or interact with it to get smarter at what we do, how we do it, and how well we deliver on our customers’ expectations and needs. And the latter cannot be done without artificial intelligence.
Artificial Intelligence (AI)
Many organisations still have little understanding of AI, and those that do often harbour inherent fears, relating primarily to the security of AI-dependent networks. Yet AI is critical to the new world of the fourth revolution, especially in making sense of the vast volumes of data any organisation can take advantage of, provided it has the right tools to do so securely.
These are ‘legacy fears’. They date back to before we knew how robust and secure the cloud can be. Such fears fail to embrace solutions which leverage the “shared security model” as well as understanding the value of Infrastructure as a Service and Platform as a Service.
Automation
Far from reducing the demand for skilled labour, it will increase it. Automation is not about the invasion of the droids, the assumption of human characteristics by robots who we get to know and love.
It’s about making tedious processes faster and removing the human element to drive cost-effectiveness and to serve customers better. It’s a tool to embrace, not an entity to distrust.
Cloud Native verticals
We’ve been accustomed to seeing agile start-ups disrupting all types of businesses, while bigger players contend with legacy investments. An essential step in any digital transformation strategy has always been to migrate to the cloud or to deploy hybrid strategies, moving partly to the cloud and retaining critical operations down here in the physical sphere. Once again this is legacy thinking. There is value in “Cloud First” strategies in defining and empowering successful organisations.
The role of the myriad constituents of the channel is to augment Cloud, AI, IoT and Big Data to create and accelerate organisational value in this fourth industrial revolution. These themes will be explored further at this year’s Ingram Micro UK Cloud Summit.
A recent article by Steve Gillaspy of Intel outlined many of the challenges faced by those responsible for designing, operating, and sustaining the IT and physical support infrastructure found in today's data centers. This paper targets four of the five macro trends discussed by Gillaspy, how they influence the decision-making processes of data center managers, and the role that power infrastructure plays in mitigating their effects.
Outpost24 survey reveals security professionals have least confidence in the security of the cloud infrastructure and most confidence in their owned infrastructure and data centres
LONDON, U.K. – May 10, 2018 – Outpost24, a leading provider of Vulnerability Management solutions for commercial and government organisations, today announced the results of a survey of 155 IT professionals, which revealed that 42 percent ignore critical security issues when they don’t know how to fix them (16 percent) or don’t have the time to address them (26 percent).
The survey, which was carried out at the RSA Conference in April 2018, also asked respondents what area of their IT estate they consider to be the least secure. This revealed 25 percent are most concerned about their cloud infrastructure and applications, 23 percent are most concerned about their IoT devices, 20 percent said their mobile devices, 15 percent said their web applications, while 13 percent were most concerned about their data assets, databases and shares. Owned infrastructure and data centres seems to cause the least concern, with only five percent saying they were least secure.
Additionally, when survey respondents were asked how quickly their company remediates known vulnerabilities, 16 percent stated they review their security at a set time every month and seven percent said they do it every quarter, while a worrying five percent said they only carry out assessments and apply fixes once or twice a year. Only 47 percent of organisations patch known vulnerabilities as soon as they are discovered.
“The trend lines have already been drawn, and we can see from the survey results that they are not improving,” said Bob Egner, VP at Outpost24. “Our survey results suggest that businesses are adding technology as a key element of their strategy but not preparing their security teams with the skills and resources to keep up. It’s vital that organisations have full awareness of all assets that the business relies on, and that they are constantly tuning for the lowest possible level of cyber security exposure.”
Respondents were also asked if security testing is conducted on their enterprise systems: seven percent fail to conduct any security testing whatsoever, although, reassuringly, 79 percent of respondents said they do carry out testing. Respondents were also asked if their organisation had hired the services of penetration testers, and 68 percent revealed they had. The study also found that of those organisations that had hired penetration testers, 46 percent had uncovered critical issues that could have put their business at risk.
Egner added: “Outsourcing services like penetration testing can be an excellent way to get a holistic overview of the cyber security exposure across an organisation’s assets as well as expose threats within systems that may well have gone unnoticed. To maximize the value of testing investment, remediation action should be taken as close to the time of testing as possible. With the proliferation of connected technologies, the knowledge and resource gap continue to be key challenges. Security staff can easily become overwhelmed and lose focus on the remediation that can be most impactful to the business.”
Take a New Look at 3-Phase Power Distribution.
Alternating phase outlets alternate the phased power on a per-outlet basis instead of a per-branch basis. This allows for shorter cords, quicker installation and easier load balancing for 3-phase rack PDUs. Shorter cords mean less mass, making them less likely to come unplugged during transport of the assembled rack.
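A small sketch illustrates the load-balancing point: when outlets are populated top to bottom, per-outlet alternation spreads load across all three phases almost evenly, whereas per-branch grouping piles it onto one phase. The outlet counts and loads are assumptions for illustration.

```python
# Contrast per-outlet phase alternation (A, B, C, A, B, C, ...) with
# per-branch grouping (A, A, ..., B, B, ..., C, C, ...) when outlets
# are populated in order. Figures are assumptions for illustration.
from collections import defaultdict

PHASES = ("A", "B", "C")

def phase_totals(outlet_phases, loads_w):
    """Sum the load connected to each phase."""
    totals = defaultdict(float)
    for phase, load in zip(outlet_phases, loads_w):
        totals[phase] += load
    return dict(totals)

outlets = 24
alternating = [PHASES[i % 3] for i in range(outlets)]
per_branch = [PHASES[i // (outlets // 3)] for i in range(outlets)]

# Suppose only the first ten outlets are populated, at 300 W each.
loads = [300.0] * 10 + [0.0] * (outlets - 10)

print("alternating:", phase_totals(alternating, loads))  # roughly even A/B/C
print("per-branch: ", phase_totals(per_branch, loads))   # heavily skewed to A
```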
This year’s DCS Awards – as ever, designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena – took place at the by now familiar, prestigious Grange St Paul’s Hotel in the City of London. Host Paul Trowbridge coordinated a spectacular evening of excellent food, a prize draw (with champagne and a team of four at the DCA Golf Day up for grabs), excellent comedian Angela Barnes, music courtesy of Ruby & the Rhythms and, of course, the reason almost 300 data centre industry professionals attended – the awards themselves. Here we salute the winners.
The next Data Centre Transformation event, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, The University of Leeds and RISE SICS North, takes place on 3 July 2018 at the University of Manchester.
The programme is nearly finalised (full details via the website link at the end of the article), with some top class speakers and chairpersons lined up to deliver what is probably 2018’s best opportunity to get up to speed with what’s heading to a data centre near you in the very near future!
For the 2018 event, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation Manchester will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
We’re delighted to announce that GCHQ have confirmed that they will be providing a keynote speaker on their specialist subject – security! Has IT security ever been so topical? What a great opportunity to hear leading cybersecurity experts give their thoughts on the issues surrounding cybersecurity in and around the data centre.
We’re equally delighted to reveal that key personnel from Equinix, including MD Russell Poole, will be delivering the Hybrid Data Centre keynote address. If Adam knows about cybersecurity, it’s fair to say that Equinix is no stranger to the data centre ecosystem, where the hybrid approach is gaining traction in so many different ways.
Completing the keynote line-up will be John Laban, European Representative of the Open Compute Project Foundation.
Alongside the keynote presentations, the one-day DCT event will include:
A DATA strand that features two workshops – one on Digital Business, chaired by Prof Ian Bitterlin of Critical Facilities, and one on Digital Skills, chaired by Steve Bowes Phipps of PTS Consulting.
Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is, perhaps, not as yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed and, hopefully, come up with some helpful solutions.
The CENTRE track features two workshops – one on Energy, chaired by James Kirkwood of Ekkosense, and one on the Hybrid DC, chaired by Mark Seymour of Future Facilities.
Energy supply and cost remains a major part of the data centre management piece, and this track will look at the technology innovations that are impacting on the supply and use of energy within the data centre. Fewer and fewer organisations have a pure-play in-house data centre real estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So, in-house and third party data centre facilities, combined with a mixture of centralised, regional and very local sites, makes for a very new and challenging data centre landscape. As for connectivity – feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast moving world of networks, telecoms and the like.
The TRANSFORMATION strand features workshops on Automation (AI/IoT), chaired by Vanessa Moffat of Agile Momentum, and on The Connected World, together with a keynote on Open Compute from John Laban, the European representative of the Open Compute Project Foundation.
Automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day to day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70 minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists - specialists and protagonists - in the subject. The workshops will ensure that delegates not only earn valuable CPD accreditation points but also have an open forum to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
This month’s journal theme is centred on Energy Efficiency and I would like to take the opportunity to thank all members who contributed to this edition.
By Steve Hone, CEO & Founder The DCA
For the past 36 months, the DCA, together with seven other DCA strategic partners, has been working on a joint Horizon 2020 project called EURECA. This represents the second of three projects the data centre sector has secured research funding for as a direct result of DCA involvement. Bids for these projects take literally hundreds of combined hours, and the team was up against some very stiff competition – only around one in 50 bids receives funding! This is why, as a trade association, we are so proud when these efforts pay off to the benefit of the whole data centre sector.
The project call came from the Horizon 2020 innovation and research programme, dealing with the adoption of energy efficient best practice and the market uptake of energy efficient products and services within Europe’s public-sector organisations. Although originally focused on the public sector, the services developed are of equal value to the private sector, which shares the same challenges and is equally looking for best practice guidance and independent support for its IT transformation projects; support which EURECA can provide. For more information visit www.dca-global.org/research.
Member support and strong collaboration with strategic and academic partners were very much key to the success of this project, with all energy saving targets and KPIs being met. I would therefore like to take the opportunity to send my personal thanks to Mark Acton from CBRE, John Booth from Carbon3IT, Dr Frank Verhaegen from Certios, Mark Andre Wolf from Maki Consulting, Dr Jon Summers, and Zaak, Esther and Julie from Green IT Amsterdam, and finally, but by no means least, Prof Rabih Bashroush and his team at the University of East London as project coordinators. This was a great team effort and a clear demonstration of the value a trade association such as the DCA can bring to the table, and of the benefits of working together, sharing knowledge and promoting best practice. As Anton Chekhov said, “knowledge is of no value unless you put it into practice”.
By the time you read this foreword we will be well into May, so if you have missed out on some of the great conferences in the first half of the year, there are still plenty more coming up that you should consider attending. Datacloud Europe takes place in Monaco on 12-14th June – hopefully you will still have time to register to attend – followed in July by the DCA trade association’s own annual workshop-based conference, DCT 2018 (Data Centre Transformation), hosted in conjunction with Data Centre Solutions.
For the first time, DCT 2018 will be hosted at Surrey University on 5th July as well as at Manchester University on 3rd July. This is to give DCA members and delegates every opportunity to attend and benefit from the educational workshop and networking sessions taking place, irrespective of where they are based.
A full list of all the events the DCA hosts, sponsors, endorses or promotes can be found on the DCA website www.dca-global.org/events. We can then all enjoy a short summer recess to take a breath and recharge for what will be a busy end to the year, with events in Ireland, the Nordics, Africa, Singapore, Frankfurt, Paris and London, all supported by the data centre trade association. We look forward to seeing you at an event near you!
The DCA
W: www.dca-global.org
T: 0845 873 4587
E: info@dca-global.org
By Janne Paananen, technology manager at Eaton EMEA
Over the past decade, data centres have become one of the biggest culprits when it comes to electricity consumption. So much so that just last year, the world’s data centres used 416.2 terawatt hours of electricity – which is higher than the UK’s total energy consumption. This leap in power usage has been a direct result of the consistently evolving technological landscape, especially as both businesses and consumers demand greater connectivity and technologies such as the Internet of Things (IoT) continue to expand.
It is estimated that by 2025, data centres could be using 20 per cent of all available electricity in the world. Furthermore, if current projections are true, it’s likely that the world’s data centres’ energy consumption will triple over the next ten years. This increasing demand for energy will result in major environmental issues unless companies begin powering their data centres in a more sustainable way. If not, experts believe that by 2040, the ICT industry will be responsible for more than 14 per cent of global carbon emissions.
While it’s unnerving that data centres have the ability to impact the environment in such a major way, it’s clear that the industry has taken notice and is shouldering the responsibility of reducing its combined carbon footprint as well as decreasing overall energy usage. According to a recent survey by Data Center Knowledge, 70 per cent of users consider sustainability issues when selecting data centre providers. Beyond that, about one third of those who take sustainability into consideration believe it’s very important that their data centre providers power their facilities with renewable energy.
With that in mind, many data centre providers have already taken various steps to reduce their energy usage while prioritising efficiency. Google is a great example of this. The tech giant has recently pledged that between 2020 and 2025, all of its operations will be powered by renewable energy. And they’re not the only FAMGA members making big promises – both Facebook and Apple have made similar statements in recent months.
It’s reassuring to see that so many companies – especially major players in the industry – are making changes to become more sustainable. This self-awareness and push for alternate energy sources is working. The demand for green energy is changing our energy mix – just last year, 24 per cent of global electricity demand was produced through renewables such as wind, solar and hydro-power. Unfortunately, there’s still a catch – renewable energy generation can be intermittent.
Managing intermittency
At first, intermittency can sound like a major concern. As the energy market moves away from traditional fuel-based energy towards renewable energy, production will become more volatile, making it harder to predict – and therefore manage – electrical supply.
As data centres rely on a stable and reliable supply of energy, this instability is not ideal.
To diminish the risk, data centres can help energy providers maintain power quality by balancing consumption with power generation. On a global scale, providers within the energy sector need to work with organisations to respond quickly to grid-level power demands and keep frequencies within manageable boundaries to avoid grid-wide power outages.
Moving away from risk management and towards an extra revenue stream, data centres could be given a financial incentive to manage their demands on the grid, and even act as a power supplier. For example, data centres could make money from their existing investment in an uninterruptible power supply (UPS) by helping energy providers balance sustainable power demands, offering capacity back to the grid – without compromising critical loads. The UPS, which uses stored power in the event of a power failure, can be used to regulate demand from the grid and, by charging and discharging as needed, to feed stored battery power back to the grid.
This is becoming known as UPS-as-a-Reserve (UPSaaR). By putting data centres in control of their energy distribution and giving them the power to select how much capacity – and at what price – to offer back to the grid, they’re able to make a significant return on investment. This would be a big step towards the more sustainable management and use of energy, as well as putting money into the pockets of data centre providers, demonstrating a better ROI for their services.
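As a rough illustration of the control logic such a scheme implies, the sketch below discharges to the grid when frequency sags and charges when it rises, while always protecting a state-of-charge reserve for the critical load. The thresholds, reserve capacity and state-of-charge floor are assumptions for illustration, not Eaton’s implementation.

```python
# Illustrative UPS-as-a-Reserve (UPSaaR) control sketch: under-frequency
# triggers discharge to support the grid; over-frequency triggers charging.
# All thresholds and power levels are assumptions, not Eaton specifications.

NOMINAL_HZ = 50.0
DEAD_BAND_HZ = 0.1        # no action within +/-0.1 Hz of nominal
MAX_RESERVE_KW = 200.0    # capacity offered to the grid
MIN_RESERVE_SOC = 0.5     # never dip below 50% charge: protect critical load

def upsaar_setpoint(grid_hz: float, battery_soc: float) -> float:
    """Battery power in kW: positive = discharge to grid, negative = charge."""
    deviation = grid_hz - NOMINAL_HZ
    if abs(deviation) <= DEAD_BAND_HZ:
        return 0.0
    if deviation < 0:  # under-frequency: grid needs power
        if battery_soc <= MIN_RESERVE_SOC:
            return 0.0  # keep the reserve for the critical load
        # Ramp linearly to full reserve at 0.5 Hz below nominal.
        return min(MAX_RESERVE_KW, MAX_RESERVE_KW * (-deviation - DEAD_BAND_HZ) / 0.4)
    if battery_soc >= 1.0:  # over-frequency but battery already full
        return 0.0
    return -min(MAX_RESERVE_KW, MAX_RESERVE_KW * (deviation - DEAD_BAND_HZ) / 0.4)

for hz, soc in [(50.0, 0.9), (49.8, 0.9), (49.8, 0.4), (50.3, 0.7)]:
    print(f"f={hz} Hz, SoC={soc:.0%} -> {upsaar_setpoint(hz, soc):+.0f} kW")
```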
Data centres have the potential to support the global transition to a low carbon economy. By working together, data centres and energy providers can balance consumption – creating a well-established and thought out approach to energy generation and a greener future.
A multisectorial approach to energy efficiency
By Vasiliki Georgiadou, Project Manager, Green IT Amsterdam
The continuous infusion of IT services into our daily lives, with the emergence of the Internet of Things (IoT), distributed data centres and the cloudification of legacy computer systems, brings data centres to the front lines. Data centres are often, and accurately, perceived as critical infrastructures of our times, with concerns regarding their energy consumption dominating public discussions. Such concerns are partly valid: data centres’ energy consumption in the EU is predicted to reach 104 TWh in 2020, after all. And energy is a precious commodity.
As such, to ensure the sustained availability, reliability and security of Europe’s critical infrastructures, data centres continuously reinforce their investments in energy efficient business innovation. However, with the highly efficient and ever-evolving cooling technologies now available, along with IT consolidation and virtualization techniques, PUE-focused energy reduction and efficiency solutions no longer offer high returns.
To be sure, the usefulness of a data centre resides in the data processed, stored and transferred within and outside its boundaries. And although this is at times difficult to measure uniformly across data centres, the industry has made leaps and bounds in handling its core business effectively.
Nevertheless, a fundamental viewpoint, so far overlooked in the mainstream discussions, must be considered: at the end of the day, a data centre is nothing but a system where electricity comes in and heat comes out. Heat that, in most cases, is simply rejected to the surrounding environment and wasted. But by looking at the energy flows within a data centre, a new series of solutions can emerge.
A data centre can optimize both its design and operations to deliver heat to local heating (and cooling) networks – recovering, redistributing and reusing its residual heat for building space heating (residential and non-residential, such as hospitals, hotels, greenhouses and pools), service hot water and industrial processes.
Depending on the cooling technology in use, the data centre may harvest heat at the desired temperature level. Where necessary, a heat pump can be used to boost the low-calorific heat generated by the data centre before its delivery to the heat grid. The data centre may also be able to adjust its server room temperature set points to increase the amount of thermal energy generated. A data centre may further capitalize on heat storage, such as a thermal energy storage system, storing heat during the summer and delivering it to the heat grid during the winter in addition to the direct heat normally supplied, thus increasing its heat capacity.
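A back-of-envelope sketch shows the scale of these energy flows: nearly all electricity drawn by the IT load leaves as low-grade heat, and a heat pump can lift it to heat-grid temperatures at the cost of some extra electricity. All figures are illustrative assumptions, not from the article.

```python
# Illustrative heat-reuse arithmetic: captured low-grade heat is boosted
# by a heat pump before delivery to a heat grid. Figures are assumptions.

it_load_kw = 1000.0          # electrical IT load, ~all converted to heat
recoverable_fraction = 0.8   # share of heat actually captured (assumed)
heat_pump_cop = 4.0          # kWh of heat delivered per kWh of electricity

captured_heat_kw = it_load_kw * recoverable_fraction

# A heat pump with a given COP delivers COP units of heat per unit of
# electricity, of which (COP - 1) units come from the low-grade source.
pump_electricity_kw = captured_heat_kw / (heat_pump_cop - 1)
delivered_heat_kw = captured_heat_kw + pump_electricity_kw

print(f"Captured low-grade heat: {captured_heat_kw:7.0f} kW")
print(f"Heat-pump electricity:   {pump_electricity_kw:7.0f} kW")
print(f"Heat delivered to grid:  {delivered_heat_kw:7.0f} kW")
```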
Source: Datacentres connected to an intelligent energy network, M. Arts, Z. Ning (Royal Haskoning DHV), TTVL Magazine, 2016 DATACENTRES. (Publication in Dutch)
The main barrier in these scenarios is actually raised by local policies, operations and infrastructures that may or may not be in place to enable recovery, redistribution and reuse of residual heat.
So, it also falls on the shoulders of local communities and area developers. There are examples where all stakeholders do indeed work together to ensure data centres can integrate their own operations with the needs and wants of other sectors, linking their commons (B2B). One such example is the Green Datacenter Campus in the Amsterdam Metropolitan Area, with the Schiphol Area Development Company (SADC), a Green IT Amsterdam participant, orchestrating the efforts [https://www.sadc.nl/en/locations/green-datacenter-campus/]. Other communities are offering similar solutions, each leveraging its unique geographical, technological and business characteristics. For example, at Stockholm Data Parks [https://stockholmdataparks.com/], a data centre is by default offered the opportunity to plug into the city’s district heating network (B2C).
Heat services are not, however, the only answer. Data centres actually have the potential to offer a diverse portfolio of energy-related services by exploiting their IT operations and power and cooling infrastructure to participate in emerging electricity, heat and energy flexibility markets. Following this line of thought, the next generation of data centres should, by design, utilize resources effectively while ensuring seamless integration with their smart city ecosystem: smart grids and heating networks. In this context, simply focusing on energy reduction and efficiency practices applied only within the boundaries of one’s own data centre is no longer an option for those with the ambition to own and operate the Green Data Centres of the future. Silos must be broken down for data centres to reach their full potential, capitalizing on their unique position overlaying multiple networks: IT, electricity and heat.
Such is the frame of reference for the EU H2020 CATALYST project [http://project-catalyst.eu/], which aspires to turn data centres into flexible energy hubs that can sustain investments in renewable energy sources and energy efficiency. Leveraging the results of past projects, CATALYST will adapt, scale up, deploy and validate an innovative technological and business framework that enables data centres to offer a range of mutualized energy services to both electricity and heat grids, while simultaneously increasing the resiliency of their own energy supply.
Mutualized energy services will consist of energy flexibility, security and optimized management, tailored to data centre operators and targeted at managing the available non-grid renewable (PV, local storage, heat pumps) and non-renewable (backup generators) energy assets, as well as the IT assets (via cloud-based geo-spatio-temporal IT virtualization). Such energy services will be provided by data centres through appropriate open, standardized energy flexibility marketplaces, based for example on market models as defined by the Universal Smart Energy Framework (USEF) [https://www.usef.energy/]. These marketplaces may be instantiated either as mono-carrier energy marketplaces (electricity vs heat marketplaces) cleared sequentially, or as multi-energy marketplaces. Along this innovative value chain, new stakeholders will be willing to provide such energy services to data centres: ESCOs, energy suppliers, aggregators, and IT and cloud solution and technology providers. Cross-energy-carrier synergies between electricity and heat can also be exploited and managed, with a view to leveraging the flexibility potential of one energy carrier to offer energy services to another.
In this way, the CATALYST vision introduces a “Marketplace as a Service” (MaaS) instantiated in three emerging and innovative data centre revenue streams and markets: a) IT workload b) Electricity & Heat and c) Energy Flexibility.
To reach this vision, however, it is imperative that the data centre and energy sectors are brought closer together and start talking the same language. The newly launched Green Data Centre Stakeholder Group, established by the CATALYST consortium, aims to do just that. Data centres taking up a pivotal role in the energy transition will bring opportunities for energy efficient data centres not only to reduce their operating costs and improve their performance and efficient use of resources, but also to create new revenue streams through waste energy reuse and energy flexibility services.
Green IT Amsterdam is a non-profit organization that supports the wider Amsterdam region in realizing its energy transition goals. Our mission is to scout, test and showcase innovative IT solutions for increasing energy efficiency and decreasing carbon emissions. We share knowledge, expertise and ambitions, for achieving these sustainability targets with our public and private Green IT Leaders. Follow us on twitter @GreenItAms; visit our website http://www.greenitamsterdam.nl/.
By Colin Dean, Managing Director, Socomec U.K. Limited
The resilience of a Data Centre – the ability to remain operational even when there has been a power outage, hardware failure or other unforeseen disruption – is fundamental when it comes to maximizing energy efficiency and uptime.
The remit of the Data Centre Manager is to balance the upward trend in energy costs, stringent legislative requirements and environmental policy, alongside the proliferation of big data – all whilst generating efficiency gains and minimizing costly downtime. Whether associated with productivity losses, revenue losses, longer term customer attrition, system recovery costs or the longer term impact of reputational damage, the total cost of downtime can be crippling – and is simply not an option in today’s hard-working electrical infrastructures.
Cause and effect
When it comes to voltage quality problems and associated downtime, every situation is unique. A typical server system can experience more than 125 events each month – 88% of which are attributed to surges and transient events. Black-outs caused by accidental events, short-circuits, switching on of heavy loads and overloads – as well as weather events – can all negatively impact trading and revenues, resulting in the loss of data and hardware damage or disk crashes.
Impurities will also take their toll – a culprit of data corruption and wear to electronic parts, and sometimes the cause of irreparable component failure. Attributed to a range of factors including spikes, lightning, surges, harmonics and noise, the resilience of Data Centres against both impurities and black-outs is brought into sharp relief when the total potential cost of downtime is considered.
But how achievable is the desired 99.999% availability? How can Data Centres – of all shapes and sizes – mitigate against threats to their resilience and achieve a continuous and high quality power supply?
Can you be sure of 99.999% availability?
Configurable redundancy, no single point of failure, devices designed for superior robustness, anomaly detection, rapid repair time and maintenance based on hot-swap modules … the delivery of a reliable, safe, high quality power supply requires an optimized combination of vital factors, all of which are key to improving resilience.
Furthermore, it is increasingly important to consider the complete economic model when specifying equipment and system upgrades, treating investment as a strategic asset rather than a short-term cost burden.
Resilience through modularity
The flexibility of a modular architecture enables an organization to adapt – rapidly – to ever changing requirements.
Rightsizing through modularity in design enables power protection capacity to be added when it’s needed, to meet actual demand, rather than being deployed in full up front. This approach minimizes wasted capacity when projected and actual future loads diverge.
Furthermore, whilst redundancy provides an attractive MTBF, the rapid repair times associated with a modular configuration can reduce MTTR to a level that enables Six Nines to be achieved – 99.9999% availability. By working directly with a manufacturer, with intricate knowledge of a system, it is possible to identify and replace a defective module in under 30 minutes.
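A quick worked example, using the standard steady-state formula availability = MTBF / (MTBF + MTTR) and an assumed MTBF rather than any Socomec figure, shows how strongly repair time drives the result.

```python
# Steady-state availability = MTBF / (MTBF + MTTR).
# The MTBF below is an illustrative assumption, not Socomec data.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

mtbf = 500_000.0  # assumed mean time between failures, in hours

for mttr, label in [(4.0, "4 h repair (engineer call-out)"),
                    (0.5, "30 min hot-swap module replacement")]:
    a = availability(mtbf, mttr)
    print(f"{label}: availability = {a:.7f} "
          f"({'six' if a >= 0.999999 else 'five'} nines)")
```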
Global power management expert, Socomec, is driving a stream of innovation in this field – specifically engineered to guarantee the performance of the new electrical ecosystem. One such example is the Socomec Modulys GP2 - a 3-phase modular UPS system specifically engineered with full flexibility and fewer parts in order to simplify and optimise every step of the integration process – from sizing to installation – de-risking the entire project.
The Modulys GP2 RM version is designed for existing 19” rack integration across multiple applications. Easy to integrate and install whilst simple to manage and maintain, it provides maximum availability and power protection in a compact design – leaving space for other rack-mounted devices.
A scalable concept – for larger data centre applications
The most comprehensive UPS ranges available today include products that have been developed specifically for large scale data centres.
When performance has been verified by independent, external bodies, users can be assured that the product has been tested in real and varied data centre working conditions. For complete and scalable systems that are easy – and safe – to extend, it’s important to look for online double conversion load protection: this means that when systems are extended or maintenance is being carried out, the intervention is safe for both operators and the load.
Agility, accuracy, availability
Availability – and therefore resilience – can also be optimised through a proactive approach to monitoring and therefore expediting remedial action where required, reducing the MTTR.
The status of key operating parameters can be tracked in real time, delivering a greater degree of agility and accuracy – both virtual and physical anomalies can be addressed rapidly, in turn achieving maximum uptime and reduced operating expenditure.
Fully digital, multi-circuit plug and play measurement concepts, with a common display for multi-circuit systems, can provide accurate and effective metering, measurement and monitoring of electrical power quality. Infinitely scalable, the latest systems are capable of monitoring thousands of connection points from 20A to 6300A – and will offer an accuracy of Class 0.5 to IEC 61557-12 from 2% to 120% of the current sensor primary rating.
Equipped for the smart facilities of today – and tomorrow – these latest product developments are connecting the world of energy with the digital revolution to help reduce installation costs and improve performance levels, securing power and making energy management simple across critical applications.
Smart and connected – the future of power monitoring has been reinvented
The digital industrial revolution is creating a new breed of electrical ecosystem. The drive towards common digital architecture is maximising the potential value of the Internet of Things - the result is unsurpassed efficiency including the benefits of more automated, centralised systems.
For smart and connected energy management, it is now possible to more precisely monitor protective devices - remotely and in real time - across the entire electrical installation – without any wiring or additional equipment.
Socomec’s Diris A40 and Diris Digiware metering and monitoring solutions guarantee the availability and safety of the electrical installation, whilst also monitoring performance, checking power quality and managing loads.
With the simplest possible integration, the Diris A40 is easy to fit within new and existing installations. Assisted configuration and error detection cuts the commissioning time by half whilst also guaranteeing the accuracy of the measurements. Furthermore, the connection to the Cloud means that data can be automatically exported for remote processing.
With three additional new technologies available with both the Diris D40 and Diris Digiware systems, unsurpassed levels of accuracy can now be achieved.
Track the status of protective devices without additional wiring
When a protective device trips it means that a process or a system has been unexpectedly shut down. This can rapidly escalate into a crisis if the load is critical to life safety or economically essential.
Monitoring the status of a protective device is traditionally done using the auxiliary contact of the circuit breaker or a fuse blown indication system. These signals are then wired back to a PLC outstation adding more hardware and manufacturing time.
Status change immediately detected
Socomec’s VirtualMonitor technology – with iTR retrofit current sensors and Digiware S Monobloc current module - is able to detect that a protective device has been opened and will alert the site team over the associated meter’s communication bus. The status change is detected immediately and an alarm can be generated and shared.
The system can even differentiate between a trip due to a fault and a manual opening or tripping of the protective device, so that the site team knows if it needs to investigate further.
The Digiware S brings this technology down to the final circuit (MCB level) where it was previously impossible to monitor an auxiliary contact. VirtualMonitor marks a major step forward in metering, removing additional hardware and wiring whilst retaining or enhancing visibility of the electrical installation.
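As an illustration of how site teams might consume such status events, here is a minimal sketch of a polling loop that distinguishes a fault trip from a manual opening. The status codes and the read_status() stand-in are invented for illustration; they are not Socomec’s actual register map or protocol:

```python
# Minimal sketch: watching a metering bus for protective-device status changes
# and raising an alarm that distinguishes a fault trip from a manual opening.
# Status codes and read_status() are hypothetical stand-ins.
import random
import time

CLOSED, OPENED_MANUALLY, TRIPPED_ON_FAULT = 0, 1, 2

def read_status(circuit_id: int) -> int:
    """Stand-in for a read over the meter's communication bus."""
    return random.choices([CLOSED, OPENED_MANUALLY, TRIPPED_ON_FAULT],
                          weights=[0.98, 0.01, 0.01])[0]

def monitor(circuit_ids, polls=10, poll_seconds=0.1):
    last = {cid: CLOSED for cid in circuit_ids}
    for _ in range(polls):
        for cid in circuit_ids:
            status = read_status(cid)
            if status == last[cid]:
                continue
            if status == TRIPPED_ON_FAULT:
                print(f"ALARM: circuit {cid} tripped on a fault - investigate")
            elif status == OPENED_MANUALLY:
                print(f"NOTICE: circuit {cid} opened manually - no further action")
            else:
                print(f"INFO: circuit {cid} closed again")
            last[cid] = status
        time.sleep(poll_seconds)

monitor(circuit_ids=[1, 2, 3])
```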
Energy quality and resilience
By fitting permanent power quality monitoring systems, it is possible to check the reliability, efficiency and safety of an organisation’s electrical system. Data is collected and analysed in order to diagnose problems, identify deterioration in performance and highlight areas of risk – as well as locating the causes of electrical disturbances.
The latest network analyser equipment will ensure that the electrical system runs continuously and at optimised rates. By measuring electrical parameters and status, analysing the quality of energy according to Class A of IEC 61000-4-30, and measuring differential current – whilst also providing GPS synchronisation – downtime and associated production losses are minimised, efficiency is improved, and running and maintenance costs are optimised.
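By way of illustration only – not the method of any particular analyser – one headline power quality metric, total harmonic distortion (THD), can be estimated from sampled waveform data along these lines:

```python
# Illustrative only: estimating voltage THD from one cycle of sampled data.
# A real Class A instrument per IEC 61000-4-30 uses tightly specified
# measurement windows and aggregation; this sketch just shows the idea.
import numpy as np

def thd_percent(samples: np.ndarray, max_harmonic: int = 40) -> float:
    """THD = sqrt(sum of harmonic magnitudes squared) / fundamental, in %."""
    spectrum = np.abs(np.fft.rfft(samples))
    fundamental = spectrum[1]               # bin 1 = one cycle per window
    harmonics = spectrum[2:max_harmonic + 1]
    return 100.0 * np.sqrt(np.sum(harmonics**2)) / fundamental

# Example: a 50 Hz wave with 5% third harmonic, sampled over exactly one cycle.
t = np.linspace(0, 0.02, 512, endpoint=False)
wave = np.sin(2*np.pi*50*t) + 0.05*np.sin(2*np.pi*150*t)
print(f"THD ~ {thd_percent(wave):.1f}%")    # ~5%
```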
Power hungry cooling systems
All Data Centre facilities have dynamic environments, making it difficult to manage thermal airflow. The challenge is to match the cooling delivered to a facility with the heat generated by the current IT load – all of which needs to be monitored.
Automatic transfer switches can enhance power availability and simplify the electrical architecture, ensuring standby and alternate power availability. Deployed in the most challenging applications around the world, they have been specifically engineered to be virtually maintenance-free.
By ensuring that the switching system is fully certified to BS EN 60947-6-1, and choosing a manufacturer-built solution, these fully programmable switches can be integrated into the data centre management system via serial communication. When fitted with a maintenance bypass, they can be commissioned, tested and inspected with no downtime for the mechanical loads they typically serve.
Often mistakenly overlooked in favour of circuit breakers, fuses provide a compact protection solution for low current (~32A) loads fed from a busbar or PDU with a high prospective short circuit current. As energy efficiency is improved by reducing the distance and impedance between transformer and final load, prospective short circuits can approach 100kA.
For very large switchboards, there is the added security of short circuit and heat rise testing by Tesla lab (12,000 A AC three phase and 6,000 A AC three phase and DC) – an independent laboratory specialising in testing of LV components, switchgear and switchgear assemblies.
High performance switching
One example, Socomec’s ATyS d H, is a remote three-phase transfer switch available in 3- and 4-pole versions with an integrated dual power supply – engineered for low voltage, high power applications that demand high performance and rapid, reliable switching.
The open transition transfer is performed on-load, in line with IEC 60947-6-1 and GB 14048-11 standards (Class PC).
Offering high short circuit withstand ratings of 143kA Icm (making) and 65kA for 0.1 second Icw (withstand), the performance in terms of load switching capacity is AC33iB (6 x In, cos φ 0.5) without derating.
Safe on load transfer: I-0-II
The ATyS d H includes two mechanically interlocked switches to ensure fast switching whilst providing a neutral (Off - 0) position. This ensures that the main and alternative power supplies do not overlap. The 0 position can also be used for safe maintenance of the installation, providing isolation between both sources and the load – a vitally important factor in this specific application.
Automatic (ATSE) or remotely operated (RTSE) controls
The ATyS d H is an RTSE that can be easily used in conjunction with an ATS controller – C20/30 or C40, depending on the application - in order to operate as an ATSE.
Business continuity - guaranteed
By working with the original manufacturer, it is possible to access a superior understanding of changeover technology, the product itself and its software – as well as inspection and testing safety procedures and the integration of equipment within a customer’s unique working environment.
With the benefit of hindsight, preventive maintenance would be top of all of our agendas when it comes to Data Centre resilience. A comprehensive preventive maintenance programme will optimise operating efficiency. Mechanical, electrical and battery inspections are carried out along with environmental checks. Equipment cleaning and dust removal is undertaken, and electronics testing programmes and software updates are completed.
With a detailed maintenance report, it is possible to increase resilience and reliability with a regular maintenance programme.
To learn how you can benefit from an integrated approach to energy efficiency within Data Centres, contact Socomec’s team: info.uk@socomec.com, www.socomec.co.uk, 01285 86 33 00.
By Chris Cutler, Corporate Account Manager and Data Centre Efficiency Specialist at Riello UPS Ltd
It was long-standing publication The Economist that said it best: “data is to this century what oil was to the last one – a driver of growth and change”. The relentless rise of Industry 4.0 and the ‘Internet of Things’ has led to an explosion of interconnectivity. By 2025 it’s predicted that the average person will interact with connected devices around 4,800 times a day – that’s once every 18 seconds!
And just as there was a price to pay for our reliance on oil to fuel previous industrial revolutions, there’ll be consequences too for society’s dependency on data to drive future growth and technological advances. According to the Global e-Sustainability Initiative (GeSI), datacentres already consume over 3% of the world’s total electricity and generate 2% of our planet’s CO2 emissions – the equivalent of the entire global aviation industry.
All these extra sensor-fitted devices, apps, and gadgets will create enormous amounts of data that will require safe, reliable storage and processing. The National Grid is creaking from decades of under-investment, so it’s not simply a case of cranking up electrical capacity to satisfy this increased demand.
Data centres will have to do more with less, and with energy bills accounting for up to 60% of total running costs, operators need to embrace energy efficiency for economic as well as environmental reasons.
We’re living in the age of hyperscale data centres. Apple is building a single $1 billion super-facility in Ireland that will eventually use 8% of the national power capacity. At that scale, any efficiency shortcomings and unnecessary waste will obviously be amplified. But whatever size or set-up, whether on-site, colocation, or cloud, no data centre can afford to ignore the need to become more efficient and reduce their power consumption.
The concept of a ‘green data centre’ is nothing new and great technological strides have already been made in recent years by many facilities to minimise their environmental footprint, thanks in no small part to initiatives such as the DCA’s own Certification Scheme, which has helped to place energy efficiency firmly on the industry’s agenda.
Significant progress has been made with cooling technologies and techniques. Necessary advances when you consider air conditioning can account for around half of a data centre’s overall power usage. But these efficiency gains on their own aren’t enough. Fortunately there are other essential parts of a data centre’s infrastructure that can produce similar energy savings.
Modular UPS – Power Protection Using Less Energy
At the heart of any data centre lies its uninterruptible power supply (UPS) system. It is the first line of defence against any power outages or disruption, and the ultimate insurance policy to minimise damaging downtime on those inevitable occasions when disaster strikes.
Up until recently, a data centre’s UPS units were very much part of its energy consumption problem. Typically large standalone towers, these critical power protection systems relied on older technology that could only achieve optimum efficiency when carrying heavy loads of 80-90%. There was a tendency to oversize such fixed-capacity units during initial installation to provide the required redundancy, so systems regularly ran at lower loads than was ideal – an inefficient process wasting significant amounts of energy.
These sizeable towers also pumped out plenty of heat from their fans and electrical components, so needed significant levels of energy-intensive air conditioning to keep them cool enough to operate safely.
However, technology has moved on rapidly in recent years to the point where a UPS can now actually be part of a data centre’s energy efficiency solution. Modular UPS systems – which replace the older standalone units with compact individual rack-mount style power modules paralleled together to provide capacity and redundancy – deliver performance efficiency, scalability, and ‘smart’ interconnectivity far beyond the capabilities of their predecessors.
The modular approach ensures capacity corresponds closely to the data centre’s load requirements, removing the risk of oversizing, cutting initial installation costs, and reducing day-to-day power consumption. This leads to the double benefit of cutting both energy bills and the site’s carbon footprint. It also gives facilities managers the flexibility to add extra power modules as and when the need arises, offering the in-built scalability to ‘pay as you grow’.
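A back-of-envelope sketch makes the oversizing penalty concrete; the load figure and efficiency values below are illustrative assumptions, not any vendor’s published curve:

```python
# Back-of-envelope sketch: annual energy lost in an oversized fixed UPS versus
# a right-sized modular system. Load and efficiency figures are assumptions.
HOURS_PER_YEAR = 8760

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy dissipated by the UPS itself over a year at a constant load."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

load_kw = 150  # actual IT load (assumed)
# Assumed: a large fixed unit running at ~30% load where efficiency sags,
# versus a modular system deploying only the modules needed, near peak efficiency.
fixed_loss = annual_loss_kwh(load_kw, efficiency=0.90)
modular_loss = annual_loss_kwh(load_kw, efficiency=0.96)
print(f"Fixed, oversized:     {fixed_loss:,.0f} kWh/year lost")
print(f"Modular, right-sized: {modular_loss:,.0f} kWh/year lost")
print(f"Saving:               {fixed_loss - modular_loss:,.0f} kWh/year")
```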
Transformerless modular UPS units generate far less heat than static, transformer-based versions and need significantly less air conditioning too. They are also smaller and lighter, so have a substantially reduced footprint, and are easier to maintain because each individual module is ‘hot swappable’ so can be replaced as and when required without the whole system having to go offline.
Another benefit of the move to modular is that the units easily integrate with Energy Management Systems (EMS) or Data Centre Infrastructure Management (DCIM) software, transforming them into networks of ‘smart’ UPSs that constantly collect, process, and exchange performance data such as operating temperatures, UPS output, and mains power voltage.
This information is used in real-time to help constantly optimise the system’s performance, as well as highlighting other areas where additional efficiency savings can be made. In hyperscale datacentres where UPSs are spread across several sites in different cities or even countries, this interconnectivity combined with the ability to remotely monitor performance helps to optimise load management and minimise the amount of energy used.
Harnessing Battery Power As Renewable Energy
One final area to highlight is the potential a UPS unit, or more specifically its batteries, has as a generator of renewable energy.
Many modern UPSs have the option to use Lithium-Ion (Li-Ion) batteries rather than the typical sealed lead acid (SLA) type. Li-Ion batteries deliver far greater power density while taking up approximately half the space of their SLA equivalents. This enables them to store a surplus of electricity generated during off-peak periods when prices are lower. This saved power can either be used during expensive peak times, to ward off downtime in the event of an outage, or even sold back to the National Grid.
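The arbitrage arithmetic is straightforward; in this rough sketch the battery capacity, tariffs and round-trip efficiency are all hypothetical illustration values:

```python
# Rough sketch of the peak/off-peak arbitrage idea. Battery size, tariffs and
# round-trip efficiency are hypothetical illustration values.
usable_kwh = 500            # spare battery capacity reserved for arbitrage
off_peak_price = 0.08       # GBP per kWh (assumed)
peak_price = 0.20           # GBP per kWh (assumed)
round_trip_efficiency = 0.90

delivered_kwh = usable_kwh * round_trip_efficiency
daily_saving = delivered_kwh * peak_price - usable_kwh * off_peak_price
print(f"Daily saving: GBP {daily_saving:.2f}")   # ~GBP 50/day on these numbers
```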
There’s already an estimated 4 GW of capacity sitting in UPS units across the UK, and as the nation edges towards a demand side response (DSR) model, tapping into the potential of Li-Ion batteries as a renewable energy source could pay economic and environmental dividends. It will, however, need something of a shift in mindset for mission-critical organisations to think of their power protection system – which is their ultimate insurance policy – as a means to generate green energy.
Every UPS has a lifespan, with industry best-practice suggesting a system should be replaced every 7-10 years. Many of the units installed during the boom years of data centre growth earlier in the decade will soon be ready for replacement. If that’s the scenario facing your facility in the near future, you’ll be in prime position to reap the energy efficiency and cost benefits that the move to modular UPS can bring.
With more than a decade’s experience in the critical power protection industry and a proven track-record in the datacentre sector, Chris Cutler is Corporate Account Manager for Riello UPS. He has particular expertise regarding large-scale 3 phase UPS installations and datacentre UPS efficiency.
Riello UPS is a leader in the design, manufacture, installation, and maintenance of award-winning UPS and standby power systems from 400VA to 6MVA that promote uptime and minimise system downtime in datacentres and industries such as manufacturing, healthcare, transportation, education, and the emergency services.
For further information call 0800 269 394, email sales@riello-ups.co.uk or visit www.riello-ups.co.uk
By Andrew Warren, Chairman of the British Energy Efficiency Federation
The National Infrastructure Commission is due to reveal details of a nationwide energy efficiency programme. The time is right to reverse the trend in government investment.
This month the National Infrastructure Commission (NIC) is due to publish details, revealing just what an ambitious nationwide programme to stimulate energy efficiency looks like.
After several years of seriously declining installation of energy saving measures, with both activity and employment down, such a potential step change is completely overdue.
For too long, there has been a failure to consider how energy efficiency might be considered as other than as a set of individual, unrelated programmes. There is an Energy Company Obligation here. An Energy Saving Obligation Scheme there. And a landlords’ minimum energy standards regulation, a Climate Change Agreement, a Home Energy Conservation Act, a Green Deal, an emissions trading scheme.
Civil servants administering one programme frequently have little or no knowledge of the workings of, let alone the possible synergies with, programmes other than their own - particularly if they are run from different departments. Relevant initiatives taken by one or more of the devolved administrations are too often met with a blank stare in Whitehall.
Across the years, there have been several attempts to draw the different parts together, to make energy efficiency policy more than the sum of its (sometimes contradictory) parts. The most recent attempt to introduce coherence came under the Coalition government, with the formation of the Energy Efficiency Deployment Office. But this was scrapped as the 2015 general election was called. And the different programme administrators seem to have gone off along their own pathways.
This has coincided with the creation of the National Infrastructure Commission, charged with ensuring purposeful delivery of investment programmes. But initially, as the Conservative MP for Eddisbury, Antoinette Sandbach, so pointedly observed: “When MPs talk about infrastructure spending, one is put in mind of boys with their toys: big trains, roads, railways and power stations.”
Her “boys’ toys” dig about conventional infrastructure thinking was made last month, as she launched the first Parliamentary debate on energy efficiency for over six years. It became the leitmotif of the entire occasion.
The reasons given for urging the government to designate energy efficiency measures as infrastructure spending were compelling.
Energy efficiency spending is a one-off cost, so it is closer to capital than revenue expenditure. By reducing energy consumption, those investments free up energy sector capacity. That reduces (or at least delays) the need for new capacity to come online. That same new capacity - in the form of generation plants, networks and energy storage - would be considered infrastructure spending by the Government, and potentially would involve a large amount of Government expenditure.
So the question Ms Sandbach – described by fellow Conservative MP James Heappey as Parliament’s leading champion for energy efficiency - poses is stark: “Why invest in the big plant if we can roll out energy efficiency measures across the country, as part of an infrastructure project? Energy efficiency measures provide a public service: they insulate consumers – literally - against the volatility of energy markets.
“Likewise, they provide health and well-being benefits, by enabling consumers to heat buildings more effectively. And they have the knock-on consequences of reducing our carbon emissions and contributing towards our overall aim of clean, green growth.”
Last October the National Infrastructure Commission published a report setting out its priorities, on carbon in particular. That report states categorically: “the UK has old and leaky buildings - both residential and commercial. This increases the amount of energy needed to regulate their temperature…it is therefore essential that demand for energy from these buildings is reduced.”
The NIC acknowledges that progress on improving energy efficiency in the UK has slowed. In particular, in the housing stock, “annual rates of cavity wall and loft insulation in 2013-15 were respectively 60 per cent lower and 90 per cent lower than annual rates in 2008-11.”
Its conclusions are very pointed: “There are no plans on the part of the Government to reverse this trend.”
Acknowledging all this, in the same parliamentary debate, the energy minister Claire Perry did caution: “There is no packet of money under the Chancellor’s desk marked infrastructure.”
Back in 1982 the Commons energy select committee posed the basic conundrum: why do governments refuse to compare the cost-effectiveness of investments in energy conservation measures with the alternative of providing still further energy supplies?
There has never been a satisfactory official answer. But with overall UK energy consumption already down 16 per cent this past decade, the answer from the marketplace has become very obvious.
No question. The debate is now over, signaled by the preparedness of the NIC to alter the definition of which toys the boys concerned with energy policy should be playing with in future.
This surely must galvanise the government to recognise those recent declines in energy efficiency investment. And to produce policies that will seriously “reverse this trend.”
IT teams charged with migration projects shouldn’t be afraid to wring as much support and advice out of cloud service providers as possible so that they can achieve a pain-free migration and start reaping the benefits of the cloud.
By Monica Brink, EMEA Marketing Director, iland.
Cloud adoption by UK companies has now neared 90%, according to Cloud Industry Forum, and it won’t be long before all organisations are benefiting to some degree from the flexibility, efficiency and cost-savings of the cloud. Moving past the first wave of adoption we’re seeing businesses ramp up the complexity of the workloads and applications that they’re migrating to the cloud. Perhaps this is the reason that 90% is also the proportion of companies that have reported difficulties with their cloud migration projects. This is frustrating for IT teams when they’re deploying cloud solutions that are supposed to be reducing their burden and making life simpler.
With over a decade of helping customers adopt cloud services, our iland deployment teams know that performing a pain-free migration to the cloud is achievable but that preparation is crucial to project success. Progressing through the following key stages offers a better chance of running a smooth migration with minimum disruption.
1. Set your goals at the outset
Every organisation has different priorities when it comes to the cloud, and there’s no “one cloud fits all” solution. Selecting the best options for your organisation means first understanding what you want to move, how you’ll get it to the cloud and how you’ll manage it once it’s there. You also need to identify how migrating core data systems to the cloud will impact on your security and compliance programmes. Having a clear handle on these goals at the outset will enable you to properly scope your project.
2. Before you begin – assess your on-premises environment
Preparing for cloud migration is a valuable opportunity to take stock of your on-premises data and applications and rank them in terms of business-criticality. This helps inform both the structure you’ll want in your cloud environment and also the order in which to migrate applications.
Ask the hard questions: does this application really need to move to the cloud or can it be decommissioned? In a cloud environment where you pay for the resources you use it doesn’t make economic sense to migrate legacy applications that no longer serve their purpose.
Once you have a full inventory of your environment and its workloads, you need to flag up those specific networking requirements and physical appliances that may need special care in the cloud. This ranked inventory can then be used to calculate the required cloud resources and associated costs. Importantly, this process can also be used to classify and prioritise workloads, which is invaluable in driving costs down in, for example, cloud-based disaster recovery scenarios where different workloads can be allocated different levels of protection.
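As a rough illustration of that costing step, the sketch below turns a ranked inventory into a monthly estimate; the workloads, criticality tiers, DR multipliers and unit prices are all hypothetical:

```python
# Sketch of turning a ranked workload inventory into a resource and cost
# estimate. Tiers, DR multipliers and unit prices are hypothetical.
workloads = [
    # (name, vCPUs, RAM GB, criticality tier)
    ("payments-db",   16, 128, "critical"),
    ("intranet-cms",   4,  16, "standard"),
    ("legacy-report",  2,   8, "decommission?"),  # candidate for retirement
]
DR_MULTIPLIER = {"critical": 2.0, "standard": 1.25}  # replica resources per tier
PRICE = {"vcpu_month": 18.0, "ram_gb_month": 2.5}    # assumed GBP unit prices

total = 0.0
for name, vcpus, ram_gb, tier in workloads:
    if tier not in DR_MULTIPLIER:
        print(f"{name}: review before migrating")
        continue
    m = DR_MULTIPLIER[tier]
    cost = m * (vcpus * PRICE["vcpu_month"] + ram_gb * PRICE["ram_gb_month"])
    total += cost
    print(f"{name}: GBP {cost:,.2f}/month including {tier} DR cover")
print(f"Estimated total: GBP {total:,.2f}/month")
```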
3. Establish tech support during and post-migration
Many organisations take their first steps into the cloud when looking for disaster recovery solutions, enticed by the facility to replicate data continuously to a secondary location with virtually no downtime or lost data. This is fundamentally the same mechanism as a cloud migration, except that a migration is planned at a convenient time rather than prompted by an extreme event. This means that once the switch is flipped, the migration should be as smooth as a DR failover. However, most organisations will want to know that there is an expert on hand should anything go wrong, so 24/7 support should be factored into the equation.
4. Boost what you already have
Look at your on-premises environment and work out how to create synergies with the cloud. For example, VMware users will find there’s much to be said for choosing a VMware-based cloud environment equipped with tools and templates specifically designed for smoothly transitioning initial workloads. It’s an opportunity to refresh the VM environment and build out a new, clean system in the cloud. This doesn’t mean you can’t transition to a cloud that differs from your on-premises environment, but it’s a factor worth taking into consideration.
5. Migration of physical workloads
Of the 90% of businesses that reported difficulty migrating to the cloud, complexity was the most commonly cited issue, and you can bet that shifting physical systems is at the root of much of that. They are often the last vestiges of legacy IT strategies and remain because they underpin business operations. You need to determine if there is a benefit to moving them to the cloud and, if so, take one of two options: virtualise the ones that can be virtualised – possibly using software options – or find a cloud provider that can support physical systems within the cloud, either on standard servers or co-located custom systems.
6. Determine the information transfer approach
The approach to transferring information to the cloud will depend on the size of the dataset. In the age of virtualisation and of relatively large network pipes, seeding can often be viewed as a costly, inefficient and error-prone process. However, if datasets are sufficiently large, seeding may be the best option, with your service provider supplying encrypted drives from which they’ll help you manually import data into the cloud. A more innovative approach sees seeding used to jumpstart the migration process. By seeding the cloud data centre with a point in time of your environment, you then use your standard network connection with the cloud to sync any changes before cut-over. This minimises downtime and represents the best of both worlds.
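The seed-or-stream decision usually comes down to simple transfer-time arithmetic, as in this sketch; the link utilisation figure and the three-day threshold are illustrative assumptions:

```python
# Sketch of the seed-or-stream decision: how long would the initial copy take
# over the wire? Utilisation and the decision threshold are assumptions.
def transfer_days(dataset_tb: float, link_gbps: float, utilisation: float = 0.7) -> float:
    bits = dataset_tb * 8e12                          # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * utilisation)  # usable throughput
    return seconds / 86400

for tb in (1, 10, 50):
    days = transfer_days(tb, link_gbps=1.0)
    plan = "seed drives, then sync deltas" if days > 3 else "stream over the network"
    print(f"{tb:>3} TB over 1 Gbps: ~{days:.1f} days -> {plan}")
```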
7. Check network connectivity
Your network pipe will be seeing a lot more traffic; while most organisations will find they have adequate bandwidth, it’s best to check ahead that it will be sufficient. If your mission-critical applications demand live streaming with minimal latency, you may wish to investigate direct connectivity to the cloud via VPN.
8. Consider post-migration management and support as part of the buying decision
Your migration project is complete; now you have to manage your cloud environment and get accustomed to how it differs from managing on-premises applications. The power and usability of management tools should be part of the selection criteria so that you are confident you will have ongoing visibility and the facility to monitor security, costs and performance. Furthermore, support is a crucial part of your ongoing relationship with your cloud service provider and you need to select an option that gives you the support you need, when you need it, at the right price.
As more and more businesses take the plunge and move mission-critical systems to the cloud, we’ll see the skills and experience of in-house teams increase and the ability to handle complex migrations will rise in tandem. Until then, IT teams charged with migration projects shouldn’t be afraid to wring as much support and advice out of cloud service providers as possible so that they can achieve a pain-free migration and start reaping the benefits of the cloud.
Break down internal barriers to deliver highly tailored products and services.
By Colin Masson, global industry director of manufacturing solutions at Microsoft.
It’s easy to get obsessed with technology – it is the key enabler of today’s revolution – but the harder part of the transformation is cultural.
It’s about putting the customer experience and their business outcomes at the centre of everything. That means realigning engineering, manufacturing and the supply chain around delivering a world-class sales and service experience. It means new thinking about optimising customer satisfaction and loyalty metrics such as Customer Satisfaction Scores or Net Promoter Scores rather than production efficiency.
The list of technologies that can help this realignment is endless. Manufacturers should be investing in, or at least exploring, the internet of things (IoT) and industrial automation, cloud, big data, artificial intelligence (AI), robotics, 3D printing and more. Microsoft partner Columbus recently published a major industry report exploring how these emerging technologies fit into the wider digital transformation push of manufacturers, as part of a ‘Manufacturing 2020’ strategy.
But at the heart of the revolution is data. We are putting telemetry on everything, creating a data-driven culture with a single version of the truth. Fundamentally the fourth industrial revolution is being powered by the ubiquity of IoT data coming from sensors in the factory combining with data pouring in from the outside world, such as the wealth of information being generated by smart cities, smart buildings, smart offices and even connected cars. Choosing an IoT platform is a big decision; start by identifying one that can match the scale of your ambitions.
There is another convergence that is driving business transformation. Inside the firm, the digital technologies used by IT, operations and engineering are converging. By embracing the digital transformation, manufacturers are empowering employees to be more productive in modern workplaces with apps and intelligent working methods such as the use of cobots, where employees and robots co-operate “shoulder to shoulder”.
It’s also about optimising operations through smart factories and supply chain solutions powered by intelligent edge and cloud. It means the transformation of products and business models, using insights from smart connected products, advances in modelling such as “digital twins”, and more agile end-to-end business solutions.
We see manufacturers, and individual businesses within manufacturing organisations, at various stages in their journey to servitisation, transforming products into services. Some are driving more customer engagement through traditional call centres or differentiating their product through (sometimes IoT-connected) field service. Increasingly, though, we are also seeing the transition to full “product-as-a-service”, where they sell flying hours instead of jet engines; car coatings rather than paint; water savings rather than treatment plants; and cleaning services rather than cleaning chemicals.
This journey requires that they break down the silos between internal systems such as ERP, CRM, PLM, and SCM. Instead, they need to connect “things” – people, data and processes – with more agile systems of intelligence that can keep pace with the new speed of business inherent in delivering highly tailored products and services.
Manufacturers need smart factories that can make their smart products and be at the core of much more agile supply chains. They also need intelligent shop floor solutions and business apps that augment people and address the growing skills gap in manufacturing.
IoT platforms are a key enabler, yes. But we also need big data and AI on top to provide the insights that line workers and business decision-makers need. We need both intelligent cloud and intelligent edge technologies to power robots and cobots in the factory of the future.
Big data also needs big compute to accelerate the product innovation unleashed by enhanced insights into customers, enabled by the ability to iterate through digital twins of devices, product designs, supply chains, and customer usage in digital cities. Can your legacy ERP, CRM, PLM and SCM systems keep up with the new speed of business?
At the heart of this digital world, however, lies the simplicity of customer insight. Whether you’ve got a smart product that can beam back data on customer use, or you use traditional client engagement channels, it’s those insights that will differentiate your future products and services –and decide the success or failure of your digital transformation.
The recent roundtable, The Intelligent Future of Cloud, hosted by Ingram Micro Cloud and the Cloud Industry Forum, was a great opportunity for the Channel to discuss the apparently unstoppable momentum behind Cloud technology. All present agreed that Cloud is a powerful enabler for greater business innovation and agility (and, yes, digital transformation was referenced), but believe that there is still an as yet unfulfilled need for greater education throughout the technology supply chain.
With the first response to the opening question – ‘What are the biggest threats to Cloud’s adoption?’ – being: “There aren’t any”, the roundtable attendees could have been forgiven for packing away their laptops and turning their attention to lunch! However, what this positive response emphasised is that, right now, Cloud is riding the crest of a wave, having established such compelling momentum that its potential benefits are very difficult for any company to ignore.
Yes, there’s an ongoing debate about the different Cloud flavours – Private, Public, Hybrid, Multi - and competition between the various Cloud hyperscalers, but the message to end users is clear: ignore Cloud at your peril.
As for the Channel itself, there seemed to be a general agreement that ‘there’s a long way to go’, with many VARs comfortable with Software-as-a-Service, but less advanced when it comes to Infrastructure- or Platform-as-a-Service offerings.
In terms of the end users, the belief is that ‘there’s nothing insurmountable to stop the Cloud steamroller’, but there are definitely hindrances. The early adopters have embraced the Cloud, but what is described as the traditional user is still wedded to their LAN-based solutions, so there’s still some work to be done here.
Additionally, there’s a very real application development issue. Obviously, legacy applications were not designed to run in the Cloud, and end users seem reluctant to invest the necessary resources to make them ‘Cloud-fit’, so the easy option is to leave them where they are. The result being what one participant described as an application ‘no man’s land’.
Addressing the issue of application modernisation is a major priority. As with so many of the current IT drivers, what’s happening in the consumer world – light, easy to design and use applications – needs to be adopted by the business world. Yes, the re-writing process might be painful, but when it comes to new application development, there’s only one route to take.
Except…all the roundtable participants agree that the skills gap is a significant threat to the pace of Cloud growth. As one individual put it: “There’s a huge lack of skills and talent, so the cost of application re-writing and development is increasing, and Cloud adoption is slowing down as companies can’t afford the necessary human resources.”
Indeed, it’s not unreasonable to conclude that the only reason hybrid Cloud/IT exists right now is the cost factor – always allowing for the fact that there still seems to be something of a fear factor, with in-house technical staff fearful for their futures. The thought process being: if I give this away, what’s left for me to do?
Addressing all of these concerns requires someone at the top of the organisation to champion the Cloud migration process.
Even then, the route to Cloud ‘nirvana’ can be a complicated one. There’s a great need for Cloud education, helping end users overcome their fears and objections. And even with good end user training, there’s never a fool-proof approach. Several participants cited the issue of cybersecurity where, no matter how much time you spend educating, say, a hundred people as to the many IT risks out there, there will always be one individual who opens the rogue email or who can’t help but click on the link!
As for the need for education within the Channel, much work is being done, but it seems that the many young people being trained up then ‘bounce’ around the industry. “Retaining skills is a big challenge,” explained one participant. “You train them up and then they leave. The millennials have learnt this; they can leave and get a 50 percent pay rise – and they think they’ll be CEO by the end of the year!”
Money aside, it does seem that the work environment is massively important to the younger (Channel) workforce. While the futuristic, alternative workplaces of the Web companies might be a step too far, there’s little doubt that a traditional, regimented office environment is a big negative.
Concluding the first part of the roundtable discussion, there seemed to be a consensus that the journey to the Cloud, for end users and the Channel alike, is something of a moving target. Typically, an end user organisation will realise that it is disadvantaged by a painful lack of IT agility and will manage to overcome concerns surrounding Cloud security and reliability, but might then look at the subscription costs, work out the long-term cost of the Cloud, and find just another obstacle to overcome (even though the current car ‘purchasing’ model suggests that there are many individuals comfortable with leasing!).
Or, risk is still seen as a big issue. There’s a recognition that the Cloud is more secure than most on-premises IT environments, but a reluctance amongst IT staff to put their jobs on the line (imagining, when something goes wrong, the MD asking: ‘You did what with our IT?!’).
And then there’s the accountancy-driven objection. Plenty of organisations have invested millions in private data centres and IT infrastructure and have no intention of abandoning these assets until they’ve been sweated for the planned number of years – ‘We’ve invested this much money and we have to get everything we can out of this’.
On the plus side, when the conversation focuses more on the business benefits of the Cloud, rather than being hung up on technology solutions, the argument is at its most compelling. As one participant summarised: “People will make Cloud decisions based on what it will bring to the business. It’s no longer just a technology story, and we, the Channel, need to talk real business benefits and innovation.”
The financial services sector is described as a very ‘active, vibrant market’ when it comes to innovation and, more widely, the manufacturing, retail and marketing industries are switched on to the cost reductions and productivity improvements available via the Cloud. “There are plenty of companies who are bringing all their advanced workloads together in the Cloud and we’re seeing some exciting stuff come out, some phenomenal innovation” is a good summary of the discussion which ended in the recognition that Channel companies need to become Digital Transformation consultants.
The objective is to take the business challenges facing an end user and to apply technology to fix these. Infrastructure, services, DevOps need to be integrated, but this is a broad topic to address.
What should go in the Cloud and when?
Having agreed that the Cloud is the only sensible long-term choice, despite a variety of obstacles – and after a brief diversion to discuss how data centres need to evolve to cope with the increasing demands of Infrastructure-as-a-Service (the conclusion being that, long term, it’s going to be very difficult for anyone to compete with the hyperscalers) – the roundtable turned its attention to the logistics of moving to the Cloud.
The most obvious trigger point for Cloud adoption is when it’s time for an organisation to upgrade and/or refresh its existing infrastructure. As to what can go in it, is there anything that can’t go in the Cloud?!
Well, yes, there are some exceptional items, CAD is not easy and, as previously discussed, some legacy applications with, say, millions of items in multiple folders, present a significant challenge. But any organisation that continues to rely on such ‘clunky’ applications has not bought into the dynamic change in approach and environment that Cloud offers.
The list of reasons not to go into the Cloud is getting shorter and shorter. If it ain’t broke, don’t fix it, legacy apps, connectivity and bandwidth, costs and reliability, regulation and security – all can be issues, but none are in any way insurmountable. So long as due diligence is at the forefront of the Cloud migration process, end users have to make one simple decision – are they willing to let go, or not?
In terms of Cloud adoption/migration, there seems to be some concern that there is no one specific standard to help guide end users through the process. The Cloud Industry Forum has a Code of Practice (endorsed by the EU), and there are various ISO standards that are of some help, but, as yet, the Cloud market is not mature enough to have developed one or more universally recognised and adopted standards.
In the meantime, with the Channel fulfilling the much-needed role of aggregating and/or adding value to vendor Cloud offerings, it’s left to the end users to decide which approvals and standards they demand of any Cloud supplier. ISO 27001, the information security management standard, is ‘popular’, and part of a more general trend where end users seem less interested in vendor approvals and certification than in general, industry-wide quality standards.
In simple terms, where end users are passing over a significant level of accountability outside of their direct control, they want plenty of official assurance from the Cloud provider, and not just an anecdotal ‘we know what we’re doing, we’re very good at this’!
The perceived value of vendor accreditations hasn’t disappeared entirely, but in a multi-Cloud world there is a very real need for some kind of industry standardisation independent of such specific approvals. This would help individual resellers differentiate themselves in a competitive marketplace, and help end users understand those differentiations.
Interestingly, there was some suggestion at the roundtable that a significant proportion of customers (although still a minority) are becoming more educated as to the Cloud’s potential than the Channel. The approach from these tech-savvy customers is along the lines of: ‘I know what I want, but can you deliver it? If you can, then let me hand the responsibility over to you and you can do it for me and, if there’s a problem, you need to sort it asap’!
This trend also manifests itself in terms of the demand for more industry-specific Cloud solutions and the corresponding need for Channel organisations to decide how and where they want to play in the Cloud market.
The ‘get big, get niche or get out’ mantra seems to be crucial advice to the Channel. Bearing in mind that the resources (infrastructure, personnel, finance) required to get big are out of reach for most, and there are already some fairly major players in this space; the get niche opportunity is the one that holds most promise for the vast majority of Channel companies. And this could be the decision to target a specific industry sector and/or to focus on one or more specific key technology solutions – say, becoming a machine learning expert on one of the major Cloud platforms, with particular reference to IoT.
VARs are at something of a crossroads, although the reality is that there is only one viable road to take. Trying to remain a ‘box shifter’ has slim prospects. Trying to take on the major players is equally unattractive. Which leaves the migration route – becoming as much a systems integrator as a value added reseller. Clearly, such a decision should not be undertaken lightly – there needs to be a detailed evaluation of which Cloud value-add opportunities hold the most potential for any specific Channel business. Furthermore, there needs to be a very real understanding that, so quickly does the Cloud market move, the originally decided-upon opportunity could well disappear over time. This means that the Channel always has to be on the look-out for the next development, and have a good knowledge of how the overall Cloud market is continually evolving.
The alternative, as one participant was open enough to admit, is that you end up with your customers knowing more about the new ideas and technologies available and ‘we can’t support them’.
Just as end user organisations are adapting their structures to best leverage the potential of digital transformation, the Channel has to do likewise – the traditional sales approach no longer works. At the most basic level, the idea of selling some new servers or storage to an IT professional at an end user organisation is, in many cases, no longer valid. Employees from, potentially, any department within that organisation are being empowered to acquire the solutions they require to help them perform their various business functions.
Moving back up the supply chain, vendors have to make a similar adjustment to the way that they engage with both the Channel and, where relevant, in direct end user contact. Increasingly, it’s no longer good enough to tell the VARs that the new server has the latest x, and the fastest y. What’s needed is an approach that highlights the business benefits of the new kit for the end user.
Only when the vendors, the distributors and the VARs work together – feeding down and back up the supply chain – will the ultimate customer experience be developed. The vendor can tell the Channel the potential benefits of a new product; the Channel, closer to the customer, can then decide which of these new features will probably matter to the customer; and the customers can provide feedback to the VARs, who can then go back to the vendors with recommendations to modify and enhance the technology products and services.
The starting point should always be: Customer A has this business need, how can technology help to solve it?
Just in case there’s anyone who is under the misapprehension that digital transformation, with its underlying Cloud and managed services foundation, is little more than a storm in a teacup, all the roundtable participants were in agreement that digitalisation of the UK economy is vital. China, the US, the Far East, mainland Europe – no national or regional economy worthy of the name is doing anything other than digitising at scale, and fast.
Getting the message across
With digitalisation a given, perhaps surprisingly, the Channel faces a major challenge in terms of educating businesses as to the benefits of Cloud. And while the role of vendor/Channel to customer education shouldn’t be underestimated, it’s clear that peer to peer information sharing is the best possible advertisement as to the power of the Cloud. As one participant explained: “Vendor to customer is a hard challenge; customers seeing other customers testify to Cloud success – this is the most effective approach – customer case studies are very compelling.” Channel companies who can leverage Cloud customers to ‘show and tell’ to others are in an enviable position.
Turn it around, and end users are asking: “Who do I trust most to give me the right insight into a particular technology? Show me the proof as to how a customer has made a change and how they’ve benefitted and then I can trust what you are saying.”
Social media – reviews and feedback, more familiar from the consumer world – is also becoming a factor to consider when trying to reach potential customers. So is the wider online world where, increasingly, customers are going out and doing their own research when it comes to new technologies and new business solutions. One participant put forward the results of a recent survey which found that almost 70 percent of the buying process is carried out before a customer contacts a potential supplier.
Whether it’s a specific trigger point, as with the current ‘obsession’ with GDPR, or a more general recognition that some kind of a business transformation process needs to be put in place, the Channel faces a challenge to reach potential customers. Digital marketing is a key tool, with several participants emphasising that it’s all too easy to tell the difference between a ‘born in the Cloud’ Channel website and a traditional Channel website. The importance of a credible Cloud web presence is still being underestimated. And blogs and tweets are also a good way for end users to understand which potential suppliers are ‘speaking their language’ and which are not.
(One further example of the need for a Channel re-think, not specifically referenced at the roundtable but worth sharing nonetheless, is the need to recognise the continuing evolution of the purchasing supply chain. For example, trying to sell storage at a storage event might seem the most obvious thing to do. However, more and more Cloud storage solutions in particular are being specified and purchased by other stakeholders – the application owner, for example, who would never dream of attending a storage event.)
Resilience, agility, competitiveness – key characteristics of a good Cloud solution, and key characteristics of the Channel organisations selling them.
Business benefits – blockchain
The current blockchain buzz is a great example of how the IT supply chain has evolved. Once upon a time, vendors pushed a new technology, the Channel sold it, and end users knew they had to have it, unquestioningly. Now, vendors are still developing new technologies, but end users want to know what it can do for their business before the Channel can hope to sell anything.
Enter blockchain. Everyone has heard of it, everyone ‘kind of’ knows that it has something to do with bitcoin and gaming, but very few people understand exactly what it is and why it will, or will not, be relevant to the mainstream business world.
At this stage, if vendors hope to sell blockchain solutions to end users, they need to engage with the Channel, to educate the distributors, VARs and SIs. Once they have understood it, then they can start explaining blockchain to the end users, who are looking for trustworthy guidance.
Right now, the finance sector is looking at blockchain very closely. Essentially a trusted ledger, blockchain is being embraced by the banks, who recognise it as an opportunity. Where the finance sector leads, others tend to follow. Imagine the potential in the public sector – how the integrity of everything from driving licences to house purchases can be guaranteed by blockchain, or how stock management could be transformed in the retail sector.
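For readers wondering where the integrity guarantee actually comes from, a toy hash-chained ledger captures the core idea – each entry commits to the previous one, so altering any record breaks every later hash. This is a teaching sketch only; real blockchain platforms add distribution and consensus on top:

```python
# Toy hash-chained ledger illustrating why tampering is detectable: each entry
# commits to the previous one, so altering any record breaks every later hash.
# Teaching sketch only - not a distributed blockchain.
import hashlib, json

def entry(prev_hash: str, payload: dict) -> dict:
    body = {"prev": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

ledger = [entry("genesis", {"licence": "AB123", "holder": "J Smith"})]
ledger.append(entry(ledger[-1]["hash"], {"house": "42 High St", "buyer": "K Jones"}))

def verify(ledger) -> bool:
    prev = "genesis"
    for e in ledger:
        body = {"prev": e["prev"], "payload": e["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

print(verify(ledger))                      # True
ledger[0]["payload"]["holder"] = "Mallory" # tamper with an early record...
print(verify(ledger))                      # False - the chain exposes the edit
```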
Blockchain is a major Channel opportunity, and becoming a specialist provider of, say, blockchain as a managed service, has huge potential.
Of course, as with any new technology, there’s always the possibility that the reality might never live up to the hype, but as a Channel company, armed with vendor advice, the best guess is that blockchain will matter – and matter big time!
Second guessing the ideas and technologies that will succeed as opposed to those that fade away has always been a part of the Channel’s role. The difference now is that, where once a VAR’s relationship with a customer all but ended when the hardware/software contract was fulfilled (product delivered, money paid), now the VAR has the opportunity (or threat!) to remain close to the customer as Cloud and managed services are ongoing.
Of course, that’s a slight exaggeration, as even a sustainable sales model required some degree of relationship building between reseller and customer. However, it’s no exaggeration to say that, right now, the stakes have never been so high in terms of ensuring that a digital transformation process succeeds as quickly and seamlessly as possible. Hardly a day passes without some IT horror story – whether a migration gone badly wrong or a serious data breach – or another well-recognised brand going out of business, primarily because it failed to recognise how its market was changing (invariably via digitalisation).
The roundtable ended with one final, sobering transformation discussion. The traditional Channel sales model is dead. Yes, sales need to take place, but the Channel needs to be focused on helping its customers improve their business outcomes, with the sale taking place almost ‘coincidentally’. The major problem is that traditional Channel sales folk operate on a CAPEX, commission paid up front basis, when their customers are now moving towards an OPEX, subscription model.
Clearly, developing a new IT Channel sales remuneration strategy is not beyond the wit of humans – after all, the finance sector (yes, the finance sector again!), for example, sells life insurance, which is paid monthly for years, and manages to reward its sales force in a sensible manner. However, getting CAPEX sales folk to sell subscription services needs a major ‘re-education’ process so that individuals can learn to think differently. No longer is the relevant question: “How much have you sold today?” so much as “How much of what you’ve sold today will still be bringing in revenue in two or three years’ time?”
In summary, the Cloud proposition demands that leadership from top to bottom of the IT supply chain recognises the need for change and implements the necessary educational process(es), so that new entrants and existing players – no matter what level, and right across this ecosystem – understand their roles in this new, digital world.
Apay Obang-Oyway
Director of Cloud & Software, UK&I, Ingram Micro Cloud
Apay is responsible for driving transformation and change in the channel, not just as a cloud evangelist within the community, but also through his role at Ingram Micro Cloud. Apay has worked in the IT channel for almost 18 years and is dedicated to educating the market on the cloud opportunity for the channel.
Alex Hilton
Chief Executive, Cloud Industry Forum
With over 25 years of experience in the software industry, Alex runs the UK based trade bodies Cloud Industry Forum and FAST. As an industry advocate, Alex passionately believes in cloud services enabling businesses of all sizes to work dynamically and use technology to transform in to agile competitors.
Ross Daykin
Northern European Cloud Technical Consultant, Ingram Micro Cloud
Working in the industry for 17 years, Ross is a master cloud service provider focused on creating total Cloud based solutions around Office 365, Azure IaaS and DR through multiple vendors. Ross has a reseller background and is dedicated to Infrastructure Designing on Premise and in the Cloud.
Deborah Sweeney
Business Development Manager, Ingram Micro Cloud
A highly experienced sales professional with a proven track record in business development, Deborah has been part of the team at Ingram Micro Cloud for over a year. Deborah has been working in Business Development for over 6 years, working with companies such as Modern Networks, Cobweb Solutions and Futronics.
Karen McNulty
Head of Cloud Marketing, Ingram Micro Cloud
Karen leads the Cloud marketing team in the UK, providing strategic go to market direction with a focus on business insights to drive business results. Karen is responsible for driving the commercial readiness and execution of Ingram Micro’s Cloud business marketing strategy to accelerate growth.
Issam Bhathal
Microsoft Business Manager, Ingram Micro
Issam specialises in different cloud solutions such as Office 365, Dropbox and Azure. In his role, Issam helps clients by educating them around different cloud services and also helps them to build a packaged offering for their respective customers in different vertical markets.
Simon Day
Professional Services Director, Comms-care
Simon joined Comms-care following the acquisition of Platform Consultancy Services in June 2014. Simon has a wealth of experience spanning over 25 years and has previously held a variety of diverse roles – from Technical Manager and Operations Director to Corporate Account Director – before joining the board of Comms-care as Professional Services Director.
Paul Stephens
Practice Lead – Enterprise Platform and Workplace, Comms-care
Paul is a security cleared principal consultant with ten years of experience specialising in Microsoft technologies, with a particular focus on Microsoft Active Directory, Microsoft Exchange and Office 365 design and implementation.
Tom Greed
Partner Development Manager, Microsoft UK
Tom is an IT channel professional with 20 years of sales and people leadership experience. Starting in Distribution in the late 90s, moving through three Microsoft LAR/LSPs and now at Microsoft, he specialises in Cloud adoption for SI, MSP and Distribution to CSP, while the fourth industrial revolution transforms the channel.
Ronan McCurtin
Senior Sales Director Northern Europe, Acronis
Ronan has a strong background in sales and client management in the technology sector, responsible for overseeing sales across Northern Europe to help build on Acronis’ success.
Dan Harris-Milnes
Microsoft and Cloud Manager, Highlander Computing Solutions
Dan has been a part of the Highlander team for almost 8 years. His role as Microsoft and Cloud Manager involves him enabling businesses to understand and embrace technology change in the modern workplace through the adoption of Microsoft and Cloud services.
Julian Day
ITS Tools and Operations Manager, Ricoh Europe
An Innovative IT Tools and Operations Manager with 25 years’ experience of EU and Global customers in both Pre & Post Sales and Technical Vendor selection. Julian can offer a diverse mix of business and technical skills to facilitate customer satisfaction, business excellence and has specialities in ITIL, BCP, Disaster Recovery, Data Management & Storage, Datacentres, Server Management, Operations Management to name a few.
Kris Haynes
Enterprise Account Manager, Axess Systems Ltd
Primarily operating in the medium to large enterprise space, Kris specialises in helping businesses to utilise enterprise virtual technologies whilst allowing IT managers to increase conformity, reduce hardware costs, reduce operating costs, increase reliability and improve flexible working.
Jonathon Berg
Managing Director, Paradise Computing Ltd
Initially in sales for the then new IBM range of personal computers in 1985, Jonathon helped establish Pitman Computer Training in Covent Garden. After a brief time with .com bubble companies, Jonathon founded Paradise Computing in Northamptonshire in 1987. By 2014, Paradise had established itself as a forerunner of the now successful cloud initiative with over 50 companies hosting Sage-based systems and supporting over 1,000 users.
With a typical lifespan of 10-15 years, reliable performance is a significant consideration during the planning and design of precision air conditioning solutions for use in data centres. Sebastian Beyer, test centre manager at STULZ, explains why knowing the actual capabilities of equipment under temperature and air humidity conditions is vital and how individual performance tests can help to achieve an energy efficient configuration.
With utility bills rising, growing pressure to reduce carbon emissions and an increased demand on power networks, data centre owners and operators are faced with a major challenge when it comes to energy consumption. Although organisations such as Google and Microsoft are leading the way in measuring and improving energy use, mainly because they consume such vast amounts and it makes economic sense, the fact is that all enterprises need to do the same, at least until such a time as we develop 100 per cent renewable energy sources.
Temperature gauge
It’s widely accepted that data centres consume almost as much energy for non-computing resources – such as cooling and power conversion – as they do in actually powering their servers. Cooling and airflow management is a continually evolving science due to the amount of equipment variations possible and the number of options in terms of data centre design and operation.
As a result, air conditioning solutions must be planned and implemented meticulously and, for large-scale projects in particular, it is not just the investment cost of cooling solutions that has to be considered, but also the operational expenditure associated with them. In addition, planners and operators face the question of how to achieve energy savings by dimensioning their systems appropriately.
Legislation is increasingly putting pressure on data centre owners and operators all over the world and this will only increase. Germany, in particular, is leading the way in this area and its Energy Saving Ordinance (EnEV) legally obliges operators of air conditioning solutions to subject all systems over 12kW to an energy inspection on initial installation, on the replacement of important components, or every 10 years. This type of legislation is expected to become more widespread across the globe in years to come, as countries try to reduce energy usage and lower CO2 emissions. Therefore, the challenge is to implement high performance, efficient and future proof dimensioning for these systems in data centres.
Need to know
Although manufacturers of air conditioning units ascertain the technical specifications of their equipment in accordance with DIN EN 14511 by testing to determine total cooling output and energy efficiency, in practice there can be considerable differences in cooling capacity. This is due, for the most part, to different environmental influences, which cannot be taken into consideration in the standard performance test.
Data centres around the world have varying requirements for their air conditioning units. Therefore, a system's performance is determined not just by the quality of the individual components, but also by its location. Ambient conditions, such as the temperature and humidity of the return air at the unit intake or of the supply air at the unit outlet, have a significant influence on the performance of the system as a whole. In practice, changes to the equipment's temperature and air humidity parameters may have a negative influence on actual cooling capacity and efficiency. This affects the operating points of vital components such as pumps, fans and compressors.
If specialist planners and operators rely on the theoretical data provided by manufacturers, they run an increased risk of reduced capacity for cooling their data centres during future operation. The result can be incalculable additional expense, as electricity costs spiral out of control and upgrades or conversions become necessary. As well as jeopardising cost efficiency, badly planned precision air conditioning can also be detrimental to the ability of a data centre to operate efficiently in the future and meet its Power Usage Effectiveness (PUE) targets.
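For reference, PUE is simply the ratio of total facility energy to the energy consumed by the IT equipment itself, so cooling overheads and power conversion losses feed straight into the metric. A minimal sketch in Python, with assumed figures purely for illustration:

```python
# A minimal worked example of the PUE metric referenced above:
# PUE = total facility energy / IT equipment energy. Figures are assumptions.

def pue(it_kw: float, cooling_kw: float, power_losses_kw: float) -> float:
    """Power Usage Effectiveness for a simplified facility model."""
    total_kw = it_kw + cooling_kw + power_losses_kw
    return total_kw / it_kw

# 1,000 kW of IT load, 350 kW of cooling, 100 kW of conversion losses:
print(f"PUE = {pue(1000, 350, 100):.2f}")  # PUE = 1.45
```

On this simplified model, every kilowatt shaved off cooling moves the facility closer to the ideal PUE of 1.0.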
Knowledge is power
There has been a recognised problem whereby standardised air conditioning systems have not performed to the levels expected once in situ. To address this, some manufacturers now offer customers simulations and performance tests under realistic operating conditions.
This allows specialist planners and operators to gain essential data on actual performance and efficiency ratings during the planning stages of large air conditioning systems. In the UK – due to strict rules governing the accuracy of the stated performance data – tests of this kind are now standard procedure among manufacturers, with end users increasingly demanding field based information on cooling capacity and energy efficiency, so that they can remain economically competitive. For this reason, manufacturers now also offer individualised customer tests.
Leading the way
STULZ built its Test Centre for the internal testing of prototypes in the design and development phase, as well as for customers, data centre planners and operators to test their facilities.
With an area of over 700m², it features four conditioning systems, where airflow rates from 500m³/h to 55,000m³/h can be achieved. It has two separate climatic chambers, in which air conditioning systems can be put through technical tests either individually or connected via both chambers, as shown in Figure 1. Here, different operating parameters, such as environmental influences (-20°C to +50°C) and return air conditions, can be set precisely to match a customer’s requirements, thereby simulating realistic operating conditions. While this is taking place, engineers in the control room record and document test data in real time, and analyse it if necessary.
Specialist planners and operators can use the information gathered from these tests as verification of cooling capacity, efficiency and power, creating an important aid to decision making during the specification of precision air conditioning solutions.
Parts of the process
Technical tests of air conditioning units in accordance with DIN EN 14511 (performance of air conditioners, liquid chilling packages and heat pumps), EN 1216 (heat exchangers and forced circulation air cooling and air heating coils), and ISO 9614 (sound power levels), can be completed.
Using the air enthalpy method, system performance is ascertained by measuring the airflow rate and the associated intake and outlet conditions of the air. The calorimetric method, on the other hand, is particularly suitable for simulating partial load conditions during full load tests. Here, three important scenarios are used – conditioning mode, simulation of data centre cooling with supplementary cold or hot aisle enclosure, and environmental simulation mode that tests entire air conditioning systems with indoor and outdoor units. As all scenarios permit the variable setting of heat and air volumes, air humidity and return air temperature, a customer's specific local requirements can be simulated with great precision. Let’s look at each one in more detail.
• Scenario 1: Conditioning mode
Conditioning mode is a common standard test that simulates conventional closed circuit air conditioning, with or without a raised floor. The almost unlimited choice of operating conditions enables the performance and energy efficiency of the test object to be measured for a great variety of applications, as shown in Figure 2. The test records and documents all the important performance data and measured values of the air conditioning system.
• Scenario 2: Simulation of a side cooler system with cold aisle enclosure
The second test scenario simulates the row cooling equipment with a cold aisle enclosure that is commonly used today in small and medium sized data centres. By separating the cold supply air and hot server air with partition walls, this method effectively prevents chaotic air conduction and ensures that airflows at different temperatures cannot mix. This means that the required cooling capacity and, consequently, the energy consumption are considerably reduced, as shown in Figure 3. A further benefit of separation is that the return air temperature can be controlled, so that the air conditioning units can be kept at an ideal operating temperature from an energy efficiency perspective – ASHRAE recommends a server inlet temperature of up to 27°C. This temperature enables the especially efficient operation of cooling system components, such as compressors, while simultaneously protecting the sensitive IT equipment.
• Scenario 3: Testing entire air conditioning systems with indoor and outdoor units
If a facility has two separate climatic chambers, it is possible for entire cooling systems to be tested in combination, along with their indoor and outdoor units. In the first climatic chamber, conditioning systems generate the desired heat load, which equates to the data centre's expected IT load, and the indoor unit under test then cools this air down. The second climatic chamber simulates the country's specific environmental conditions, in order to reflect heat removal via air cooled condensers, air cooled heat exchangers or chillers.
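As a point of reference, the sensible-heat component at the heart of the air enthalpy method mentioned earlier can be approximated from the airflow rate and the air-side temperature difference. A minimal sketch using textbook constants rather than test-centre data:

```python
# Hedged sketch of the sensible-heat simplification behind the air enthalpy
# method: cooling capacity from airflow rate and air-side temperature delta.
# Constants are typical textbook values, not STULZ test-centre figures, and
# latent (dehumidification) load is ignored for simplicity.

AIR_DENSITY = 1.2   # kg/m3, approx. at 20 C, sea level
AIR_CP = 1.005      # kJ/(kg*K), specific heat of dry air

def sensible_cooling_kw(airflow_m3h: float, t_return_c: float,
                        t_supply_c: float) -> float:
    """Approximate sensible cooling capacity in kW."""
    airflow_m3s = airflow_m3h / 3600.0
    return AIR_DENSITY * AIR_CP * airflow_m3s * (t_return_c - t_supply_c)

# e.g. 10,000 m3/h of air cooled from 35 C return to 25 C supply:
print(f"{sensible_cooling_kw(10_000, 35, 25):.1f} kW")  # ~33.5 kW
```

It is precisely because real installations deviate from these idealised constants that measured, customer-specific testing is so valuable.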
Precision engineering
These individual test scenarios enable precisely the right dimensioning of air conditioning systems in data centres, taking local environmental influences into account. In this way, users have full cost control as regards investment sums and expected future running and energy costs. The use of test centres offers a further advantage, however, as the extensive accompanying documentation can be used not only to verify the performance of air conditioning systems, but also as a basis for configuring the data centre cooling. It is therefore worthwhile for all customers of air conditioning solutions to take up services of this kind where manufacturers offer them.
Back in the early 00s, when Google was beginning to expand its portfolio of services beyond search, it encountered a combination of challenges. Some of these emerged from familiar, classic disconnects between developers and operations folks, or IT services and line-of-business owners. Others were brand new, never-before-seen failure modes that arose from providing services on novel cloud platforms, and doing so at planetary scales.
By John Jainschigg, content strategy lead at Opsview.
To confront these challenges, Google began evolving a discipline called Site Reliability Engineering (SRE), about which it published a very useful and fascinating book in 2016. SRE and DevOps (at least the contemporary version of DevOps that’s expanded into a vision for how IT operations should work in the era of cloud) share a lot of conceptual and an increasing amount of practical DNA. This is particularly true since cloud software and tooling have now evolved to enable ambitious folks to begin emulating parts of Google’s infrastructure using open source software like Kubernetes. Google has used the statement “class SRE implements DevOps” to title a new (and growing) video playlist by Liz Fong-Jones and Seth Vargo of Google Cloud Platform, showing how and where these disciplines connect, and nudging DevOps to consider some key SRE insights.
First, some basic principles:
Failure is normal - Achieving 100% uptime for a service is impossible, expensive, or pointless (e.g., given the existence of masking error rates among your service’s dependencies).
Agree on SLIs and SLOs across your organization - Since failure is normal, you need to agree across your entire organization what availability means; what specific metrics are relevant in determining availability (called SLIs, or Service Level Indicators); and what acceptable availability looks like, numerically, in terms of these metrics (called the SLO, or Service Level Objective).
Use agreed-upon SLOs to calculate an “error budget” - The SLO is used to define what SREs call the “error budget”, which is a numeric line in the sand (e.g., minutes of service downtime acceptable per month). The error budget is used to encourage collective ownership of service availability and to blamelessly resolve disputes about balancing risk and stability. For example, if programmers are releasing risky new features too frequently, compromising availability, this will deplete the error budget. SREs can point to the at-risk error budget, and argue for halting releases and refocusing coders on efforts to improve system resilience.
This approach lets the organization as a whole balance speed/risk with stability effectively. Paying attention to this economy encourages investment in strategies that accelerate the business while minimizing risk: writing error- and chaos-tolerant apps, automating away pointless toil, advancing by means of small changes, and evaluating ‘canary’ deployments before proceeding with full releases.
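To make the error-budget arithmetic concrete, here is a minimal sketch in Python (the “three nines” SLO and the 30-day month are illustrative assumptions, not a recommendation):

```python
# Minimal sketch of error-budget arithmetic for a monthly availability SLO.
# The figures are illustrative assumptions, not Google's internal tooling.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def error_budget_minutes(slo: float) -> float:
    """Allowed downtime per month for a given availability SLO (e.g. 0.999)."""
    return MINUTES_PER_MONTH * (1 - slo)

def budget_remaining(slo: float, downtime_so_far_min: float) -> float:
    """Minutes of budget left this month; negative means the SLO is blown."""
    return error_budget_minutes(slo) - downtime_so_far_min

if __name__ == "__main__":
    slo = 0.999  # "three nines"
    print(f"Monthly budget: {error_budget_minutes(slo):.1f} min")        # ~43.2 min
    print(f"Remaining after 30 min down: {budget_remaining(slo, 30):.1f} min")
```

A team burning through that 43-minute budget by mid-month has an objective, blame-free signal to slow releases and invest in resilience instead.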
Monitoring systems are key to making this whole, elegant tranche of DevOps/SRE discipline work. It’s important to note (because remember: Google isn’t running your datacenter) that this has nothing to do with what kind of technologies you’re monitoring, with the processes you’re wrangling, or with the specific techniques you might apply to stay above your SLOs. In short, it makes just as much sense to apply SRE metrics discipline to conventional enterprise systems as it does to twelve-factor apps running on container orchestration.
So here are a few things Google SRE can tell you about monitoring, specifically:
Alert only on failure, or on incipient failure - Alert exhaustion is a real thing, and “paging a human is an expensive use of an employee’s time.”
Monitoring is a significant engineering endeavor - Google SRE teams with a dozen or so members typically employ one or two monitoring specialists. But they don’t busy these experts by having them stare at real-time charts and graphs to spot problems: that’s a kind of work SREs call ‘toil’ – they think it’s ineffective and they know it doesn’t scale.
Post-hoc analysis, no magic - Google SREs like simple, fast monitoring systems that help them quickly figure out why problems occurred, after they occurred. They don’t trust magic solutions that try to automate root-cause analysis, and they try to keep alerting rules in general as simple as possible, without complex dependency hierarchies, except for (rare) parts of their systems that are in very stable, unambiguous states (their example of ‘stable’: when they’ve redirected end-user traffic away from a downed datacenter, systems can stop reporting on that datacenter’s latency). Elsewhere, their systems are in constant flux, which causes complex rule-sets to produce excessive alerts. One exception to this general rule about simplicity: Google SREs do build alerts that react to anomalous patterns in end-user request rates, since these affect usability and/or reflect external dependency failures (e.g., carrier failures).
Heavy use of “white box” monitoring - Google likes to perform deeply introspective monitoring of target systems grouped by application (called Business Service Monitoring in Opsview Monitor). Viewing related metrics from all systems (e.g., databases, web servers) supporting an application lets them identify root causes with less ambiguity (e.g., is the database really slow, or is there a problem on the network link between the DB and the web host?)
Four golden signals - Because part of the point of monitoring is communication, Google SREs strongly favor building SLOs (and SLAs) on small groups of related, easily-understood SLI metrics. As has been widely discussed, they believe that measuring “four golden signals” – latency, traffic/demand, errors, and saturation – can pinpoint most problems, even in complex systems such as carrier orchestrators with limited workload visibility. It’s important to note, however, that this austere schematic doesn’t automatically confer simplicity, as some monitoring makers have suggested. Google notes that ‘errors’ are intrinsically hugely diverse, and range from easy to almost impossible to trap; and that ‘saturation’ often depends on monitoring constrained resources (e.g., CPU capacity, RAM, etc.) and carefully testing hypotheses about the levels at which utilization becomes problematic.
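As a rough illustration only, the four signals can be summarised from raw request records along the following lines (the field names and the crude percentile calculation are assumptions for this sketch, not Google’s or Opsview’s implementation):

```python
# Illustrative sketch: summarising the four golden signals for one service
# over a single reporting window. Field names are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Request:
    latency_ms: float
    is_error: bool

def golden_signals(requests: list[Request], window_s: float,
                   cpu_utilisation: float) -> dict:
    """Latency, traffic, errors and saturation for one monitoring window."""
    latencies = sorted(r.latency_ms for r in requests)
    p99 = latencies[int(0.99 * (len(latencies) - 1))] if latencies else 0.0
    errors = sum(1 for r in requests if r.is_error)
    return {
        "latency_p99_ms": p99,                         # latency
        "traffic_rps": len(requests) / window_s,       # traffic / demand
        "error_rate": errors / max(len(requests), 1),  # errors
        "saturation_cpu": cpu_utilisation,             # saturation
    }

# e.g. a 60-second window with three sampled requests and 72% CPU use:
window = [Request(12.0, False), Request(840.0, True), Request(25.0, False)]
print(golden_signals(window, window_s=60.0, cpu_utilisation=0.72))
```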
The bottom line is that good DevOps monitoring systems need to be more than do-it-yourself toolkits. Though flexibility and configurability are important, more so is the ability of a mature monitoring solution to offer distilled operational intelligence about specific systems and services under observation, along with the ability to group and visualize these systems collectively, as business services.
Take a New Look at 3-Phase Power Distribution.
Alternating phase outlets alternate the phased power on a per-outlet basis instead of a per-branch basis. This allows for shorter cords, quicker installation and easier load balancing for 3-phase rack PDUs. Shorter cords mean less mass, making them less likely to come unplugged during transport of the assembled rack.
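As a back-of-envelope illustration of the load-balancing point, consider distributing six single-phase server loads across the three phases simply by plugging them into consecutive outlets (the wattages are hypothetical):

```python
# Illustrative sketch: why per-outlet phase alternation eases balancing on a
# 3-phase rack PDU. Loads are hypothetical single-phase server draws in watts.
from itertools import cycle

loads_w = [350, 420, 300, 500, 380, 410]        # six servers, assumed draws
phases = {"L1": 0.0, "L2": 0.0, "L3": 0.0}

# Alternating-phase outlets: consecutive outlets land on L1, L2, L3, L1, ...
for phase, load in zip(cycle(phases), loads_w):
    phases[phase] += load

print(phases)  # roughly even draw per phase without any re-cabling
# {'L1': 850.0, 'L2': 800.0, 'L3': 710.0}
```

With per-branch phasing, by contrast, achieving the same balance would mean deliberately routing cords to different branches of the PDU.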
Sustainability is moving from being a secondary, or even tertiary, concern for IT leaders to a primary issue. The reality is that power and electronics waste issues are growing exponentially as technology demands rise. Green IT strategies are becoming a priority, with this problem facing companies in just about every sector.
By Simon Bitton, Director of Marketing, Europe, Park Place Technologies.
This article will look at the challenges of dealing with IT waste, particularly because computer, server, storage and network components can feature a combination of hazardous chemicals, materials that require special disposal methods, and precious metals that need to be recycled. The significance of recycling and how the IT hardware industry is making progress in developing recycling technologies will also be explored, as will the importance of a third-party hardware maintenance plan, which can enable IT managers to deal with waste more effectively through supplementary services that support equipment disposal.
Toxic Components of IT Hardware
Information and Communications Technology is a major contributor to toxic waste worldwide. Although electronic waste makes up just 2% of landfills, it represents 70% of the toxic waste there. Much of this garbage is generated by consumers, chucking out their iPhones, televisions, and other devices, but businesses have a critical role to play in greening their own asset disposal operations.
Hardware common to the data centre often includes several hundred materials, many of them dangerous, carcinogenic, and/or harmful to the environment. Lead, for example, has long been a central concern. Old cathode-ray monitors could contain up to eight pounds of the material associated with nervous, blood, and reproductive system problems. Newer technologies, such as LCD screens, still use lead, albeit in smaller quantities.
Mercury is also highly toxic – to such a degree that some countries have proposed banning its use altogether – but for now, it’s in circuit boards, switches, relays, and certain components for LCD displays. Exposure risks include irreversible neurological damage, cardiovascular problems, liver and renal disease, and other serious issues. What’s more, mercury builds up in the body and the environment, so the mercury in a server today could easily end up in the tuna you eat years later.
Lesser known substances include chromium, often incorporated in metal housings for corrosion resistance. It can cause severe respiratory problems and cancers and even be absorbed by the skin. Cadmium, found in chip resistors, cabling, and monitor coatings, bioaccumulates with toxic effect on the kidneys and bones.
Brominated flame retardants are also common in IT hardware, including circuit boards and plastic casings. These substances do not break down easily in the environment. Human exposure is linked to impaired memory and learning, as well as interference with thyroid and estrogen hormone systems.
Polyvinyl chloride (PVC), also in plastic coatings and cables, can leach cancer-causing phthalates, which are associated with kidney and liver damage. Lead-based stabilizers are used frequently in PVC wiring. If burned, PVC releases chlorinated dioxins and furans, which can be toxic in low concentrations and, unfortunately, persist in the environment.
Some UPS devices still use lead acid batteries, similar to car batteries, which contain sulfuric acid. Batteries in general are rife with heavy metals, including mercury, nickel, and cadmium, as well as silver oxide, mercury oxide, and zinc carbon. Even alkaline batteries may include hazardous waste.
Recycle, recycle, recycle
So why recycle? Without proper disposal, these and other substances can leach into soil, contaminate water supplies, evaporate in the air, and infect the food chain.
What’s more, lack of recycling is a lost opportunity. Retrieval of material from used equipment means less need for mining and plastics manufacturing, which, in turn, means lower greenhouse gas emissions and less pollution. In fact, one metric ton of circuit boards can contain up to 800 times the amount of gold and 30 to 40 times the amount of copper mined from one metric ton of ore, according to the EPA. There are energy advantages as well. Recycling one million laptops, those basic business necessities, would save enough energy to power 3,657 U.S. homes for a year.
Fortunately, the IT hardware industry is getting better at building for recyclability, with Dell-EMC among the leaders in tackling the challenge of creating products that can more easily be dismantled into reusable and recyclable components. Companies should buy with recycling in mind, reviewing OEM policies and moving their purchasing toward vendors with a similar commitment.
Reclamation companies are advancing methods to achieve 100% recyclability of components. Recycling processes often include manual or, in some cases, automated disassembly of electronics to remove reusable components, separate out any batteries, and prepare the hardware for industrial shredding. From there, magnetic removal can retrieve steel and other precious metals, and a variety of other tools, such as optical sorting systems, can separate plastics, aluminium alloys, and other materials.
Third-Party Maintenance Advantages
Data centre managers are looking to reduce server waste and overall energy consumption, reuse IT hardware for as long as possible, and finally recycle truly obsolete systems in an Earth-friendly manner.
As important as proper decommissioning and recycling of hardware is for data security, human health, and the environment, it can be difficult for businesses to manage. This is where third-party maintenance comes in. Companies like Park Place Technologies can take on the tasks, defining and implementing a process for secure disposal. These supplementary services are particularly valuable for storage systems, because companies must not only consider environmental factors when retiring solutions, they must also enact data protection strategies to ensure information is inaccessible when hard disks are thrown away.
A non-core business function for most enterprises, asset disposal is an excellent candidate for outsourcing. What’s more, an experienced partner can offer expertise across the entire disposal process.
Consumers are becoming more environmentally conscious and want the companies they do business with to take reasonable steps toward greener operations. Responsible IT asset disposal can greatly contribute to the sustainability initiatives at any enterprise.
Schneider Electric critical infrastructure plays a vital role in the smooth running of the train leasing company’s on-premise, business critical data centre.
Angel Trains is one of Britain’s leading train leasing companies, providing rolling stock to several of the UK’s largest Train Operating Companies (TOCs) including Virgin Trains, Abellio Greater Anglia, Arriva Rail North, Great Western Railway and South Western Trains. Established in 1994 as part of the process to privatise British Rail, the company now employs approximately 150 people split between its primary headquarters in Victoria, central London and secondary premises in Derby.
The company owns some 4,000 plus rail vehicles which it leases to operators, on terms that are generally coterminous with franchises granted by the Department of Transport. “We’re a big-ticket asset leasing company,” says Andy Wren, Head of IT Services, “we have an intricate business model, and our IT systems that support it are similarly complex.”
Angel Trains’ IT department comprises eight people, including Andy Wren, and is based at the London HQ where the corporate data centre is located. Andy is responsible for leading the entire IT Services function including application development, software procurement, support for users and management of the data-centre infrastructure that underpins it all.
The key IT systems operated by Angel Trains are its asset-management system, a bespoke application developed in-house that manages the company’s inventory of rolling stock, and Oracle Business Suite which comprises the financial stack of software including accounts receivable, general ledger and invoice-management.
“Together those two applications are the most business-critical, being responsible for managing our revenue generation and collection, on which the business depends,” says Andy Wren. “However, we also run several other Microsoft server-based applications such as Sharepoint for our content and document management.”
Agility is key
The key priorities for the data centre, in which the applications are hosted, are agility, reliability and cost-effectiveness. Although the company tries to standardise leasing contracts with its customers, the reality is that each agreement has an element of customisation with consequent demands on the IT department’s development effort. An implication for the data centre is that it must have the agility to scale up capacity to accommodate additional servers, should they become necessary to meet customer requirements.
For reliability, the data centre has a South London-based disaster recovery (DR) site to which all its data is replicated and securely backed up. Driving all the IT investment decisions, however, is the perennial need to keep costs low while maintaining a consistently reliable level of service. Angel Trains’ IT department enjoys the convenience of being able to run its own systems on-premise, but IT management are cognisant of the fact that third-party service providers can offer hosting services from remote sites at competitive prices. The ownership, control and speed of connectivity from the on-premise solution has many benefits for the company, one of which is avoiding any latency issues, particularly with large files.
Owning versus outsourcing
“As an internal IT department, we are comfortable with having the ability to monitor our own infrastructure and IT equipment, rather than have a third-party managing it on our behalf at another location,” Wren says. “We investigated a number of different hosting partners, and found there is a great variety of services available, however, it was more cost-effective to own and manage our own on-premise data centre.”
Among the service options considered were simple colocation, where the servers could be hosted in a colocation site operated by a third-party company, but Angel Trains’ staff would continue to manage the systems remotely. Alternatively, some or all of the management of the systems could also be outsourced to an external service provider. However, the company decided to continue to operate its data centre in-house, with the help of a maintenance and support contract with APC by Schneider Electric Elite Partner, Comtec Power.
Resilience from start-to-finish
Angel Trains has been utilising Schneider Electric UPS systems for ten years, attracted initially by what their UPS products offered in terms of flexibility, with the ability to perform “hot swaps” of components such as batteries and power controllers.
Their data centre comprises a rack-based containment system, with critical power protection provided by Schneider Electric Symmetra PX UPS units. For additional resilience, there is a dual power feed running direct from the mains and an emergency backup power generation unit on-site.
With key challenges that included cost effectiveness, reliability and footprint, in terms of space, Angel Trains chose to adopt Schneider Electric’s ISX Pod architecture with InRow cooling for its data centre.
“Once we were introduced to Schneider Electric’s on-demand InfraStruxure™ solution with InRow cooling, we knew that was exactly the type of architecture we wanted to move forward with,” said Andy Wren. “We needed to make the new data centre as cost-effective, scalable and robust as possible, and the Schneider Electric racks and Symmetra UPS systems hit the mark in terms of resilience and efficiency – whilst helping us to optimise the fairly confined space in our data centre.”
Ultra-efficient cooling is provided by a combination of external chillers and condensers located on the roof of the building, in addition to the industry-leading InRow DX systems deployed within the Pod. The facility is also managed using Schneider Electric’s Award-winning StruxureWare for Data Centers™ DCIM (Data Centre Infrastructure Management) software, part of the EcoStruxure for Data Centers™ Solution.
“As a team we knew we could run an on-premise data centre cost-effectively” he continued. “But the Schneider Electric infrastructure components we have selected have been key to that process, ensuring the capacity is utilised in the most optimum way.”
Expertise in data centre services
As well as provisioning and installing much of the infrastructure equipment in the data centre, Comtec Power continue to provide monitoring and maintenance support in collaboration with Angel Trains’ IT staff. “When first searching for a partner, Comtec engaged with us far more than other potential suppliers,” said Andy Wren. “They had a wealth of expertise and understood both our challenges and drivers, in addition to being very flexible and competitively priced.”
Ongoing support provided by Comtec includes taking responsibility for rapid response to any faults in the infrastructure equipment, such as failures in air-conditioning units, including fans, and UPS battery malfunctions.
“Our team can handle some of the smaller tasks internally,” continued Andy Wren, “but under our maintenance service agreement, Comtec can proactively monitor and react to any faults within our core data infrastructure.”
As part of a recent upgrade to the standard maintenance agreement, Angel Trains connected the data centre infrastructure components to Schneider Electric’s EcoStruxure IT monitoring solution, previously known as StruxureOn. This delivers detailed 24/7 monitoring and critical insights straight to users’ mobile phones, as well as to Comtec’s engineering team.
“Through the remote diagnostics, Comtec can engage quickly to begin fixing issues whilst proactively avoiding any serious situations or downtime from developing. We chose Comtec because they have the most experience. They built the system and we are comfortable operating in partnership with them as our trusted advisors.”
“Having a strong working relationship with long-term partners such as Schneider Electric and Comtec Power has provided Angel Trains with the advice, skillsets and peace of mind that are necessary to run an efficient, on-premise, business-critical data centre,” said Andy Wren.
In today’s digital economy, cloud is king. Enterprises continue to feel the benefit of cloud adoption which, if deployed correctly, promises greater cost savings, better performance, enriched user experience and increased business agility.
By Russell Poole, Managing Director UK at global interconnection and data centre company Equinix.
What’s more, the public cloud revolution continues to gather pace. Rightscale’s recent State of the Cloud survey reported that public cloud adoption increased to 92% in 2017, with 20% of enterprises planning to more than double public cloud spend in 2018. Yet businesses continue to be wary of its potential weaknesses and, as such, many are missing out on exploiting everything cloud has to offer.
Despite years of use by some of the world’s largest companies, the primary barrier to cloud adoption is still data security. IDG[1] has reported that two in three enterprises cite security concerns as a key challenge when implementing cloud strategies. This lack of confidence was one of the reasons behind the formation of Equinix’s Privacy Office – a team of subject matter experts addressing data privacy compliance and security issues, which advises customers on how best to navigate this digital era, where businesses and consumers are becoming ever more connected and data is the shared currency.
Is the current uncertainty around data privacy and security – especially across Europe with the introduction of the GDPR – a valid reason to reconsider cloud adoption altogether, or is it simply becoming a driver for enterprises to work towards a more sophisticated hybrid cloud solution? With cloud adoption topping the agenda for global businesses undergoing digital transformation, what can be done to make the cloud a safer place?
Harnessing the cloud in all its guises
Amidst this scrutiny, cloud and network service providers first and foremost need to ensure they are deploying direct connection strategies to optimise cloud migration and allay fears around the safety of the cloud environment.
Companies have become reliant on a more collaborative approach to business and so are seeking safe and secure ways to connect to the many partners, customers, employees and geographies required in today’s environment to accelerate business performance and create new opportunities.
This increased but still secure level of connectivity can be achieved via interconnection – direct, private data exchange between businesses which allows them to avoid the public internet.
Businesses need these secure, high-performance connections across multiple cloud platforms (public and private). Utilising different clouds and cloud vendors, they are connecting to and collaborating with others around them in order to gain competitive advantage. And by prioritising interconnection in cloud delivery strategies, cloud service providers can better match the interconnection-first strategies being driven by their enterprise customers and partners, allowing them to accelerate business performance.
According to a recent study published by Equinix, ‘The Global Interconnection Index’, private interconnection between enterprises and cloud and IT service providers is expected to grow 160% annually between 2016 and 2020, while multi-cloud is increasingly being embraced by companies thanks to the flexibility it provides. Different cloud service providers tend to excel in different areas – so companies that use just one provider tend to find their needs well met in some parts, and under-served in others. More importantly, a multi-cloud strategy limits the impact of any single catastrophic attack, as cloud services do not sit with one provider – greatly reducing the risk of service interruption.
Keeping your data safe
By using direct interconnection strategies to optimise cloud migration, enterprises will not only decrease the likelihood of a cyber breach, but also ensure sensitive data is kept safe whilst still increasing the performance of backup and recovery processes over private, high-speed, low-latency connections. Of course, this brings with it peace of mind to anyone responsible for data within a company, as any unauthorised access could lead to severe financial penalties and a loss of brand reputation. Particularly given the arrival of the much-discussed GDPR, businesses must ensure they have full knowledge and control of where their data resides and who has access to it or put their business at further risk.
One critical safeguard against cyber breaches is encryption. The process has proven to be one of the most effective data protection controls as once data is encrypted, it becomes unusable without the encryption key. Therefore, protection of these encryption keys is vital to the protection of sensitive data, leading businesses to turn to the cloud for hardware security modules (HSMs) to protect their applications and data in hybrid environments.
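To illustrate the principle only – this is a generic sketch using the open-source Python cryptography package, not Equinix SmartKey or a real HSM interface – the common envelope-encryption pattern keeps the wrapped data key useless without a separately held master key:

```python
# Hedged sketch of the envelope-encryption pattern: data is encrypted with a
# data key, and the data key is itself encrypted ("wrapped") by a master key
# that is held elsewhere. Illustrative only; not any vendor's actual API.
from cryptography.fernet import Fernet

# Master key: in practice held in an HSM or key-management service,
# physically and administratively separate from the data itself.
master = Fernet(Fernet.generate_key())

# Per-dataset data key, used to encrypt the payload...
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"sensitive customer record")

# ...then the data key is wrapped by the master key, so the stored bundle
# (ciphertext + wrapped_key) is useless without access to the key service.
wrapped_key = master.encrypt(data_key)

# Recovery requires unwrapping the data key first:
plaintext = Fernet(master.decrypt(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"sensitive customer record"
```

Keeping the master key in a separate, controlled location is what makes the “close by, but separate” placement of data and keys meaningful for security and compliance.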
This increased importance of encryption is what drove us at Equinix to launch SmartKey – a cloud-independent, programmable key management and cryptography service to provide businesses with a secure and scalable end-to-end encryption strategy. SmartKey addresses performance and governance, as well as risk and compliance, across multiple cloud service providers and hybrid cloud infrastructures. The main goal is to protect an enterprise’s data wherever it resides and to keep it under the enterprise’s control, whilst locating that data and a company’s encryption keys in close by, but separate places for better security and compliance.
Interconnection in the cloud
Interconnection and cloud are inseparable in defining next-generation cloud service delivery. For some, an interconnected, multi-cloud future is already here, and for many others, it’s not far off. Direct connections boast higher performance, and by sending their data outside of the public internet, companies are making data transfers faster, more secure and more stable, leveraging the potential of the cloud without sacrificing privacy protection.
This next stage of digital transformation is only increasing enterprise demand for cloud service providers that know how to help their customers master an interconnected cloud environment. Without interconnection, it is impossible for enterprises, CSPs and NSPs to tap into the rich opportunities for growth and innovation the cloud presents. This is what will allow companies to achieve critical business and IT objectives, including an expanded global presence, increased IT capacity and scalability, and migration to new technologies and IT infrastructures. Ultimately, it’s about future-proofing the enterprise and creating an environment in which digital-first businesses can thrive – and interconnection will be at the very heart of this long-term success.
[1] IDG Data and Analytics Survey, 2016
DCS talks to Giordano Albertazzi, President of Vertiv in Europe, Middle East and Africa, about the company’s current focus in the data centre industry. Edge computing, energy efficiency, colocation, Cloud and security are amongst the topics discussed.
1. Edge computing is one of the major data centre developments at the present time. What work is Vertiv doing in this area?
Vertiv’s edge experts, in conjunction with an independent third-party consulting firm, recently conducted a global, research-based analysis of network edge use cases. The research identified the main archetypes for edge applications and the technology required to support these. It also highlighted 24 use cases considered to have the greatest impact on businesses and end users, based on projected growth, criticality and financial impact.
The research aimed to provide more clarity on key edge use cases and the implications for the design and operation of digital infrastructure. By analysing what edge really means in all its different forms – from content distribution to autonomous vehicles – Vertiv can help its customers, partners and other stakeholders accelerate and focus their edge strategies.
As interest and demand for edge computing takes shape, we will continue to develop and deliver edge-focused products. Our Vertiv SmartCabinet and SmartMod are just a couple of examples of modular “plug and play” style solutions that are intelligent, fully integrated systems ideal for edge infrastructures.
2. Recent Vertiv research identified four specific types of edge use case. Can you tell us a little bit about each, starting with the data intensive applications?
Data Intensive
The Data Intensive archetype encompasses use cases where the amount of data is impractical to transfer over the network directly to the cloud – or from the cloud to the point-of-use – because of data volume, cost or bandwidth issues. Think, for example, of growing IoT networks and high-definition content delivery. In the case of the latter, in 2016 video accounted for 73 percent of all IP traffic and is expected to grow to 82 percent by 2021. Big players like Netflix are partnering with colocation providers to expand their delivery networks and bring data-intensive video streaming closer to users, as well as reduce costs and latency. With the growing demand for high-definition videos, local hubs will have to increase support for current metro hubs to reduce bandwidth costs and latency issues.
Organisations are already struggling to manage the volume of data being generated by IoT networks, as they must move huge amounts of data created by devices and systems to a central location for processing.
3. And then there are the human-latency sensitive applications?
The Human-Latency Sensitive archetype covers use cases where services are optimised for human consumption, with speed being the defining characteristic. Customer-experience optimisation is a good example of a use case for this archetype. Speed has a direct impact on user experience in applications. Websites which optimise for speed using local infrastructure notice a direct effect on their page views and sales. According to Google, adding a 500-millisecond delay to page response times results in a 20 percent decrease in traffic.
Another use case example within this archetype is smart retail and immersive technologies including augmented reality. Delays in delivering data directly impact a user’s technology experience – and a retailer’s sales and profitability. As such use cases continue to grow, so will the need for local data processing hubs.
4. And machine-to-machine latency sensitive applications?
Machine-to-Machine Latency Sensitive applications include use cases where services are optimised for machine-to-machine consumption. Speed is a characteristic of this archetype as it is needed for machines to process data to their full capabilities. The consequences for failing to deliver data at the required speeds can be even higher in this case than in the Human-Latency Sensitive Archetype.
An example of an application of this archetype would be latency sensitive systems used in automated financial transactions. Prices can fluctuate in milliseconds and if systems are lacking the latest required data and cannot optimise transactions, potential monetary gains can be converted into losses. According to Tabb Group, brokers can lose as much as $4 million in revenues per millisecond if electronic trading platforms are 5 milliseconds behind the competition.
Additionally, smart grid technologies fall into this archetype as the electrical distribution network needs to self-balance supply and demand and manage electricity use in a sustainable, reliable and economical way. It enables distribution networks to self-heal, optimise for cost and manage intermittent power sources, assuming the right data is available at the right time.
Other Machine-to-Machine Latency Sensitive applications include smart security systems that rely on image recognition and real-time analytics.
5. And, finally, the life critical applications?
Life Critical use cases have serious implications for human safety. Autonomous vehicles and drones, for example, can be a great benefit when operating effectively, but can be positively dangerous to human health if they malfunction.
In the near future, self-driving cars will be on the road. If these kinds of systems don’t have the correct data, the consequences could be disastrous. The same is true of drones. We could easily be looking at a future where hundreds of delivery drones are flying over a city at any given time.
Increased use of technology in healthcare is also an example of a use case in the Life Critical archetype. Electronic health records, cyber medicine, personalised medicine (genome mapping) and self-monitoring devices are reshaping healthcare and generating huge volumes of data.
6. And how is Vertiv developing edge data centre solutions to address any/all of these?
We are building on this initial phase of research to define technology requirements for each archetype – and to help accelerate edge deployments and ensure local infrastructure provides the security, speed and availability a particular application requires.
This includes producing fully integrated solutions including power, thermal, security, along with management and software integration – all in a single package. These can be flexibly deployed to match data centre infrastructure economics at the network edge, so that operators can efficiently upgrade and migrate thousands of sites. What has changed most fundamentally is the ability to make the transition fast, simple and flexible for these network operators at hundreds or thousands of edge locations.
As mentioned previously, our Vertiv SmartCabinet and SmartMod are examples of solutions that support edge applications and we are continuing to develop our product portfolio to address the needs of our customers and anticipate the broader market trends.
7. Energy efficiency remains another major data centre issue – in general terms, what are the areas and technologies within this that interest Vertiv right now?
There is a limit to what the traditional UK grid infrastructure can cope with and, moving forward, we are certainly likely to see more innovative uses of renewables like solar, as well as energy storage being used to complement existing data centre power architectures. Legacy-designed data centres are most at risk, as they might not be able to meet power demands in today’s connected, mobile age, due to constraints with incoming power supplies and local infrastructure.
One area of efficiency that is often overlooked is cooling equipment. In an average data centre, cooling accounts for approximately 40 percent of total energy use – a staggering percentage given the level of equipment used in the building. To ensure availability without breaking the bank, a data centre needs a thermal management solution designed to optimise cooling efficiency while lowering the total cost of operations. This requires smart, flexible technologies that can promptly adapt to changing temperatures. Such systems can automatically respond to conditions in the data centre to optimise cooling and improve system reliability while reducing operating costs by up to 50 percent. While newer cooling units often come equipped with the latest technology and features, older units can also be upgraded to achieve similar cost-saving and efficiency benefits.
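The arithmetic behind those figures is worth spelling out: if cooling accounts for roughly 40 percent of total energy use, halving cooling energy trims the total bill by around 20 percent. A back-of-envelope sketch with assumed figures:

```python
# Back-of-envelope sketch of the claim above: if cooling is ~40% of total
# energy use, halving cooling energy cuts the total bill by ~20%.
# All figures are illustrative assumptions.

total_kwh = 1_000_000      # annual facility consumption, assumed
cooling_share = 0.40       # cooling fraction cited above
cooling_reduction = 0.50   # "up to 50 percent" operating-cost reduction

saved_kwh = total_kwh * cooling_share * cooling_reduction
print(f"Saved: {saved_kwh:,.0f} kWh ({saved_kwh / total_kwh:.0%} of total)")
# Saved: 200,000 kWh (20% of total)
```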
8. Vertiv recently joined the Ericsson Energy Alliance – can you tell us a little bit about this?
The Ericsson Energy Alliance is a unique complementary site solution partnership to drive cost-effective and sustainable service provider network evolution towards 5G, within the Ericsson Radio Site System.
The alliance combines Vertiv’s global expertise in power, thermal and infrastructure site management solutions, and Sweden-based NorthStar’s leadership in battery and energy storage solutions, into the Ericsson Radio Site System. The partnership will help establish a competitive ecosystem and management interface, as well as help to increase the market share of the enclosure and power parts of the portfolio.
The alliance increases competitiveness and cost efficiency through the broadened portfolio and greater access to new technologies, and brings sustainability benefits with it.
9. Vertiv has also announced a partnership with Telefónica – can you tell us a little about that?
Vertiv and Telefónica formed a global, long-term partnership to boost energy savings through fit-for-purpose infrastructure solutions. Under the frame agreement, Vertiv will provide Energy Savings as a Service (ESaaS) across Telefónica’s core and access sites in Europe and America, covering all facets from initial site assessment to comprehensive maintenance services.
Through this agreement, Vertiv experts will carry out energy audits and deliver wide-ranging assessment reports outlining the projected KPIs as well as ROI Energy Savings Sharing for each specific site. The reports comprise a series of recommendations for optimizing the performance, capacity, availability and efficiency of critical infrastructure, ultimately increasing energy savings. Vertiv will provide total support from consultancy to execution to 24/7 monitoring and maintenance services, requiring no capital expenditure (CAPEX) from the customer, with Vertiv fully financing the project as part of the ESaaS contract.
10. Moving on to other industry ‘hot topics’, please can you share Vertiv’s thoughts on, for example, how the colocation market seems to be developing?
In terms of the colocation market, we should most certainly be looking to the speed at which cloud adoption is happening. In many cases, cloud providers can’t keep up with capacity demands and in reality, some would rather not try. They prefer to focus on service delivery and other priorities over new data centre build and will turn to colocation providers to meet their capacity demands.
With their focus on efficiency and scalability, colos can meet demand quickly while driving costs downward. The proliferation of colocation facilities also allows cloud providers to choose colo partners in locations that match end-user demand, where they can operate as edge facilities. Colos are responding by provisioning portions of their data centres for cloud services or providing entire build-to-suit facilities.
In terms of other ‘hot topics,’ the emergence of the Gen 4 data centre is also something that is a focus area for us. Whether traditional IT closets or 1,500 square-foot micro-data centres, organisations increasingly are relying on the edge. The Gen 4 data centre holistically and harmoniously integrates edge and core, elevating these new architectures beyond simple distributed networks.
This is happening with innovative architectures delivering near real-time capacity in scalable modules that leverage optimised thermal solutions, high-density power supplies, lithium-ion batteries, and advanced power distribution units. Advanced monitoring and management technologies pull it all together, allowing hundreds or even thousands of distributed IT nodes to operate in concert to reduce latency and up-front costs, increase utilisation rates, remove complexity, and allow organisations to add network-connected IT capacity when and where they need it.
11. Specifically, how do you see colocation developing as Cloud Computing becomes more and more pervasive?
Vertiv and 451 Research’s report, ‘The Impact of Cloud and the Internet of Things on Data Center Demand’, highlights how enterprises continue to shift IT from on-premise data centres to off-premise colocation, hosted private cloud and public cloud environments. While companies are on average retaining as much as 40 percent of their workloads in-house, and up to 36 percent of workloads in non-cloud environments, most survey respondents plan to increase their use of private and public cloud over the next two years.
For providers of leased data centre space, the continued move to public clouds will drive demand under a variety of circumstances, including when:
1. Cloud providers lease data centre space rather than build it themselves.
2. Enterprises continue to shift workloads and data that are not suitable for public cloud off-premises (e.g., to a private cloud).
3. Cloud providers and enterprises seek to install points of presence in network-dense data centres to interconnect with providers, partners and customers.
While this survey focused on enterprises rather than cloud providers, separate 451 Research analysis has found that cloud providers outside of the top three (Amazon, Microsoft and Google) have a strong tendency to lease nearly all their data centre space. Even the top three providers, which have built very large data centre campuses, tend to lease large amounts of data centre space from specialised providers, and this tendency seems to have increased in recent years due to strong cloud take-up by enterprises and the need for cloud providers to add global infrastructure quickly.
12. And we can’t talk about data centres without mentioning security! What’s the Vertiv take on the recently introduced GDPR?
While compliance is vital for any organisation, IT teams must be conscious not to get forced into treating the GDPR purely as a box-ticking exercise. Firms must be smart in meeting regulatory requirements while using data - and IT holds the key to this success.
The storage, protection and handling of data are today recognised as providing a critical edge in the market. Indeed, the ownership of insights that your competition doesn’t have, and the ability to mine and use data better than others, is central to the best businesses. This heightened ‘data-based’ competition is set within a landscape that IDC predicts will see us create 180 zettabytes of data in 2025. So irrespective of the GDPR, companies will be applying pressure on their teams to capitalise on data.
Business leadership, business units and employees themselves, therefore, need to understand the impact that adhering to changes in regulation will actually have on their business. Knowing which boxes need to be ticked by the organisation is not the answer in isolation; there needs to be a sustainable approach to competitive advantage that supports immediate compliance needs, but critically that can be deployed by IT.
Looked at another way, GDPR shouldn’t be the end point – but a reset in the way that businesses use and manage their data to stay competitive.
13. The original DCIM products didn’t quite deliver on the promise. Do you think that the newer DCIM offerings are better equipped to deal with the challenges of managing the data centre environment?
Yes, most definitely. While DCIM is not a magical tool, it does have the capacity to give real-time insight into power, space and cooling, which helps you manage capacity, reduce risk and increase efficiency. Vertiv has evaluated the promises of DCIM and believes it can deliver real value to the data centre in four key areas: thermal management, capacity planning, data centre monitoring and energy management.
When it comes to thermal management, balancing cooling capacity between IT devices and the facility can be a time-consuming and costly exercise if the right tools aren’t available. Our solutions allow operators to analyse and monitor real-time conditions and set exhaust temperatures, among other features.
Capacity planning can also be a hefty cost, and it’s vital to know exactly what you have so that you can maximise your resources. Our solutions allow for that, providing accurate and up-to-date information and tracking device details with drill-down capability. Similarly, with data centre monitoring and energy monitoring, we can give real-time visibility into energy consumption and operating efficiencies, use elements such as our alarm notification system, and understand the interdependencies among devices to plan for power changes.
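As a generic illustration of that capacity-planning discipline (a hypothetical sketch, not Vertiv’s DCIM software), the core check is simply measured rack load against a power budget:

```python
# Hedged, generic illustration of rack-level capacity planning: track
# per-rack power against a budget and flag racks without headroom for the
# next device. All figures are assumptions for the example.

RACK_BUDGET_W = 5000

racks = {                       # measured device draws per rack, in watts
    "rack-01": [900, 850, 1200, 760],
    "rack-02": [1100, 1400, 1350, 980],
}

def headroom(rack: str, next_device_w: float) -> float:
    """Watts left in the rack budget after adding the proposed device."""
    return RACK_BUDGET_W - sum(racks[rack]) - next_device_w

for rack in racks:
    h = headroom(rack, next_device_w=600)
    print(f"{rack}: {h:+.0f} W -> {'OK' if h >= 0 else 'OVER BUDGET'}")
# rack-01: +690 W -> OK
# rack-02: -430 W -> OVER BUDGET
```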
IT workloads are expected to grow in the next two years, with some of this attributed to the increase in IoT. This will inevitably mean higher volumes of data being generated, likely leading to a shift away from on-premise data storage. One possible outcome is that the data will move to cloud hosting and colocation data centres as well as to network operator infrastructure and edge computing systems. As our work on defining edge archetypes makes clear, many edge applications, while relying on cloud computing for data archiving and deep learning, require distributed IT capacity that is located close to the source of data.
For too long the world of IT – and especially the major ISVs – created software by IT professionals for IT professionals.
By Steve Broadhead – Broadband-Testing.
Why weeks of training and months of installation and deployment time, just to master one company’s idea of what software should look and feel like, was ever accepted is purely down to base ignorance; surely it can’t be that users actually enjoyed that experience, nor the budget holders who paid for it. And when it comes to critical elements of the IT infrastructure such as service and support, that “this is the way we do it” approach seems more outrageous than ever. Service desks are all about optimising IT, minimising downtime and maximising productivity; they are also about allowing users to understand the process they are involved in. If and when confronted by IT gobbledygook, users tend to freeze and switch off – exactly what a service desk is NOT designed to do.
Back to the training element: why should a service desk need to be staffed by highly-trained IT professionals? They are not the engineers who fix things when something really does go wrong, but simply the conduit. So it’s clear that a simplification of “the artist formerly known as helpdesk” was long overdue, as was a simplification of both the deployment process at one end and the user involvement at the other. I’ve recently been testing V12 of ServiceDesk, a cloud/SaaS-based product from Richmond Systems in the UK, and this is precisely the angle Richmond has taken – surely a “Hallelujah” moment?
If simplicity came at the expense of functionality, then that would hardly be a qualifier for a successful approach, but this – fortunately – isn’t the case. ServiceDesk V12 is a genuine next-generation solution which incorporates all aspects of service desk software requirements, and can be deployed in the cloud or onsite, whichever suits. Key areas covered include IT support – problem response and resolution – and service management, controlling every aspect of the IT infrastructure and asset management, so the entire IT estate is visible and supported. However, it looks to go beyond the physical IT estate into other critical areas of the business, such as HR and facilities management. Unless all these areas are integrated, there is no complete solution – everything is inter-related to everything (and everyone) else within a company, and this is a fundamental concept of the software. Importing existing data is also simple and flexible, and Active Directory is fully supported in every aspect of the product.
The solution also looks to support historic infrastructure elements such as ITIL, while moving companies into the modern DevOps-driven agile world of digital transformation. This would be pure marketing-ese, were the software not actually designed with exactly this in mind. In line with this “modernisation”, a complete Customer Service Portal (CSP) has been developed which hugely streamlines the support process, enables users to act for themselves and makes the whole solution extremely MSP (Managed Service Provider) friendly, with SLA support being a fundamental aim of the software. So, the entire ServiceDesk V12 solution is controlled by just two user interfaces – one for the service desk staff (or agent/CSP) and one for the users themselves to self-service (or CSP customers).
The system revolves around configuring business and user profile rules, which then control data access with automated associations, links and options based upon these pre-defined rules. This is where automation kicks in to prevent human error – the key enemy of the service desk and of any similar application where internal and user-facing worlds collide. Keeping it simple is key. A single internal-facing interface controls all of the ServiceDesk features, based around a menu of options on the left side of the screen, which works in conjunction with a top menu of areas of the system to explore, these being: Assets, Incident, Knowledge, Service, Problem, Change, Portal, Utility and a Help function. Highlighting any of these options shows what sub-options are available; clicking on any option takes you to the relevant screen. The left-hand menu provides access to all the different view options.
On opening a page, clicking on any item either drills down to an information screen below or presents you with a list of further options. The interface is completely consistent in how it works throughout end-to-end processes. In addition, six large buttons in the centre of the screen take you directly to different parts of the system, as defined – in this case, critical incidents about to fail an SLA, active incidents, incident status summary, current users logged in, user profile, and dashboards and charts. On all screens, the pre-defined rules prevent an operator from entering random information that the rest of the support team then needs to try and interpret – everything is designed with a logical flow. And it’s all in plain English, not ITese!
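As an illustration of that rule-driven approach – a minimal sketch in Python, with hypothetical field names rather than anything taken from the Richmond product – the principle is simply that pre-defined rules, not free text, decide what an operator can record:

    # Hypothetical sketch of rule-driven field validation, loosely modelled on
    # the behaviour described above; none of these names come from ServiceDesk.

    ALLOWED_VALUES = {
        "priority": {"Low", "Medium", "High", "Critical"},
        "category": {"Hardware", "Software", "Network", "Access"},
    }

    def validate_incident(fields: dict) -> list:
        """Return a list of rule violations rather than accepting free text."""
        errors = []
        for name, allowed in ALLOWED_VALUES.items():
            value = fields.get(name)
            if value not in allowed:
                errors.append(f"{name}: {value!r} is not one of {sorted(allowed)}")
        return errors

    # An operator cannot save "Urgent-ish" as a priority; the rule forces a
    # value the rest of the support team can interpret consistently.
    print(validate_incident({"priority": "Urgent-ish", "category": "Network"}))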
While the primary service desk facilities themselves make a strong enough case for Richmond’s solution, what really differentiates it is the additional CSP. For years IT has been trying to develop a successful customer-facing element to its solutions, but all too often these have simply been too complex for the users. Self-service is all about ease of use and understanding what to do. Anyone who has tried to use the automated checkouts in a UK supermarket will understand the trials and tribulations involved in perfecting such a system! What Richmond provides is a fully customisable portal – no programming required – that is as easy to create as it is to use, based around a series of onscreen tools and a workflow planner, the latter not unlike a traditional flowchart.
In order to make the CSP usable on any platform, Richmond uses specially designed adaptive web interfaces to ensure that support pages look good and are correctly presented to customers according to the screen size of the device they are browsing with. Typically, when designing a website, you would need to consider how the site would look on each device, but with this approach the formatting is automatically adapted to the platform.
A workflow planner is very much at the heart of the portal design, and even large workflows (100+ pages) load within a few seconds. The workflow planner uses coloured panels to indicate homepages (orange), portal pages (blue) and nodes (green), the latter being pages where incidents can be submitted – for example, at the end of a support path. A homepage editor allows customisable control of layouts, colours, fonts and functions. New panels can be added that run workflows, present incident and change stats by status with drill-down to underlying lists, and hyperlink to any external website. A services status panel can be positioned on the workflow homepage, and an HTML control allows any additional text/imagery/links to be positioned on the page. The Resource Centre on the Richmond Support+ Customer Portal links to a very useful HTML editor, which lets you copy and paste content from office documents while the HTML code is created automatically, so you don’t need HTML skills to use this feature.
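To make that structure concrete, here is a minimal sketch of how such a workflow might be modelled – hypothetical names throughout; the real planner is configured visually rather than in code:

    # Illustrative data model for a portal workflow of the kind described above:
    # homepages (orange), portal pages (blue) and submission nodes (green).

    from dataclasses import dataclass, field

    @dataclass
    class Page:
        name: str
        kind: str                      # "homepage", "portal" or "node"
        children: list = field(default_factory=list)

    root = Page("IT Support", "homepage", [
        Page("Report a fault", "portal", [
            Page("Submit hardware incident", "node"),
            Page("Submit software incident", "node"),
        ]),
        Page("Request new equipment", "portal", [
            Page("Submit equipment request", "node"),
        ]),
    ])

    def walk(page: Page, depth: int = 0) -> None:
        """Print every support path a customer can follow through the workflow."""
        print("  " * depth + f"{page.name} [{page.kind}]")
        for child in page.children:
            walk(child, depth + 1)

    walk(root)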
The CSP editor allows you to fully customise the menu, including definition of colours, icons, labels, position, and the action that is carried out when a menu item is selected, including the ability to start specific workflows. Multiple styles can be created and changes made to a style will automatically be applied to the menu. Again, no coding skills are required to create fully personalised service hubs that give customers access to support, knowledge articles, company information, intranets and external websites. CSP pages can be enhanced with a panel control, which allows users to create clickable page elements that give access to incident lists, change lists and a wide range of support functions. Panels can have any combination of labels, descriptions, images and text, and can be presented in a single or two-column layout. They have a number of functions - for example they can link to workflows, support pages, websites, Knowledge Base articles and file downloads, and they can be used to add incidents and incident templates.
This wide range of customisation options, combined with the fact that no programming capability is required, means that companies can create genuinely simple-to-use portals that are properly suited to their business and user base. This equally makes the CSP a perfect tool for MSPs servicing multiple clients from effectively a single platform – think multi-tenant buildings, for example – but with each client appearing to have their own portal. It doesn’t even have to stop at IT; the CSP is effectively a great user interface for any customer-facing business. Any user base familiar with online services – e.g. Amazon – will instinctively be able to use the CSP; this is about as far removed from traditional enterprise software platforms as it’s possible to be – and in a good way! From an analytics perspective, the CSP records the entire customer journey, including any pages visited, controls clicked, and inputs given whilst raising a support request. CSP User Input Recording allows you to specify particular user inputs that are important to you; these inputs are presented in a new tab on the incident record, and can also be automatically appended to the incident description.
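Conceptually, journey recording amounts to something like the sketch below – an invented structure for illustration, not the product’s own schema:

    # Minimal sketch of customer-journey recording: each page visit, click and
    # input is appended to a trail that can later be attached to the incident.

    from datetime import datetime, timezone

    journey = []

    def record(event_type: str, detail: str) -> None:
        journey.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            "detail": detail,
        })

    record("page", "Report a fault")
    record("click", "Submit hardware incident")
    record("input", "asset_tag=LT-0412")   # a flagged 'important' user input

    # Flagged inputs could then be appended to the incident description:
    important = [e for e in journey if e["event"] == "input"]
    print("\n".join(f"{e['at']} {e['detail']}" for e in important))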
This approach is so far removed from the “helpdesk” of old – where most of the operators needed help to understand it! – that it’s difficult to emphasise what a relief it is to see a solution that does what it says on the tin: make life easy for support staff, so they can make life easy for their user base. I know – that’s just basic logic, but one of IT’s great flaws is how it interprets base logic and turns it into “IT logic”. Here we’re back where we started – using the plain old English language and logic. Cryptic crossword lovers need not apply…
The way that businesses design and build their control rooms has changed significantly over the last few decades.
By John Halksworth, senior product manager, Adder Technology.
While the responsibilities inside a control room environment remain the same — making important decisions in what can often be mission-critical situations — its design has shifted to more fully consider human factors, or employee wellbeing, and their effect on operational efficiency. New technologies, including IP-based, high performance KVM systems, are being deployed to enable this.
Even if we do not work in control rooms ourselves, we are all aware of their importance across a range of industries. Air traffic control towers are one of the most widely recognised examples, responsible for facilitating the safe take-off and landing of hundreds of aircraft each day. There are also the production and post-production studios you would find inside the offices of broadcasters, where video is distributed, edited and primed for transmission.
These are just a few of the industries reliant upon control rooms, and while each might look different, the purpose of the design within them is the same: to provide situational awareness for control and monitoring.
Of course, when looking to design a control room, there are the obvious requirements that serve a critical purpose, such as monitors (and larger, wall-mounted screens if required), desks, chairs and physical computers. What some designs fail to properly consider, however, are the supporting and enabling technologies like KVM (keyboard, video, mouse), which often have the biggest impact on the efficiency and workflow of the control room.
IP-based, high performance KVM solutions have grown to become a fundamental part of many control rooms, permeating all parts of the environment to optimise workflows, improve ergonomics and increase efficiency. Specifically, IP-based KVM technology improves the control room in three distinct ways.
Streamlined desk ergonomics
By deploying IP-based, high performance KVM at the desk of each end-user, the ergonomics of the space are dramatically improved: users are instantly able to control numerous computers with a single keyboard and a single mouse, as the physical hardware can be located in a different room. This essentially means that if a user needs to quickly switch between applications or computers, this can be done without moving between user stations.
Also, for those that have multiple monitors situated at their workstations, more recent IP-based, high performance KVM solutions provide dual-screen support, while others give users the freedom to manage multiple computers by seamlessly scrolling across screen boundaries. This removes the switching process and further improves the user experience that KVM can deliver.
Taking advantage of collaborative workflows
Thanks to the nature of IP-based, high performance KVM, which allows users to swap between different physical computers from a single location, businesses are able to use their existing networks to their advantage to devise more efficient and collaborative ways of working. As businesses can benefit from the flexibility afforded to them through their IP infrastructure, workflows can easily be configured and scaled without the limitations associated with traditional KVM. Unlike other KVM solutions, IP-based KVM can be deployed anywhere that’s connected by the IP network.
This function is also the perfect solution for businesses looking to take advantage of the hot desking trend within control rooms. One of the biggest concerns for users when hot desking is a lack of familiarity with the machines and technology they’re using when sat in different places, but with KVM switching technology, they can feel instantly at home.
Keeping computers secure
IP-based, high performance KVM also allows businesses to centralise their computing resources into an access- and temperature-controlled environment, with no disruption to regular operations. On one level, this means that there is suddenly a lot more space in the control room. It also reduces the security risk that comes with computers sitting in the same location where work takes place. On a more technical level, it means that the physical computers can be better looked after, with environmental factors such as dust and humidity moderated and monitored for an increased lifespan.
Together, these demonstrate the main ways that IP-based, high performance KVM is currently impacting the workplace in a positive way, and as technology, networks and work responsibilities continue to evolve over time, the benefits of KVM will shift in tandem.
Shadow IT is a term used to describe technology systems and solutions built and used by business units in enterprises without explicit organizational approval from the IT function. This article was written by Greg Smith, Michail Papadopoulos, Andreas Macek, Noémie Bristol-Courgeon and Elliot Gilford, all of whom work out of the London office of Arthur D. Little.
Shadow IT is becoming both pervasive and unavoidable across a wide range of departments within most organizations. Technology now allows business users to download their own digital solutions without the permission, participation, or even knowledge of the official IT department. There have been many negative stories around the consequences of this trend. However, if managed correctly, shadow IT can actually serve as a key enabler, driving innovation and rapid time to market, rather than becoming a sinkhole for effort and budget. Given that it will happen regardless of attempted central control, IT departments should therefore learn to embrace shadow IT as an essential element of modern business life – and be prepared to manage it effectively. In doing so, they will genuinely empower employees and start to demolish the traditional divide between the business and IT.
In this article we will look at some of the drawbacks and potential benefits of shadow IT, and how companies can go about reaping these benefits. We will focus on the software-as-a-service (SaaS) aspects of shadow IT, not because all SaaS solutions are deployed as shadow IT, but rather because SaaS is currently the approach most used by employees to install shadow IT solutions.
Why shadow IT is unavoidable
Enterprise software is set up and configured to satisfy the requirements and needs of the business, rather than those of individual users. The aim is therefore to deliver a consistent, standardized approach. However, in today’s highly personalized world, where the commodity off-the-shelf applications we use every day can be heavily customized, users now expect far more of the systems they use in their business lives. In many cases standardization has led to businesses deploying inflexible, bureaucratic, non-intuitive software applications, for which it feels as if the solution is the master and the employee the servant.
IT organizations, processes, tools and technology have evolved over time to address major project and business needs – such as delivering back-office efficiency through ERP software. However, the process of re-platforming from legacy technologies and ways of working to current-day needs has simply not provided the same level of personalization and user-friendliness that employees expect in today’s consumer-driven digital world.
By contrast, shadow IT is seen as fresh and new, using what is perceived by employees as leading-edge technology. It aligns perfectly with their demands and requirements, as it was set up by business users. In addition, shadow IT embraces the latest technologies via SaaS, platform-as-a-service (PaaS), infrastructure-as-a-service (IaaS), and other consumption-based models, and is agile by design – not as a costly retrofit.
Focusing on shadow IT using the SaaS model, it is obvious why users are embracing it:
The drawbacks of shadow IT
Press coverage of shadow IT has normally concentrated on its negative points, focusing on a long list of detrimental implications. In fact, the term “shadow IT” itself is most likely to be used by IT functions in a pejorative way. This is understandable, as traditional enterprise IT departments place a premium on control and centralization, and don’t like end users going behind their backs, especially when the implied message is that what IT provides is not good enough. Shadow IT is therefore normally seen as an unacceptable risk to the organization that needs to be actively eliminated, with the most common drawbacks being that it:
The benefits of shadow IT
Despite the long list of possible problems and opposition from IT, end users continue to see benefits from shadow IT, often crediting it as central to driving innovation, business transformation, and increased productivity. By embracing shadow IT, enterprises can realize benefits, including:
How businesses can manage shadow IT and reap its benefits
It is a natural response for IT to feel overwhelmed by shadow IT, and to therefore attempt to block everything and anything not directly sanctioned by the IT function. However, that will stifle innovation and productivity – businesses must recognize that shadow IT emerges as employees seek to be more efficient and take control of their working lives. It is not a conscious attempt to endanger or undermine the business. As such, IT must start looking at ways to manage and monitor shadow IT usage. The enterprise must be able to keep pace with today’s rapidly evolving business landscape, and that requires taking advantage of the cloud/SaaS revolution. It also requires a more collaborative approach across the organization, recognizing that technology innovation can no longer be the preserve of a single business department.
There are multiple methods that can be used by IT to pragmatically manage or channel shadow IT:
How embracing shadow IT led to 43% OPEX savings
A global education company was facing rapidly escalating costs for its video-conferencing (VC) solutions, which were a critical part of its business environment. Arthur D. Little was brought in to identify cost inefficiencies with the current solution, assess alternative VC tools, and select the best tools to fit the organization’s global needs and digital strategy.
The client had been using the same VC tool for the past 20 years, and while it had been fit for purpose in the 1990s and undergone a series of upgrades and retrofits, it no longer met today’s business needs. This resulted in two types of behavior:
While assessing the tools available in the market, ADL also engaged with end users to identify their VC needs and real-world use cases, while offering an amnesty for shadow IT. Employees were encouraged to be honest about their VC experiences through interviews, polls and forums. Research and employee feedback pinpointed a specific tool that not only met business needs, but that a large number of teams were already using – and even paying for separately, unknown to IT.
The client therefore added the new VC solution to its existing platform, seamlessly integrating it with the company portal, help desk and email. Additionally, training was provided on the solution, through instructor-led sessions, quick-help articles and regular open engagement on the client’s internal forum.
Embracing shadow IT and adding the VC solution to the corporate platform brought annual OPEX savings of 43%, by eliminating duplicate payments and unnecessary additional services. It also greatly simplified internal processes, as the employees’ favorite solution was directly integrated into the corporate platforms, allowing for much richer functionality. This illustrates some key lessons that can be applied more broadly:
● Embracing shadow IT and listening to employees’ needs can unlock large-scale savings
● The earlier IT engages with users, the sooner costs can be reduced
● While not all shadow IT tools work for all users, it is still likely that some tools emerging from shadow IT will become the solution of choice for the whole business.
Insight for the Executive
Shadow IT is an accepted trend, with the majority of users already deploying SaaS solutions in their workplaces without the knowledge or sanction of the IT department. There is no way to reverse this – the reality of our cloud-pervasive, highly connected world is that shadow IT is the new normal within today’s enterprise.
IT departments must learn to accept shadow IT as an element of modern working life that will happen regardless. Rather than fighting it, they should spend their time, energy and budgets on the tools, practices and training needed to properly manage shadow IT and genuinely empower employees.
This means the IT function needs to take a more collaborative approach across the organization, and adopt new practices for managing shadow IT by effectively making it an integral part of the overall enterprise IT strategy. This can be accomplished by:
Shadow IT is effectively a paradigm shift in the modern world of enterprise IT that has created profound changes in the fundamental model of how IT departments must serve the needs of the business. As with all change, there are challenges to be addressed, but also ample opportunity for significant benefits to be gained.
Take a New Look at 3-Phase Power Distribution.
Alternating phase outlets alternate the phased power on a per-outlet basis instead of a per-branch basis. This allows for shorter cords, quicker installation and easier load balancing for 3-phase rack PDUs. Shorter cords mean less mass, making them less likely to come unplugged during transport of the assembled rack.
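A quick worked example shows why per-outlet phase rotation makes balancing almost automatic when racking equipment top to bottom; the per-server loads below are illustrative assumptions:

    # With alternating-phase outlets, consecutive outlets rotate through the
    # phases, so plugging servers in sequentially spreads the load evenly.

    loads_amps = [1.8, 2.2, 1.9, 2.1, 2.0, 1.7]   # assumed per-server draw

    phases = {"A": 0.0, "B": 0.0, "C": 0.0}
    for i, amps in enumerate(loads_amps):
        phase = "ABC"[i % 3]     # outlet i is wired to the next phase in turn
        phases[phase] += amps

    for phase, total in phases.items():
        print(f"Phase {phase}: {total:.1f} A")
    # Phase A: 3.9 A, Phase B: 4.2 A, Phase C: 3.6 A
    # -> close to balanced, with no manual phase assignment required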
When talking about 5G, thoughts often drift towards a new generation of connected mobile devices featuring faster download speeds. This is certainly a part of the 5G revolution.
By Brian Lavallée, Senior Director of Portfolio Marketing, Ciena.
GSMA Intelligence predicts an influx of 1.1 billion 5G connections globally by 2025. However, the wireless communication aspect and the growing hype around end devices for consumer applications are only one part of the story – and indeed a very small part in terms of making the UK 5G-ready. For all the new smartphones, IoT devices, autonomous vehicles, smart cities and wireless eHealth monitors, there is a requirement for widespread implementation of supporting wireline infrastructure, which will need to be met before next generation devices can be rolled out reliably, securely and in mass volumes. This is because wireline networks interconnect wireless radios to each other and to the data centres where accessed content is hosted.
For network operators and service providers, this represents a major consideration, as well as a significant business opportunity. It will require substantial investment and planning upfront to ensure the continued viability and compatibility of future ready high-performance networks.
Rising expectations
One aspect everyone is quite clear on is that the anticipated 5G use cases will transform business, leisure and public services, delivering an unprecedented user experience. However, to those in the telecom industry, it’s evident that today’s networks will have to be upgraded to address the greatly increased data requirements associated with 5G connectivity. The influx of connected machines – expected to number in the tens of billions in the coming years – will drive a surge in bandwidth demand. And for applications such as autonomous vehicles, there’s an inherent requirement for low latency, to ensure they can make decisions in near real-time for safety and security reasons.
Compared to previous generations, 5G networks will deliver significant increases in performance via a slew of innovative new technologies. With current projections of 100x higher user data rates, 100x more connected devices, a 10x reduction in latency, 1,000x higher data volumes and a perceived network availability of 99.999 percent, there’s a long way to go to achieve these targets. This is why 5G is expected to be a multi-year journey, and not a simple upgrade.
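Some rough arithmetic puts those multipliers in context. The 4G baseline figures below are assumptions for illustration only; the multipliers are the projections quoted above:

    # Applying the quoted multipliers to assumed 4G baselines (illustrative).

    baseline_rate_mbps = 100     # assumed typical 4G user data rate
    baseline_latency_ms = 10     # assumed typical 4G radio latency

    target_rate_mbps = baseline_rate_mbps * 100    # "100x higher user data rates"
    target_latency_ms = baseline_latency_ms / 10   # "10x reduction in latency"

    # 99.999 percent perceived availability leaves very little downtime per year:
    minutes_per_year = 365 * 24 * 60
    downtime_minutes = minutes_per_year * (1 - 0.99999)

    print(f"Target user data rate: {target_rate_mbps / 1000:.0f} Gb/s")
    print(f"Target latency:        {target_latency_ms:.0f} ms")
    print(f"Allowed downtime:      {downtime_minutes:.1f} minutes per year")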
The bandwidth bottleneck
Following the approval of the 5G New Radio (NR) Non-Standalone (NSA) standard in December 2017, mobile operators and vendors can now test standard-based wireless technologies in the field.
However, this only represents the beginning of the 5G upgrade journey. Once a mobile device has communicated wirelessly to a nearby radio, the rest of the journey to the data centre – where accessed content is hosted, and everything in between – continues over a packet-optical wireline network. Thus, to provide the expected 5G performance gains, the entire network, across both the wireless and wireline domains, must be upgraded in harmony.
An opportunity for operators
Investments in wireline infrastructure upgrades will be key to ensuring future success for operators. Market demand for greater bandwidth and lower latency associated with new 5G use cases will accelerate, as more users migrate onto 5G networks. For operators, this means that upgrade strategies should be on today’s agenda, to ensure that these networks are ready to meet future demand as it grows. Furthermore, as operators work to improve their network infrastructure to ensure future profitability and market leadership, they can also seize on the considerable opportunities emerging from demand for advanced 4G technologies such as LTE, LTE-A and LTE-A Pro.
Mobile technologies will continue to play a vital socioeconomic role for the foreseeable future, as they fast become the de facto access ramp to content. However, to realise the required capabilities and achieve sustainable development of profitable networks, operators will need to adopt open and scalable network designs, allowing them to select the best technologies based on their target markets and supported use cases.
SDN and NFV technologies, which will facilitate network slicing and other new features, will also have a prominent role in future 5G networks, enabling mobile network operators to logically partition and virtualise network resources seamlessly across both the wireless and wireline network, from Layer 0 to Layer 3. By implementing an adaptive network architecture, operators will be able to support various new 5G use cases, each with unique end-to-end performance characteristics, whilst providing the scale and automation required to deliver a superior end-user experience.
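Conceptually, a network slice is just a logical partition of shared infrastructure with its own end-to-end targets. The sketch below uses the standard 5G service categories, with illustrative figures rather than any operator’s actual slicing model:

    # Conceptual sketch of network slicing: one physical network, logically
    # partitioned into slices with distinct performance targets.

    slices = {
        "eMBB (video, AR/VR)":         {"peak_rate_mbps": 10_000, "max_latency_ms": 10},
        "URLLC (autonomous vehicles)": {"peak_rate_mbps": 50,     "max_latency_ms": 1},
        "mMTC (massive IoT)":          {"peak_rate_mbps": 1,      "max_latency_ms": 100},
    }

    def meets_target(slice_name: str, measured_latency_ms: float) -> bool:
        """Check whether a slice is honouring its latency target."""
        return measured_latency_ms <= slices[slice_name]["max_latency_ms"]

    print(meets_target("URLLC (autonomous vehicles)", 0.8))   # True
    print(meets_target("URLLC (autonomous vehicles)", 4.0))   # False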
The way forward
Operators are still focused on seizing 4G opportunities, as expansion continues across LTE, LTE-A, and LTE-A Pro applications, and will be for years to come. We expect both 4G and 5G technologies to coexist for the foreseeable future and as such, the key to remaining competitive in the emerging 5G market is to strategically leverage 4G services while upgrading the end-to-end network to support 5G.
For the majority of operators, the journey to 5G networks will take years and considerable investment. This is why it is so important to start developing an infrastructure upgrade strategy now.
A recent article by Steve Gillaspy of Intel outlined many of the challenges faced by those responsible for designing, operating and sustaining the IT and physical support infrastructure found in today's data centers. This paper targets four of the five macro trends discussed by Gillaspy, examining how they influence the decision-making processes of data center managers and the role that power infrastructure plays in mitigating their effects.
‘Agility’ has been presented as a cure for many corporate ills, including overcoming organisational rigidity.
By Fleur Bamber, Director, Agile Management, CA Technologies.
Many ‘rigid’ organisations continue to use 20th century systems that are top-down command-and-control, with burdensome processes of governance and tooling that reinforce compliance over creativity and enablement. Some businesses also begin to ‘do Agile’ and ‘embrace agility’ without a clear consensus on the true definition of this approach, or how it can help a company succeed in today’s fast-paced world.
Not another buzzword?
Let us face it. Agility has become a buzzword, following the likes of ‘synergy’. I, for one, do not mind whether ‘Agile’ is referred to in a capital-A way. What I do care about is applying lean concepts such as ‘inspect and adapt’, ‘limit work-in-process’ and ‘eliminate waste’. I care about better leveraging today’s best practices in product development to accelerate market responsiveness, by increasing something I call ‘organisational metabolism’.
What is organisational metabolism? In nature, it is the chemical process that occurs in living organisms by which they convert food into energy whilst eliminating waste. In business, this translates to the system that converts assets – including money, people, and time – into customer value, ideally while minimising senseless cost (e.g. bureaucracy).
Working smarter
Existing practices of simply working harder than the competition will not cut it. We face a new reality in which customers expect continuous updates and innovation in the products and services they use – particularly anything digitally enabled.
In order to keep pace and push ahead of the pack, organisations must shed their sluggish metabolism in favour of fleet-footed 21st century business, to deliver value fast. They can do this by developing the people, processes and tools that increase metabolism within their companies. Overlooking this new way of working will see companies miss market opportunities whilst competitors race ahead, leaving customers free to explore alternative solutions.
For organisations keen to embrace a new way of working, here are five key steps to consider that will help inspire a truly healthy, agile transformation:
1. Maintain a clear and aspirational vision
Having clearly-defined outcomes – both for your business and for your company’s agile transformation – is vital, because an agile mentality means being comfortable with change. While tasks and tactics will shift, the ultimate vision should remain intact. Every company is different, but each needs a vivid picture of its overall purpose and desired outcome. Ask yourself: what is the outcome I care about, and what am I actually asking people to go and do? A clear vision will help guide the practices that need to be implemented as the external market and internal operations fluctuate.
2. Consider culture
To attract and retain the best people, you must create an environment in which they can thrive. A decent environment is one in which the employee population is motivated, empowered and follows best practices that deliver value. Leaders should consider: How am I helping my front line? They are the listening posts – the people engaging with customers. Empowering the front line makes it easier for organisations to sense and adapt to change. ‘Top-down’ organisations are slowly becoming the dinosaurs, and organisations that adopt cultural change that empowers the front line and provides productive feedback loops will be the ones that succeed. Moreover, leaders must lead by example. Too often, leaders expect their employees to implement agile practices without changing their own behaviour.
3. Rethink processes
Many business processes and systems used today were created decades ago to automate standard business processes – processes that were themselves adaptations of high-volume manufacturing of physical items. As companies improve their organisational metabolism, leaders must rethink any processes and systems carried forward from the 20th century that may not support the faster responses required today. Today’s workplace needs not only processes for governance and compliance, but also processes for people and enablement – acknowledging the way people live and work today.
4. Use the best tools
The right tools can amplify leadership and vision, removing friction that all-too-often hinders transformation. Tools reinforce good processes and behaviours, quantify desired outcomes and support the team in making the correct adjustments. Even the most millennial company in the world suffers slow metabolism if it uses antiquated tools and systems. Whether a software development platform, internal employee time-management system, or new collaboration app, modern tools should reduce overall workload, not create more of it.
5. Be open to failure – learn and adapt
Employees should not feel like they will lose their jobs if they take a smart risk and fail. This should be obvious, but actually is not for most companies. Some level of failure is inevitable when the culture, processes and structures of an organisation are fundamentally changing. This is part and parcel of innovation. Calculated risks should be encouraged in modern companies. The important thing is to learn from any mistakes and become smarter. ‘Testing and learning’, not ‘perfection’, should be the new mantra.
21st century business
Increasing organisational metabolism requires enough discipline to examine the entire system, much the way a professional athlete continuously hones what they eat, how they train, the goals they set and the attitude they bring in order to become world class. Athletes do not ‘run in place’ – they continuously seek improvements and actively train, implementing practices that will see them first past the finish line. Similarly, companies must constantly strive to improve. Transformation is a challenging yet vital step for any company wishing to become, and remain, a success in today’s market. While it is not always easy, when companies, teams and individuals put in the effort, they will be in a far better position to compete and deliver for customers.