70% say cost savings are the key driver for digital transformation, followed by increasing productivity (59%) and increasing profitability (58%).
Ensono and the Cloud Industry Forum (CIF) have released results from their latest digital transformation study, revealing that while cost saving was the most common driver for digital transformation, success metrics do not match. The research asked 250 business and IT decision makers in enterprise organisations across multiple vertical sectors about their digital transformation experiences. When asked how they are measuring success, only 51% said they were measuring cost savings, despite 70% saying this was a key driver. The most cited KPI was customer satisfaction, with 52% saying they were measuring this versus just 40% saying it was a driver.
The top three drivers for digital transformation were cost savings (70%), increasing productivity (59%) and increasing profitability (58%). Drivers lower down the list included customer experience (40%), competing with industry disruptors (35%), differentiation (35%) and speeding up time to market (33%). Looking at sectors, manufacturing and IT interestingly see cost savings as much less important (55% and 50% respectively), with more of a focus on productivity and customer experience. Utilities saw cost savings as the biggest driver, with 89% responding this way.
Simon Ratcliffe, Principal Consultant at Ensono said: “Digital transformation is fundamentally about business transformation. It is about seeing change – facilitated by technology and hybrid IT – as a revenue generator rather than a cost reduction function. Primarily, it needs to be seen as an opportunity for growth. Growth through innovation and the delivery of the best service, product and experience to customers and through finding new and quicker routes to market. The focus on cost savings is outdated and will negate transformation efforts, limiting its scope and impact. This could ultimately have longer-term implications for the business in the digital era.”
Almost all (99%) intend to measure success in some way, and the majority said that the current value achieved was either in line with expectations or, in almost half of cases (48%), higher than expected. Business decision makers are even more positive, with 65% stating that digital transformation is delivering better results than they had anticipated, though just 32% of IT decision makers think the same.
“It is promising to see that nearly all organisations are measuring the success of their projects. However, the objectives and KPIs do not align, indicating a bigger problem. Either the strategy is not tied down and organisations are ‘doing’ digital transformation for the sake of it, or the strategy is not being communicated adequately. Whatever the root cause, for digital transformation strategies to succeed, the IT department, the business and the board need to have a clear and shared vision, and that vision needs to focus on people first, with the right technology facilitating,” Ratcliffe added.
Alex Hilton, CEO, CIF said: “When the Cloud Industry Forum first started researching digital transformation two years ago, it was a relatively minor concern, with only 51% having started the process. In this latest study, 100% of the organisations we spoke to reported that they were pursuing digital transformation and some 16% stated that they had already completed it. Great strides are being made, but organisations need to ensure they have the right vision, objectives and the appropriate measurements in place to ensure it delivers to the business.”
A recent OnePoll survey for Ultima has found that more than half (57 per cent) of the UK’s SMEs fear big businesses’ use of robotic process automation will help to drive them out of business in the next five years, yet 10 per cent believe that automating repetitive, time-consuming tasks is not important to their success.
However, the survey found that two-thirds of businesses want to use robotic process automation. Sixty-five per cent of companies reported that they either plan to or already automate repetitive, time-consuming tasks. The financial services sector leads the charge, where more than 80 per cent of companies either plan to or already automate at least some business processes.
Scott Dodds, CEO, Ultima, says: “Using robotic process automation (RPA) will give SMEs competitive advantage and if they don’t embrace it they will be left behind. It’s not surprising 57% of UK SMEs fear big business use of it will put them out of business, as until now the technology has been out of the reach of SMEs and was only available to the large enterprises that could afford it. But this has changed; we can now offer RPA to SMEs at affordable prices through our partnership with Thoughtonomy, one of the world’s leading intelligent automation companies.”
The survey found that 77% of respondents want to use RPA to automate mundane, transactional tasks, with 56% saying freeing up staff time to focus on more strategic work was a key driver for using RPA.
“Importantly, RPA doesn’t necessarily mean job losses. McKinsey’s research has shown clearly that employees welcomed the technology because they hated the mundane tasks that the machines now do, and it relieved them of the rising pressure of work. We call our software robots ‘Virtual Workers’ as they are there to work alongside humans to do the work they don’t need or want to do. They allow SMEs to free up their staff to spend more time on strategic and creative projects that will give them a competitive advantage, while also improving productivity. In the longer term, as Professor Leslie Willcocks at the LSE says, ‘it will mean people will have more interesting work’,” says Dodds.
UK businesses need clear direction to keep pace with the digital economy in 2018.
A survey commissioned by Interoute, the global cloud and network provider, has revealed that 51% of UK IT leaders are struggling to secure boardroom consensus for achieving digital transformation objectives. The UK result is significantly higher than the European average of 41%, and higher than in any of the eight other countries surveyed.
“Without a clear route forward agreed by the C-Suite, many IT professionals will find it difficult to progress critical digital transformation projects, risking UK businesses being left behind,” explained Mark Lewis, EVP products and development at Interoute. “With Brexit on the horizon, the UK faces unprecedented change and uncertainty in the market and it’s fair to assume this is impacting decision making at the highest level. But it’s never been more important for UK businesses to haul their IT talent out of running just the day-to-day business systems, and into creating the business processes and customer experiences that will make their products and services outstanding.”
“It is vital to get buy-in at the highest levels of any organisation to make the changes necessary that will allow businesses to progress and keep pace with the digital economy,” Mark Lewis continued. “IT professionals need the support of the whole business to be able to deliver the digital foundation business needs for future growth.”
When ranking challenges for achieving digital transformation objectives, the 820 European IT decision makers surveyed were on average most concerned about their ability to integrate legacy technologies with cloud enabled applications (55%). This was closely followed by uncertainty around the changing political climate (52%) and a lack of talent/skills available to drive projects (52%).
The research also showed there is a higher cost associated with digital transformation skills, with 61% of UK businesses, 63% of French businesses and 66% of businesses in Denmark expecting to pay at least 20% more for these skills than for those required for other IT projects.
“IT skills are in short supply for businesses across Europe, which is why leadership is needed to avoid being left behind in this new digital world,” Mark Lewis concluded. “To make the most of the opportunities presented by digital transformation, IT professionals must work together with C-Suite executives to act quickly and decisively. With the support, direction and consensus of the C-Suite, digital transformation can be accelerated to the benefit of the entire organisation.”
Study reveals regional disparities in adoption of cloud security: German businesses are almost twice as likely to secure confidential or sensitive information in the cloud (61%) as British (35%), Brazilian (34%) and Japanese (31%) organisations.
Gemalto can reveal that while the vast majority of global companies (95%) have adopted cloud services, there is a wide gap in the level of security precautions applied by companies in different markets. Organisations admitted that on average, only two-fifths (40%) of the data stored in the cloud is secured with encryption and key management solutions.
The findings – part of a Gemalto commissioned Ponemon Institute “2018 Global Cloud Data Security Study” – found that organisations in the UK (35%), Brazil (34%) and Japan (31%) are less cautious than those in Germany (61%) when sharing sensitive and confidential information stored in the cloud with third parties. The study surveyed more than 3,200 IT and IT security practitioners worldwide to gain a better understanding of the key trends in data governance and security practices for cloud-based services.
Germany’s lead in cloud security extends to its application of controls such as encryption and tokenisation. The majority (61%) of German organisations revealed they secure sensitive or confidential information while it is stored in the cloud environment, ahead of the US (51%) and Japan (50%). The level of security applied increases further still when data is sent and received by the business, rising to 67% for Germany, with Japan (62%) and India (61%) the next highest.
Crucially, however, over three quarters (77%) of organisations across the globe recognise the importance of having the ability to implement cryptographic solutions, such as encryption. This is only set to increase, with nine in ten (91%) believing this ability will become more important over the next two years – an increase from 86% last year.
Proactive managed service contract signed for 5 of Daisy’s key UK data centre sites – reduced risk, increased capacity, 19% energy savings.
IT, telecoms and cloud services provider Daisy Group has saved over £115,000 of data centre cooling energy costs during the first few months after signing a 5-year data centre thermal optimisation managed service with EkkoSense. The service has been designed to reduce data centre thermal risk, increase cooling capacity and reduce cooling equipment energy costs across Daisy’s five UK data centres in Aston, Farnborough, Hamilton, Romford and Wapping.
The EkkoSense thermal optimisation managed service combines the company’s innovative Internet of Things (IoT) enabled sensors, SaaS 3D thermal visualisation and monitoring software, and data centre cooling optimisation skills to deliver an improved data centre thermal performance thanks to specialist airflow and cooling optimisation capabilities.
“We knew from past projects that EkkoSense is able to help us reduce our operational exposure to thermal risks. This was a key factor when we decided to extend our engagement to cover our five data centres as part of a managed service,” commented Michael Sheridan, Head of Facilities at Daisy Group. “So far we have secured better than expected data centre cooling energy savings, with £115,000 already secured, and more to come as we continue to benefit from ongoing optimisation. Clearly engaging with EkkoSense across our five sites has proved to be a smart decision for Daisy Group.”
“With our structured managed service approach to thermal optimisation, EkkoSense is providing organisations such as Daisy Group with access to new levels of data centre cooling optimisation and full thermal peace of mind,” added James Kirkwood, Head of Critical Services at EkkoSense. “By applying the real-time, rack-level thermal data gathered by our IoT sensors we can ensure that Daisy is not only protected from potential data centre thermal risks, but that they are also in a position to develop much more proactive data centre power, cooling and space capacity strategies. It’s great that Daisy has already achieved such impressive data centre cooling energy savings through our managed service partnership, and we look forward to securing additional savings and benefits as the project progresses.”
Secure I.T. Environments Ltd has completed its data centre relocation project for Ridgeons, Timber and Builders Merchants.
The project involved the closure of a data centre, originally built by Secure I.T. Environments in 2007, and relocation to new facilities on an existing Ridgeons depot site, following the sale of land in Cambridge. Throughout its lifetime the original site was maintained and serviced by Secure I.T. Environments, giving the company an excellent understanding of the client’s needs.
The new site introduced design challenges due to its smaller size and location; however, given the team’s close relationship with Ridgeons, a design solution was found and the risks associated with planning the transfer of services were mitigated. The project was completed in just 10 weeks.
New developments in server and cabinet technology meant no compromise had to be made on the compute and storage power available to Ridgeons, despite the new data centre site being smaller. The project included new incoming power supply, UPS in N+1 format, energy efficient air conditioning, structured cabling, access control / CCTV, a Novec fire suppression and detection system, 19” cabinets with intelligent power distribution, environmental monitoring, as well as general building works.
Secure I.T. Environments will also relocate the existing generator as part of decommissioning the original site, creating cost savings for Ridgeons. The full decommissioning project will involve taking out all the fire suppression, air conditioning, UPS, cabinets, cabling and power. The team will also decommission the structure, ensuring that it is securely and responsibly recycled, disposing of materials only when needed and in a way that will minimise waste and environmental impact.
“The team at Secure I.T. Environments is by far one of the best I have had the pleasure of working with,” said Nathan Draper, Group IT Manager at Ridgeons. “The decision to move a data centre is not something to be taken lightly and requires great planning if the day to day running of your business relies on those services. Secure I.T. Environments’ approach gave us great confidence that the risks were low, and it has all gone to plan.”
Chris Wellfair, Projects Director at Secure I.T. Environments, added: “The relationship we have with Ridgeons has been a long-standing one that is set to continue. To be selected for their next data centre project ten years after we first started working with them shows the depth of trust they had in us to deliver the project with minimum risk and value for money.”
Hub Network Services (HNS), Bristol, has successfully migrated Internet Service Provider (ISP) Penguin to NGD’s South Wales mega data centre. This follows Cardiff-based Penguin’s requirement to relocate its rack servers to a modern colocation facility which also offered lower costs and a more convenient location.
A critical requirement of the project was avoiding changes to the extensive IP address base managed by Penguin on behalf of its customers, who rely on various Internet-based services including web and email hosting as well as remote support.
Research shows that managed services may be the only chance for growth in the IT industry’s channel; resellers are still switching to this sales model in large numbers, but the sales process and customer relationships are very different, and some of the technology issues need new skills.
Managed services in 2018 will need to deal with a number of issues – some, like security, are external; others, like the changes needed in sales processes and customer engagement, are internal. One of the main pressures will continue to be the availability of skilled resources, both in sales and in the area of security, where GDPR, to be introduced in May 2018, will provide the main impetus for most MSPs to re-analyse their positions.
Research from IT Europa (http://www.iteuropa.com/?q=market-intelligence/managed-service-providers-europe-msps-top-1000) and others shows that there is a continuing race to scale, as the economics of managed services depend on having a large customer base; but at the same time, because of the expertise needed to deliver specific vertical market applications, many are having to build on their strengths and specialise further.
The changing nature of managed services…
The bigger MSPs are growing fastest, says the research, and in Europe the Netherlands has overtaken Germany in numbers of large MSPs. The Netherlands has seen a dramatic acceleration in the number of data centres situated there in recent years. The UK remains the largest individual market, with 36% of Europe’s largest managed services providers. The technology is changing as well: when asked what is on the horizon, MSPs say the Internet of Things (IoT) has started to appear as an MSP solution area.
This latest study of Europe’s managed services providers shows increased consolidation as well as more specialisation by application area. In the 2017 study of the top 1500 MSPs, the listed companies – 112 in number – saw their sales rise by 7.5% year on year. The smaller independents, by contrast, managed lower growth of 5.5%. One reason for the changes has been the rush for scale among managed services companies, with a high rate of mergers among the small players and acquisitions by larger firms. There seems to be no shortage of available funding, either from the industry itself or from venture capital.
These results and a wider discussion on the changing nature of the MSP will be a feature of the Managed Services & Hosting Summit – Europe, at the Novotel Amsterdam City, Amsterdam, on 29th May 2018 (http://www.mshsummit.com/amsterdam/index.php).
This is an invitation-only executive-level conference exploring the business opportunities for the ICT channel around the delivery of Managed Services and Hosting. Topics for discussion will include sales and marketing processes, GDPR, building value in a business with an eye on the mergers and acquisitions market, and skills development to get into those higher margin areas. This is a timely event as the rapid and accelerating change in the way customers wish to purchase, consume and pay for their IT solutions is forcing the channel to completely redefine its role, business models and relationships.
The Managed Services & Hosting Summit is firmly established as the leading managed services event for channel organisations. Now in its eighth year as a UK event, the Managed Services & Hosting Summit Europe is being staged for the second time in Amsterdam and will examine the issues facing Managed Service Providers, hosting companies, channel partners and suppliers as they seek to add value and evolve new business models and relationships.
The Managed Services & Hosting Summit – Europe 2018 features conference session presentations by major industry speakers and a range of breakout sessions exploring in further detail some of the major issues impacting the development of managed services.
The summit will also provide extensive networking time for delegates to meet with potential business partners. The unique mix of high-level presentations, plus the ability to meet, discuss and debate the related business issues with sponsors and peers across the industry, makes this a must-attend event for any senior decision maker in the ICT channel.
The next Data Centre Transformation events, organised by Angel Business Communications in association with DataCentre Solutions, the Data Centre Alliance, The University of Leeds and RISE SICS North, take place on 3 July 2018 at the University of Manchester and 5 July 2018 at the University of Surrey.
For the 2018 events, we’re taking our title literally, so the focus is on each of the three strands of our title: DATA, CENTRE and TRANSFORMATION.
The DATA strand will feature two Workshops on Digital Business and Digital Skills together with a Keynote on Security. Digital transformation is the driving force in the business world right now, and the impact that this is having on the IT function and, crucially, the data centre infrastructure of organisations is something that is, perhaps, not yet fully understood. No doubt this is in part due to the lack of digital skills available in the workplace right now – a problem which, unless addressed urgently, will only continue to grow. As for security, hardly a day goes by without news headlines focusing on the latest high-profile data breach at some public or private organisation. Digital business offers many benefits, but it also introduces further potential security issues that need to be addressed. The Digital Business, Digital Skills and Security sessions at DCT will discuss the many issues that need to be addressed and, hopefully, come up with some helpful solutions.
The CENTRE strand features two Workshops on Energy and Hybrid DC with a Keynote on Connectivity. Energy supply and cost remain a major part of the data centre management piece, and this track will look at the technology innovations that are impacting the supply and use of energy within the data centre. Fewer and fewer organisations have a pure-play in-house data centre estate; most now make use of some kind of colo and/or managed services offerings. Further, the idea of one or a handful of centralised data centres is now being challenged by the emergence of edge computing. So in-house and third-party data centre facilities, combined with a mixture of centralised, regional and very local sites, make for a very new and challenging data centre landscape. As for connectivity – feeds and speeds remain critical for many business applications, and it’s good to know what’s around the corner in this fast-moving world of networks, telecoms and the like.
The TRANSFORMATION strand features Workshops on Automation and The Connected World together with a Keynote on Automation (AI/IoT). IoT, AI, ML, RPA – automation in all its various guises is becoming an increasingly important part of the digital business world. In terms of the data centre, the challenges are twofold. How can these automation technologies best be used to improve the design, day-to-day running, overall management and maintenance of data centre facilities? And how will data centres need to evolve to cope with the increasingly large volumes of applications, data and new-style IT equipment that provide the foundations for this real-time, automated world? Flexibility, agility, security, reliability, resilience, speeds and feeds – they’ve never been so important!
Delegates select two 70 minute workshops to attend and take part in an interactive discussion led by an Industry Chair and featuring panellists - specialists and protagonists - in the subject. The workshops will ensure that delegates not only earn valuable CPD accreditation points but also have an open forum to speak with their peers, academics and leading vendors and suppliers.
There is also a Technical track where our Sponsors will present 15 minute technical sessions on a range of subjects. Keynote presentations in each of the themes together with plenty of networking time to catch up with old friends and make new contacts make this a must-do day in the DC event calendar. Visit the website for more information on this dynamic academic and industry collaborative information exchange.
This expanded and innovative conference programme recognises that data centres do not exist in splendid isolation, but are the foundation of today’s dynamic, digital world. Agility, mobility, scalability, reliability and accessibility are the key drivers for the enterprise as it seeks to ensure the ultimate customer experience. Data centres have a vital role to play in ensuring that the applications and support organisations can connect to their customers seamlessly – wherever and whenever they are being accessed. And that’s why our 2018 Data Centre Transformation events, Manchester and Surrey, will focus on the constantly changing demands being made on the data centre in this new, digital age, concentrating on how the data centre is evolving to meet these challenges.
Angel Business Communications has announced the categories and entry criteria for the 2018 Datacentre Solutions Awards (DCS Awards).
The DCS Awards are designed to reward the product designers, manufacturers, suppliers and providers operating in the data centre arena and are updated each year to reflect this fast-moving industry. The Awards recognise the achievements of vendors and their business partners alike, and this year encompass a wider range of project, facilities and information technology award categories, as well as Individual and Innovation categories, designed to address all the main areas of the datacentre market in Europe.
The DCS Awards categories provide a comprehensive range of options for organisations involved in the IT industry to participate. You are encouraged to make your nominations as soon as possible for the categories where you think you have achieved something outstanding, or where you have a product that stands out from the rest, to be in with a chance of winning one of the coveted crystal trophies.
This year’s DCS Awards continue to focus on the technologies that are the foundation of a traditional data centre, but we’ve also added a new section which focuses on Innovation with particular reference to some of the new and emerging trends and technologies that are changing the face of the data centre industry – automation, open source, the hybrid world and digitalisation. We hope that at least one of these new categories will be relevant to all companies operating in the data centre space.
The editorial staff at Angel Business Communications will validate entries and announce the final short list to be forwarded for voting by the readership of the Digitalisation World stable of publications during April and May. The winners will be announced at a gala evening on 24th May at London’s Grange St Paul’s Hotel.
The 2018 DCS Awards feature 26 categories across five groups. The Project and Product categories are open to end-user implementations and services, and to products and solutions that have been available (i.e. shipping) in Europe before 31st December 2017. Company nominees must have been present in the EMEA market prior to 1st June 2017. Individuals must have been employed in the EMEA region prior to 31st December 2017, and Innovation Award nominees must have been introduced between 1st January and 31st December 2017.
Nomination is free of charge and all entrants can submit up to two supporting documents to enhance their submission. The deadline for entries is 9th March 2018.
Please visit: www.dcsawards.com for rules and entry criteria for each of the following categories:
DCS Project Awards
Data Centre Energy Efficiency Project of the Year
New Design/Build Data Centre Project of the Year
Data Centre Automation and/or Management Project of the Year
Data Centre Consolidation/Upgrade/Refresh Project of the Year
Data Centre Hybrid Infrastructure Project of the Year
DCS Product Awards
Data Centre Power Product of the Year
Data Centre PDU Product of the Year
Data Centre Cooling Product of the Year
Data Centre Facilities Automation and Management Product of the Year
Data Centre Safety, Security & Fire Suppression Product of the Year
Data Centre Physical Connectivity Product of the Year
Data Centre ICT Storage Product of the Year
Data Centre ICT Security Product of the Year
Data Centre ICT Management Product of the Year
Data Centre ICT Networking Product of the Year
DCS Company Awards
Data Centre Hosting/co-location Supplier of the Year
Data Centre Cloud Vendor of the Year
Data Centre Facilities Vendor of the Year
Data Centre ICT Systems Vendor of the Year
Excellence in Data Centre Services Award
DCS Innovation Awards
Data Centre Automation Innovation of the Year
Data Centre IT Digitalisation Innovation of the Year
Hybrid Data Centre Innovation of the Year
Open Source Innovation of the Year
DCS Individual Awards
Data Centre Manager of the Year
Data Centre Engineer of the Year
By Steve Hone CEO, The DCA
This month’s journal theme is ‘service availability and resilience’. The question I would like to pose is, which is more important to today’s consumer? If you walked into a Starbucks and asked this question I am pretty sure most people would say service availability. No one really cares how their skinny latte arrives as long as it does arrive, and no one is interested in the technology that provides free in-store Wi-Fi; they just want to check their email or stream the latest Game of Thrones episode.
Having said that, service availability doesn’t just happen by magic. I could easily imagine that most customers, unless they were weird engineers like myself, wouldn’t bother to sneak a peek backstage to see what really goes on behind the scenes to provide the seamless service they are paying for. If they did, I am confident most would be either amazed or appalled by what they found. Let’s face it, no one likes poor service; it’s not good for one’s business or reputation, which in most cases takes years to build and seconds to destroy. Funnily enough, despite what seems to be an obvious risk to one’s business, organisations respond very differently to this threat, and it is often the larger organisations that seem slower to act, frequently due to internal politics or processes.
Service availability is also not just about how quickly you can respond; it is also about how you respond, and this means understanding all the risks involved. By that I mean both the risk of doing something and the risk to the business of doing nothing at all. Many organisations tend to avoid the path less travelled, thinking it will never happen to them or that lightning will never strike twice. I’m afraid to say this is nothing more than a myth, and if you have any doubts a quick Google search will return a sorry list of businesses that all thought they were invincible and that all have one thing in common: they resisted or feared change.
In business the drive for change is often resisted. There is a clear conflict between the desire to progress and a reluctance to change, which is easy to understand. In the red corner you have the development teams, always looking at how new technologies can drive the business forward; while in the blue corner you have the IT operations team, tasked with maintaining order and a stable infrastructure, where change is often seen as a disaster just waiting to happen, creating a default “NO” response to any new ideas. This may well appear to be the safe option, but now more than ever businesses must recognise that they need to evolve to survive.
I recall Roger Keen, former MD of City Lifeline, once telling me that if you had asked twenty-five years ago who would be the world’s largest music company, the answer would have been EMI, Virgin or Sony. Twenty-five years on, it has turned out to be none of the above. Apple is now the largest music company in the world and, ironically, it’s not even a music company; it’s a technology company.
Please don’t think for one minute that the answer to guaranteed success is just to invest in more and more of the latest cutting-edge technology. You would be missing one vital ingredient, namely “vision”. In the face of rapidly changing times it is the vision organisations have and the agile way they use the technology that ultimately decides who makes it to the finishing line these days.
This is nicely summed up by Charles Darwin’s theory of natural selection: “it is not the strongest or the most intelligent who will survive but those who can best manage change”. If maximum service availability is the ultimate prize in today’s connected world, the winners may NOT be the ones with the best technology but those who adapt best as technology changes.
Thank you again to all the members who contributed articles to this month’s DCA Journal. Next month the theme focuses on ‘Industry trends and innovation’ and the call for papers is already out, so you have until 13th February to contribute and submit your articles.
By Alex Taylor, Anthropology PhD student, University of Cambridge
CNet Training recently welcomed Alex Taylor, an anthropology PhD student from the University of Cambridge, onto its Certified Data Centre Management Professional (CDCMP®) education program. Alex recently researched the practices and discourses of data centres. In this article, he outlines his research in more detail and explains how the education program contributed to his ongoing anthropological exploration of the data centre industry.
Traditionally, anthropologists would travel to a faraway land and live among a group of people so as to learn as much about their culture and ways of life as possible. Today, however, we conduct fieldwork with people in our own culture just as much as those from others. As such, I am currently working alongside people from diverse areas of the data centre industry in order to explore how data centre practices and discourses imaginatively intersect with ideas of security, resilience, disaster and the digital future.
Data centres pervade our lives in ways that many of us probably don’t even realise and we rely on them for even the most mundane activities, from supermarket shopping to satellite navigation. These data infrastructures now underpin such an incredible range of activities and utilities across government, business and society that it is important we begin to pay attention to them.
I have therefore spent this year navigating the linguistic and mechanical wilderness of the data centre industry: its canyons of server cabinet formations, its empty wastelands of white space, its multi-coloured rivers of cables, its valleys of conferences, expos and trade shows, its forests filled with the sound of acronyms and its skies full of twinkling server lights.
While data centres may at first appear without cultural value, just nondescript buildings full of pipes, server cabinets and cooling systems, these buildings are in fact the tips of a vast sociocultural iceberg of ways in which we are imagining and configuring both the present and the future. Beneath their surface, data centres say something important about how we perceive ourselves as a culture at this moment in time and what we think it means to be a ‘digital’ society. Working with data centres, cloud computing companies and industry education specialists such as CNet Training, I am thus approaching data centres as socially expressive artefacts through which cultural consciousness (and unconsciousness) is articulated and communicated.
CNet Training recently provided me with something of a backstage pass to the cloud when they allowed me to audit their CDCMP® data centre program. ‘The cloud’, as it is commonly known, is a very misleading metaphor. Its connotations of ethereality and immateriality obscure the physical reality of this infrastructure and seemingly suggest that your data is some sort of evaporation in a weird internet water cycle. The little existing academic research on data centres typically argues that the industry strives for invisibility and uses the cloud metaphor to further obscure the political reality of data storage. My ethnographic experience so far, however, seems to suggest quite the opposite; that the industry is somewhat stuck behind the marketable but misleading cloud metaphor that really only serves to confuse customers.
Consequently, it seems that a big part of many data centres’ marketing strategies is to raise awareness that the cloud is material by rendering data centres more visible. We are thus finding ourselves increasingly inundated with high-res images of data centres displaying how stable and secure they are. Data centres have in fact become something like technophilic spectacles, with websites and e-magazines constantly showcasing flashy images of these technologically-endowed spaces. The growing popularity of data centre photography – a seemingly emerging genre of photography concerned with photographing the furniture of data centres in ways that make it look exhilarating – fuels the fervour and demand for images of techno-spatial excess. Photos of science fictional datacentrescapes now saturate the industry and the internet, from Kubrickian stills of sterile, spaceship-like interiors full of reflective aisles of alienware server cabinets to titillating glamour shots of pre-action mist systems and, of course, the occasional suggestive close-up of a CRAC unit. One image in particular recurs in data centre advertising campaigns and has quickly become what people imagine when they think of a data centre: the image of an empty aisle flanked by futuristic-looking server cabinets bathed in the blue light of coruscating LEDs.
With increased visibility comes public awareness of the physical machinery that powers the cloud mirage. This new-found physicality brings with it the associations of decay, entropy and, most importantly, vulnerability that are endemic to all things physical. As counterintuitive as it may seem, vulnerability is what data centres need so that they may then sell themselves as the safest, most secure and resilient choice for clients.
The combination of the confusing cloud metaphor with the almost impenetrable, acronym-heavy jargon and the generally inward-looking orientation of the data centre sector effectively blackboxes data centres and cloud computing from industry outsiders. This means that the industry has ended up a very middle-aged-male-dominated industry with a severe lack of young people, despite the fact that it’s one of the fastest growing, most high-tech industries in the UK and expected to continue to sustain extraordinary growth rates as internet usage booms with the proliferation of Internet-of-Things technologies. This also makes data centres ripe territory for conspiracy theories and media interest, which is another reason why they increasingly render themselves hyper-visible through highly publicised marketing campaigns. You often get the feeling, however, that these visual odes to transparency are in actual fact deployed to obscure something else, like the environmental implications of cloud computing or the fact that your data is stored on some company’s hard drives in a building somewhere you’ll never be able to access.
Furthermore, while cloud computing makes it incredibly easy for businesses to get online and access IT resources that once only larger companies could afford, the less-talked-about inverse effect of this is that the cloud also makes it incredibly difficult for businesses to not use the cloud. Consider, for a moment, the importance of this. In a world of near-compulsory online presence, the widespread availability and accessibility of IT resources makes it more work for businesses to get by without using the cloud. The cloud not only has an incredibly normative presence but comes with a strange kind of (non-weather-related) pressure, a kind of enforced conformity to be online. It wouldn’t be surprising if we begin to see resistance to this, with businesses emerging whose USP is simply that they are not cloud-based or don’t have an online presence.
And the current mass exodus into the cloud has seemingly induced a kind of ‘moral panic’ about our increasing societal dependence upon digital technology and, by extension, the resilience, sustainability and security of digital society and the underlying computer ‘grid’ that supports it. Fear of a potential digital disaster in the cloud-based future is not only reflected by cultural artifacts such as TV shows about global blackouts and books about electromagnetic pulse (EMP), but is also present in a number of practices within the data centre industry, from routine Disaster Recovery plans to the construction of EMP-proof data centres underground for the long-term bunkering of data.
With the help of organisations like CNet Training I am thus studying the social and cultural dynamics of data-based digital ‘civilisation’ by analysing the growing importance of data infrastructures. Qualitative anthropological research is participatory in nature and, as such, relies upon the openness of the people, organisations and industries with whom the research is conducted. Every industry has its own vocabularies, culture, practices, structures and spheres of activity and CNet Training’s CDCMP® program acted as a vital window into the complexity of data centre lore. It provided me with a valuable insider’s way to learn the hardcore terms of data centre speak and also with the opportunity to meet people from all levels of the industry, ultimately equipping me with a detailed, in-depth overview of my field-site. Interdisciplinary and inter-industry sharing of information like this, where technical and academically-orientated perspectives and skills meet, helps not only to bridge fragmented education sectors, but to enable rewarding and enriching learning experiences. I would like to sincerely thank the CNet Training team for assisting my research.
For more information go to: http://www.cnet-training.com/
By Dave Harper, Mechanical Engineer, DUNWOODY LLP
Availability and resilience are both terms that often arise when looking at data centres but are not necessarily always fully understood. Resilience refers to a single component of the system required to make a data centre function; this component can be as broad as the site-wide electrical feed or as specific as the extract fans for a single room.
The common levels of resilience and their general meanings are, in ascending order of resilience: N, just enough of the service to serve the full room demand and no more; N+1, enough of the service to serve the full room demand with one minimal maintainable component offline (such as a single room cooler or UPS unit); 2N, two independent systems, either of which is capable of serving the full room demand, located such that a large physical incident (such as someone making a big mistake with a forklift) cannot plausibly affect both; and 2(N+1), two independent systems, either of which is capable of serving the full room demand whilst having one minimal maintainable component offline. These levels can all be read directly from the design of the data centre and, barring shortfalls from the definitions that have been missed in the design, are facts known in advance. Availability, on the other hand, is a very simple metric to measure once a data centre is operating, but is at best an estimate at the design stage. Availability is generally expressed in 9s of uptime, that being the percentage of the time that the actual computing load of the data centre is available to its users, starting from 90%, which is one 9. 90% availability indicates an expectation of approximately 36 days of downtime over a year, whilst 99.99% availability indicates an expectation of approximately one hour of downtime in a year, or more likely four hours of downtime at some point during four years.
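As a worked illustration of the downtime arithmetic above, here is a minimal Python sketch (assuming a 365-day year of 8,760 hours; the function and variable names are illustrative only) that converts an availability percentage into expected downtime per year:

HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def expected_downtime_hours(availability_percent):
    # Expected unavailable hours per year for a given availability percentage
    return HOURS_PER_YEAR * (1 - availability_percent / 100.0)

for availability in (90.0, 99.0, 99.9, 99.99):
    downtime = expected_downtime_hours(availability)
    print(f"{availability}% -> ~{downtime:.1f} hours (~{downtime / 24:.1f} days) per year")

Running this reproduces the figures quoted above: 90% availability corresponds to roughly 876 hours (about 36 days) of downtime per year, while 99.99% corresponds to a little under an hour per year, or around four hours spread over four years.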
There are a number of competing standards that attempt to establish a baseline expectation for the relationship between resilience and availability. The most commonly referenced are the Uptime Institute Tier system, BS 50600, Syska Criticality Levels and the American standard TIA-942. All of these work to a similar concept of four tiers/levels of expected availability/criticality, looking to the weakest link of any given data centre to define its level. The principle is that additional spending in one particular area comes with significantly diminishing returns compared to bringing all aspects of the data centre to the same level. As M&E consultants we can only finalise the HVAC and electrical elements of the design, but must always be aware that those should fit within a system of equivalent resilience levels for telecommunications, site security, staffing, etc.
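The weakest-link principle can be made concrete with the small Python sketch below. It is purely illustrative: the subsystem names and the numeric levels assigned to them are hypothetical and not taken from any of the standards mentioned.

subsystem_levels = {
    "power_distribution": 3,
    "environmental_control": 3,
    "telecoms_cabling": 2,   # the weakest link in this made-up example
    "security_systems": 3,
}

overall_level = min(subsystem_levels.values())
weakest = [name for name, level in subsystem_levels.items() if level == overall_level]
print(f"Overall level: {overall_level} (limited by: {', '.join(weakest)})")

However highly the other subsystems are rated, the facility as a whole can only claim the level of its lowest-rated subsystem, which is why spending heavily on one area in isolation brings diminishing returns.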
Each of the components of the data centre has different resilience requirements and conceptual failure impacts at each of the tiers. BS 50600, for example, is split into Building Construction, Power Distribution, Environmental Control, Telecommunications Cabling Infrastructure, Security Systems, and Management and Operational Information, each of which further subdivides into various components with differing requirements. The major question for any given element is how long, in the case of a failure, it takes to impact the IT load. The answers to that question can then be sorted into two simple categories: those components which, if the failure is identified as soon as it occurs, could be fixed or replaced before the IT load is affected, and those which could not. Of the physical infrastructure components that go into building a new data centre, very few fall into the latter category without resilience. Those that you might intuit would fall into the former category largely do so by not being required under normal operation, such as generators. The critical aspect, easy to miss without an annotated checking process, is that most failures do not actually manifest until the equipment is called upon to operate under genuine load conditions, making the earliest possible point of detection too late to replace the equipment without impacting the IT load unless resilience is in place. Whilst it may once have been possible to survive by propping open the hall doors and putting in some big fans (so long as the failure happened on a cold day), with the significantly higher power densities and the security and control requirements of modern data centres this is no longer possible. If the cooling equipment is no longer operating to handle the load, the time to thermal shutdown is measured in a small number of minutes (and the IT equipment should probably be shut down before that point to avoid permanent damage). This leads to standards establishing what is considered adequate for each category in a broad sense, based on whatever considerations of optimisation the standard writers had in mind for each normal sort of element.
This is all well and good within what has become the standard and common data centre, air cooling within a hall, but it becomes more difficult to apply as one looks to some of the more recent developments in high-density data hall solutions. The major consideration is direct liquid cooling of the servers, where it becomes much more ambiguous where the line is drawn between a component being part of the main system, which requires resilience under the standards, and being part of an individual server, which is less directly considered by the standards. This becomes particularly difficult to judge for purposes of leakage, since the risk is not only of pipework leakage, where, as under air-cooling regimes, the system could be shut down and a redundant system allowed to take over, but of individual server leakage, which is much more challenging to shut down without impacting the data hall.
Whilst there are well-established standards used to link resilience to availability, as data centre methodologies develop and potentially become more diverse, their status as guidelines rather than requirements becomes clearer.
For enquiries: mail@dunwoody.uk.com
The Market Directory is now live
The Public Sector has seen a massive increase in the utilisation of digital services in a drive to streamline and improve the efficiency of both internal operations and online services to its citizens.
This demand has resulted in an increased reliance on the data centres underpinning these services and is driving the need to look seriously at both existing infrastructure and the way services are delivered in the most cost-effective and energy-efficient way. Finding the right products and/or solution partners to work with is often a serious challenge and is reported to represent one of the largest barriers to adoption and change.
The EURECA Market Directory is designed to enable Procurement Departments and ICT specialists within the Public Sector to easily and quickly search for companies and organisations who provide products and services which have been developed specifically for use within the data centre sector. If you are actively seeking business from the public sector and would like your organisation to be listed in the Public-Sector Market Directory, then please Click Here
To find out more about the activities of the trade association and how to become a member click here.
By Gareth Spinner, Director, Noveus Energy
What do we currently expect and what do we currently get?
The resilience of the power networks in the UK is generally very good, unless you are supplied by an overhead line susceptible to extreme weather conditions, live in a remote part of the country or are down the end of a lane with no chance of a second back-up supply. There are a number of places which are known to be what is termed “worst served”.
Back in the 1990s the newly privatised, regulated Distribution Network Operators were denounced for generally poor network performance, both in the duration of breaks in service and in the frequency of events. The penalties they faced forced a big push to improve the analysis of events, to apply critical thinking, to use technology to restore supplies more quickly, and to design or modify networks to reduce the impact on the maximum number of affected customers as quickly as possible.
We now have a highly automated high-voltage network which is close to being able to automatically switch loads to healthy circuits and quickly keep customers supplied with power, with very little human intervention, or at least have engineers observe and check what the automation proposes.
From a resilience perspective, and with overhead lines not being favoured, mostly for aesthetic reasons, most new network is installed underground in public highways, where the biggest risk is someone else working in the road or construction work damaging a cable. There is an inevitability that cables now more than 60 years old will start to fail, but this has not materialised just yet.
Availability is subtly different, as power capacity is naturally constrained by how much capacity is accessible and what load exists at any one time. Load customers have historically enjoyed having a known fixed capacity that they can call upon with very little constraint.
The electricity network circuits and transformers have historically been developed to provide power from a National Grid of very large power stations, distributed across the UK. This is changing due to the growth of renewables, which are disparate and smaller in size, giving rise to other issues on capacity, with generation export at times competing with load to use the circuits. The DNOs are having to evolve into System Operators (SOs), where they have to manage a dynamic situation in real time.
In network history terms most DCs have been connected in very recent times. Owner/Operators have in the main been very diligent in analysing and risk assessing their grid connection. As a matter of course, most DCs will have the standard N+1 grid connection (or better), and will be acutely aware that this means no loss of supply for a single fault. Beyond this there is a level of generation back up. The Distribution Network Operators (DNOs) under their licence have to offer this level of service under the security of supply standards.
It is not the case that resilience receives the same level of scrutiny and many Owner/Operators would not know how to approach a review of how resilient their supply was beyond the point of connection.
Resilience is primarily the ability of the network to resist the effects or impacts of “environmental” conditions or its general degradation due to “wear and tear” or aging. The overall resilience of a service connecting a DC is made up of many aspects and attributes of the constituent parts.
Weather Effects
The weather has a huge potential impact: snow and ice on overhead cables, water ingress into underground cable systems, wind blowing debris into the network and third-party damage are the main causes of disruption to power supplies. The network operators have no control over the weather but can design their networks to avoid the impacts; what challenges them the most is the cost-benefit analysis of the options and assessing what level of disruption to power supplies is reasonable and defensible to the Regulator.
The DNOs do get a dispensation for Extreme Events, storms and floods beyond just a normal windy day, and thus it is the response to extreme weather, and the level of adverse publicity, which dictate how quickly all power supplies are restored.
Aging Effects
The UK electricity network is now over 80 years old. Although most of the older components have been replaced, there are still considerable elements that are over 50 years old, compared with an original design life of 40 years.
It is also the case that factors of safety and “conservatism” have stood the DNOs in good stead. The analysis of network condition through non-intrusive diagnostics has provided better Health Indices (HI) for each major circuit and its components.
Components will age due to everyday use and, in non-normal situations, due to overloading, overheating, repeated operations on switchgear or water ingress into transformers; the list is almost endless. In an ideal world the DNO could predict the residual life in any component and replace or fix it prior to a failure, but as yet there is not enough good data to be able to do this with any great certainty.
The HI work is also evolving and, with Smart Grid and more research and development going into DNO “business as usual”, we can hope for fewer losses of supply due to aging.
Another aspect of the electricity network is the Regulator’s expectation that DNOs make networks more efficient: this means lower losses (saving carbon), higher load factors, circuits and transformers operating closer to their design capacity, and less investment in reinforcement. This could accelerate the aging of some network elements, especially since commercial pressure on procurement (which operates at a European or global level) generally means that equipment is designed and tested tightly to the specifications; there is no evidence as yet on life expectancy, although it is hoped to be at least 40 years!
The UK is witnessing a general warming effect, with more extreme weather happening more frequently. These events may not in the short term have a huge impact on a DC, as the detrimental effects will not cascade to cause too many issues.
Without intervention, and over time, network resilience will degrade as extreme events and higher loads on the networks increase. Higher-density loads would suggest the impact will be greater too.
The DC Owner/Operators have to date been conservative in estimating their load requirements and thus overloading under normal operating conditions is not a contributory factor.
However, the changing landscape of localised renewable energy generation and storage, and the need to balance this at a local level, is driving change in how power networks are controlled and operated, and the move towards System Operations at a local level within licence areas or smaller “islands”.
For this to be worthwhile there will need to be a lot more intelligent technology on the power networks. Smart meters will provide load data at the point of supply in real time, which, with a few algorithms, will give the SO the information needed to decide how to move load and adjust running arrangements. In addition, real-time data on small-scale generation can be matched to the load requirements, and with that some balancing for abnormalities. Time-of-day loads such as EV charging can then be controlled and regulated, if the SO is permitted to do so.
If the SO has also invested in better and more sophisticated monitoring equipment they will be able to assess the load capability of each circuit and transformer and actively manage the load to avoid faults and send resources to fix weaknesses before a fault occurs.
So there is potential for a lot more switching operations on the network, which could increase maintenance and repairs, resulting in equipment off-line and more “holes” in the network, leading to a slightly higher chance of a loss of supply if the alternative circuit or transformer trips.
The relatively large DC loads will potentially be part of these smaller, locally controlled networks, and the DC Owner/Operator can play an active part in balancing loads and generation, with potential for Demand Side Management.
The challenge is: what are the regulatory rules for the new SO? Is a large-load customer such as a DC affected? Does this level of flexibility have financial implications?
A DC business on a single site will be concerned about resilience even though it has little ability to do anything about it beyond reactive standby generation.
A multi-site DC business can determine its strategic view of resilience and how it positions itself, building IT resilience across diverse sites and worrying less about any individual site. This level of flexibility could work in its favour in managing the ever-increasing cost of energy.
In the Smart Network world the multi-site DC could also be an active part of the SO, maximising potential for more efficient network operations and reducing costs.
Provided the SO is mandated to share the information, it will be possible to get this level of granularity on the performance of each and every circuit and transformer, and the DC Owner/Operator will be better placed to take decisions on resilience. The SO could provide diagnostic information on the health of the network at local level and thus influence discussions on how resilience and service availability can be improved. If the SO is able to provide real-time information, there is no need for phone calls to keep the DC Owner/Operator informed, and decisions on energy usage, balancing, generation and so on can be taken accordingly.
The DC has an absolute need for the security of every customer’s data, for resilience so that information is always available on demand, and for back-up plans for the “what if” moments when a problem event occurs; the evolution of the electricity market is still a long way from providing certainty about anything other than staying as it is.
Basing capacity on every consumer having a firm amount available whenever they need it is not going to be a sustainable proposition in the not too distant future; new connections are getting more and more expensive. If all consumers’ requirements are “pooled” at local level, a high degree of diversity could be achieved and, with Smart solutions, electricity could be delivered to where it is needed, when it is needed, but without a firm contracted capacity available 24 hours each and every day. Would this be acceptable to the DC?
The DC operators can take action themselves and consider different solutions. Grid connectivity carries large capacity charges, but the grid connection is generally the prime and reliable source of power, with standby generation as the backup. Under the SO, switching between sources becomes easier and the level of available capacity could even be traded. What then for the DC that has an absolute requirement for power when it is needed? Does the DC strategy change for each site, moving from a backup supply beyond N+1 to prime on-site generation, grid-charge avoidance and even duplicate data storage on disparate sites?
Should the SO not be more discerning about how capacity is actively provided and resilience managed, using real-time condition knowledge to keep the DC informed of network status?
How the DC views generation or storage is a choice. Or is a partnership with the Smart SO the right way ahead? The DC with its own generation could be part of a more socialised community energy scheme and support the regeneration of communities and the ambition for more efficient homes.
For more information go to: https://noveusenergy.com/
By Luca Rozzoni, European Business Development Manager, Chatsworth Products (CPI)
Improving service availability and resilience is a never-ending quest for today’s data centres. However, to make real improvements there must be a change in the traditional approach to rack power distribution and monitoring.
Integrating intelligent products into the data centre’s design is critical and, when selecting rack power distribution units (PDUs) for high-density applications, there are a number of key considerations which can directly affect future levels of service availability and resilience.
First consider the incoming power and the installation of the appropriate input circuit to handle required capacity. Power from the utility to the data centre is typically three-phase. Whilst it is possible to bring either three- or single-phase power to the cabinets, three-phase power allows required power capacity to be delivered at a lower amperage, helping minimise losses and simplify load balancing across all three phases of the incoming power into the data centre.
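As a rough illustration of the lower-amperage point (the figures below are assumed examples, not from the article), the line current needed to deliver the same cabinet load can be compared for a single-phase and a three-phase feed:

```python
import math

def required_current_amps(power_kw, voltage_v, three_phase, power_factor=1.0):
    """Line current needed to deliver a given load.

    Single phase: I = P / (V * PF)
    Three phase:  I = P / (sqrt(3) * V_line * PF)
    """
    power_w = power_kw * 1000
    if three_phase:
        return power_w / (math.sqrt(3) * voltage_v * power_factor)
    return power_w / (voltage_v * power_factor)

# Assumed 11 kW cabinet load:
print(required_current_amps(11, 230, three_phase=False))  # ~47.8 A on a 230 V single-phase feed
print(required_current_amps(11, 400, three_phase=True))   # ~15.9 A per phase on a 400 V three-phase feed
```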
Data centres and cabinets are typically designed and specified prior to deciding which IT equipment will be installed. Selecting an intelligent PDU that provides a good mix of C13 and C19 outlets is advisable, so that it will be able to support a wide range of equipment and densities. In addition, ensuring there is a locking feature on the outlets will help to prevent accidental disconnections of IT equipment.
Intelligent PDUs that draw greater than 20A of current typically have two or more branch circuits protected by an overcurrent protection fuse or breaker. Selecting a breaker over a fuse is highly recommended, as a breaker can easily be reset when tripped. A fuse, in the same circumstance, must be replaced and power will remain out until this has been completed. Since replacement requires the PDU to be turned off until it is serviced by an electrician, the result is a higher Mean Time To Repair (MTTR).
Whilst the breakers can be thermal, magnetic or magnetic-hydraulic, a magnetic-hydraulic breaker is the least susceptible to temperature changes, and will minimise nuisance tripping, making it ideal for high-density applications.
When reviewing branch overcurrent protection, it is also worth considering:
· Slim profile breakers to ensure minimal interference with airflow within the cabinet
· The ability to remotely monitor the status of the circuit breaker or fuse irrespective of the type of PDU selected.
Modern, high-density data centres often increase server inlet temperatures, which translates into higher server exhaust temperatures. This helps to maintain top levels of efficiency and lower energy consumption costs. Many data centres also deploy containment solutions to fully separate hot exhaust air from cooling air to further optimise efficiency.
To ensure that the PDUs operate reliably in these higher temperatures, it is worth selecting a PDU with the highest temperature rating possible. The chosen PDU should also support full load capacity at the rated temperature.
It’s also imperative that the proper environmental conditions and levels are closely monitored and maintained. Fluctuations in air quality, temperature and humidity need to be avoided, while water, dust and other harmful particles can all affect the infrastructure, shorten the operational life of expensive equipment and result in downtime.
Environmental monitoring solutions now have the capacity to help your organisation monitor temperature and humidity, as well as smoke, water and even motion detection. Choosing an intelligent PDU to enable a cabinet level ecosystem and issue proactive notifications or alerts, will help data centre managers ensure service reliability.
Selecting intelligent PDUs with the features mentioned above will help data centre managers ensure service availability and efficiency. Two other underestimated features will also provide significant savings in networking costs and deployment time: IP consolidation and physical security.
Many intelligent PDUs available today have the ability to be arrayed (networked) using one IP address. When selecting a PDU, look for IP consolidation capability that networks the largest number of PDUs into an array, so that the fewest networking ports need to be deployed. There are PDUs on the market today that can sit in a 32-PDU array.
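A simple, assumed calculation shows why this matters for networking cost: arraying PDUs behind shared IP addresses sharply reduces the number of switch ports and addresses to deploy.

```python
# Illustrative figures only, not from the article.
cabinets = 200
pdus_per_cabinet = 2          # A and B feeds
array_size = 32               # PDUs supported per array / IP address

total_pdus = cabinets * pdus_per_cabinet
without_consolidation = total_pdus                  # one switch port and IP per PDU
with_consolidation = -(-total_pdus // array_size)   # ceiling division: one per array

print(f"{total_pdus} PDUs -> {without_consolidation} ports/IPs unconsolidated, "
      f"{with_consolidation} with {array_size}-PDU arrays")
```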
With information quickly becoming the most valuable asset organisations own, the need to push security and authentication to the cabinet level has become more critical. Selecting intelligent PDUs that have built-in integration with electronic access control (EAC), allows the remote management and control of every cabinet access attempt. This also ensures the data centre and IT department is meeting growing compliance regulations such as PCI-DSS, FISMA and HIPAA.
As the PDU represents the first line of defence, integrating intelligent products into a data centre’s design – and ensuring the careful selection of the most appropriate intelligent PDU for such a high-density environment – is key to ensuring the data centre of the future will be able to continue to offer greater service availability, resilience and security to cope with the growing demands being placed on it.
For more information go to: http://www.chatsworth.com/
By Mick Payne, Group Operations and Purchasing Director, Techbuyer
How to leverage existing technology to maximise storage power, availability and resilience.
Data centre managers are faced with the same problems as all IT professionals this new year: how can I get more performance with fewer resources? And how can I insulate my systems against threat, malicious attack or other disaster?
These are thorny issues in the face of very real threat. At the end of last year, three young American men pleaded guilty to charges relating to the Mirai botnet attack that took down internet services firm Dyn, whose clients include Twitter, Reddit and Spotify, and with it internet access for many across the East Coast of the US. The case was huge and proved one thing beyond doubt: some attacks are indefensible.
Distributed Denial of Service (DDoS) attacks like the Mirai botnet attack are just one threat amongst many. Just as dangerous to a data centre is the proverbial tree taking down the power line, for example. The upshot would be the same in both cases: systems go down and data is at risk of being lost. Data centre managers need to mitigate these risks and make sure their facility recovers as soon as possible. They need resilient data storage system solutions that maintain service availability.
Sharing data over multiple servers creates an option to have backups in multiple geographical locations. If one data centre is compromised, the other site provides resilience with replica systems. This solution has been attractive to large corporations where loss of functionality affects critical business functions. However, the cost benefit to small and medium sized businesses is impacted by the price of the software and additional hardware.
On the cusp of 2018, there was the usual slew of industry insiders and journalists suggesting their storage predictions and growth trends for the coming year. High up there was software-defined storage (SDS), which builds resilience and increases compute power in data centre environments by replicating data (whether that be object based storage or file based storage) across multiple servers. SDS used to be seen as an expensive solution, but with the right choice of hardware it becomes much more cost-effective.
SDS technology has been in use for around ten years, predominantly as tier-2 storage, and has been popular with large corporations with deep pockets. The attraction is that it enables systems to increase their functionality without losing speed or power by replicating data over multiple servers, which then operate concurrently. This has obvious benefits when it comes to failovers. It is also great for scalability and flexibility, which is particularly important given the fast rate of technology adoption that exists now and is set to continue over at least the next ten years.
A major brand legacy system complete with software would cost around £300,000 for 1PB of storage, which is a relatively large capital outlay. The choice of white label commodity servers could halve that amount to around £150,000. However, many IT professionals are reluctant to choose lesser known brands because of perceived risk. The ideal scenario is to source branded servers like HP, Dell and Cisco at a reduced price. This is where the refurbished option comes into its own.
Quality refurbished product from companies like Techbuyer comes with free configure-to-order service and a manufacturer-matching global three year warranty. Organisations large and small are increasingly seeing the value of this in terms of cost savings and competitive advantage. The trend is helped by tech giants using refurbished product in their data centres.
Google released a report in 2016 called “Circular Economy at work in Google data centres”. In it, the company stated that 19% of its servers were remanufactured machines, 75% of components consumed in the spares program and 52% in the machine upgrades program were refurbished inventory during 2015. The company has since publicly stated that this is saving them hundreds of millions of dollars a year.
With companies of this calibre subscribing to refurbishment, the sector is going from strength to strength. Techbuyer’s experience is a steady year on year growth since it began specialising in refurbished stock 13 years ago. 2017 was the best year yet. The company’s revenue rose from £18.2m to £27.4m, the workforce doubled from 50 to 102 and it increased the number of products available from 150,000 to 225,000. The company also opened two new offices: one in Germany and a second site in the US.
A lot of the success is down to good management and a great team of people who pride themselves on the quality of product and service. But much as though we would like that to be the whole reason for success, it isn’t. Techbuyer is successful because refurbished product is the smartest solution for data storage needs. More and more public-sector bodies, small, medium and large companies are waking up to this fact and our business is growing as a result.
There has been a vibrant resale market for many years. Product age varies from just out of the box component parts to systems that have been bought in from large organisations carrying out upgrades. A model that is two years old is a fraction of the cost of the latest generation from the factory and yet offers comparable functionality. Just as many individuals are reluctant to buy a car straight off the production line, many companies no longer see the value in buying the latest generation data storage equipment. All the more so since Techbuyer offers to buy customers’ old equipment at the same time.
The decision to choose refurbished equipment has the added benefit of being more environmentally sustainable than buying new equipment and scrapping old. With data storage under ever closer public scrutiny on its use of resources, this is an attractive add-on for many organisations. Data storage giants Microsoft, Facebook and Amazon have been lauded for their strides towards sustainable energy sources in their data centres. Bodies like the DCA have been taking a proactive role in leading the discussion on making best use of resources.
The choice of refurbished data storage equipment feeds into this movement towards better use of resources. It is worth remembering that the average desktop computer consumes ten times its weight in fossil fuels. The production of one gram of microchips consumes 630 grams of fossil fuels. Although this pales into insignificance in comparison to the energy used to power data centres, it is not an insignificant amount. This, as much as cost saving, is what is driving Google and others to increase their use of refurbished parts. Savings on the bottom line are good for the planet too.
For more information go to: www.techbuyer.com
I’m often asked if OpenStack can fit into High Performance Computing strategy and if so where. To answer that question, I would like to start by stepping back a bit and focus on what our customers are trying to achieve rather than the technology.
By Andrew Dean, HPC Business Development Manager at HPC, Storage and Data Analytics Integrator, OCF.
For the majority of our customers it is to realise the highest research computing throughput for their investment (to get a good return on investment) and as short as possible ‘time to science’. Most of our users on the systems we supply are scientists, researchers (chemists, bioinformaticians, physicists) and engineers. We understand that their business isn’t in computing, computing is ‘just’ a tool like any other.
To achieve a high research computing throughput and a short time to science, there are a few things you need to think about. Sufficient compute capability – this could be on site or in the cloud, but somewhere there must be adequate compute/storage/networking to crunch the numbers.
High utilisation – when you’ve got access to compute capacity, now you need to use all of it, as much of the time as possible. With more traditional HPC user groups such as physics, chemistry and engineering this has been relatively easy to achieve with well understood applications and excellent schedulers.
A traditional HPC cluster with a fixed software stack (OS/Scheduler/Libraries etc) manages to keep the majority of users happy most of the time; these kinds of environments (assuming sufficient workload) are often achieving utilisation in the 90 percent range, which is great as it shows these customers are making the most of their significant investment. For these users, ‘time to science’ is pretty good too. Once the service is up and running it remains a stable resource with software updated during planned downtime periods over its 3-5 year life span.
Now, I said the majority of users are happy most of the time – who are the minority that aren’t?
There are many use cases, but some examples could include users that need Ubuntu (an open source operating system) when the traditional cluster is running on RedHat. Users may have a commercial application that is only supported on one scheduler (such as Grid Engine – a commercially supported open-source batch-queuing system for distributed resource management) while the cluster is running another (such as SLURM – an open-source job scheduler for Linux and Unix-like kernels). Other issues include needing a feature in the very latest version of an application when the main system is running an older, more stable version, or needing a Hadoop cluster for a few months when the organisation doesn’t have such a service available.
This is where OpenStack fits in, building a flexible service to meet the demand for ‘edge cases’ that are, in most cases, currently being met by poorly utilised dedicated hardware – either user- or group-owned workstations or rack servers. If you would like to have a chat about your infrastructure or are considering OpenStack, please get in touch.
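As a hedged illustration of how such an edge case might be met on demand (this is not OCF’s implementation, and the cloud, image, flavor and network names are assumptions), the openstacksdk Python library can spin up an Ubuntu instance alongside the main cluster:

```python
import openstack

# Credentials are read from clouds.yaml; "research-cloud" is an assumed cloud name.
conn = openstack.connect(cloud="research-cloud")

image = conn.compute.find_image("ubuntu-22.04")      # assumed image name
flavor = conn.compute.find_flavor("m1.large")        # assumed flavor
network = conn.network.find_network("project-net")   # assumed project network

# Provision a short-lived Ubuntu instance for a workload the RedHat/SLURM cluster can't serve.
server = conn.compute.create_server(
    name="edge-case-ubuntu",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
```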
There’s no denying that there are potential advantages to choosing cloud storage for your business. However, not every cloud has a silver lining. In this article, Stephen Parker, founder and CEO of live chat provider Parker Software, dispels some common misconceptions about cloud-based storage and what disadvantages it poses when compared to a traditional on-premises model.
Cloud-based storage is often portrayed as one of the most cost-effective hosting models. However, that’s not always true. In fact, compared to traditional, on-premises storage, 48 per cent of IT decision makers think that enterprise cloud applications and services are too expensive.
One of the common issues with cloud-based storage is the high cost of data centre outages. Businesses are currently losing $700 billion a year to IT downtime, and data centre outages can cost businesses almost $9,000 a minute. With average businesses experiencing five downtime events every single month – an average of 27 hours of downtime – it’s no surprise that there is a certain level of dissatisfaction with the cloud. One third of organisations have experienced high costs and poor value with cloud services. And yet, the perception that cloud is cheaper persists. Generally speaking, the initial set-up cost for cloud-based storage models is low, but problems following installation can send the associated price sky-rocketing.
Cloud storage providers are quick to defend their platforms against customer concerns for security. Despite their efforts to promote the cloud as a secure option, just 13 per cent of organisations admit to completely trusting public cloud providers to keep their data secure.
One of the biggest challenges for cyber-security is determining the safekeeping responsibilities of the client and the cloud provider. 34 per cent of IT professionals believe that senior management teams fully understand cloud security risks — but it’s dangerous for them to assume a business is fully in-the-know about the associated security dangers. Statistics show that 43 per cent of organisations do not use encryption or anti-malware in their private cloud servers and 40 per cent fail to protect files located on SaaS with any data loss prevention techniques.
Under the right conditions, the cloud can be a secure option for data storage, yet 76 per cent of IT decision makers currently highlight security as their primary concern about cloud-based services.
As the lifeblood of every modern organisation, IT is a business enabler. While effective IT boosts productivity and your bottom line, unplanned downtime puts your business’s profitability and reputation at stake, as proven by the IT outage at British Airways. A power surge at one of BA’s data centres resulted in a communication breakdown across all of its systems, wiping over £170 million off the market value of British Airways in addition to a compensation bill of up to £150 million.
By Sneha Paul, product consultant at ManageEngine.
One recent study found that most UK firms experience an average of 43 hours of downtime per year, costing roughly £12.3 billion in lost revenue. Further to this, search engines downgrade a page's ranking if the site is slow or provides poor end-user experience. This is why it’s important to have resilient IT services and to consistently monitor the uptime of your applications, data availability, and website. The following methodologies can help you establish that resilience and keep your IT up and running.
1) Bolster continuity with website performance analysis
Another report states that the top 50 retailers in the UK incur £1 billion in lost revenue because of poor website experience. A website’s performance metrics directly correlate with a company's KPIs, such as conversion rate. Additionally, errors such as “page not found” not only drive customers elsewhere, but also squander investments in paid search and online marketing.
With a website monitoring tool, businesses can proactively audit performance benchmarks such as website response time split by DNS lookup time, connection time, first byte download time and individual page element load time. It is also important to monitor uptime rate, availability and redirection time, and to check the content on your site, setting alerts to notify you whenever specified keywords are missing or there are any hacking attempts.
Perform these checks as often as once per minute to ensure that you stay ahead of any performance bottlenecks. Performing each check from multiple geographic locations simultaneously will ensure that end users in all geographies are getting the best possible user experience.
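For illustration, a minimal check along these lines could be scripted with the pycurl library, timing the DNS lookup, connection, first-byte and total download phases and confirming a keyword is present; the URL and keyword below are assumptions, and a production tool would run such checks from multiple locations and raise alerts through the monitoring platform.

```python
import pycurl
from io import BytesIO

def check_page(url, keyword):
    """Fetch a page and report the timing split plus a keyword/content check."""
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buf)
    c.setopt(pycurl.FOLLOWLOCATION, True)
    c.perform()
    report = {
        "status": c.getinfo(pycurl.RESPONSE_CODE),
        "dns_lookup_s": c.getinfo(pycurl.NAMELOOKUP_TIME),
        "connect_s": c.getinfo(pycurl.CONNECT_TIME),
        "first_byte_s": c.getinfo(pycurl.STARTTRANSFER_TIME),
        "total_s": c.getinfo(pycurl.TOTAL_TIME),
    }
    c.close()
    body = buf.getvalue().decode("utf-8", errors="replace")
    report["keyword_present"] = keyword in body
    return report

# Assumed example: flag the check if the keyword is missing or the page is slow.
result = check_page("https://www.example.com", "Welcome")
if not result["keyword_present"] or result["total_s"] > 2.0:
    print(f"ALERT: {result}")
```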
2) Avoid network outages with round-the-clock vigilance
Network infrastructure is crucial for any business. Around 72 percent of firms in the UK experience up to eight network outages annually, which extrapolates to £521 per employee in terms of lost productivity.
Network surveillance software helps businesses monitor the response time, packet loss, and other performance metrics of network devices such as physical and virtual servers, routers, firewalls, and switches. This helps to keep tabs on the bandwidth consumption of your users and applications. A single alarm console raises issues related to CPU usage, buffer hit rates, UPS battery status, network availability, wireless signal strength, and other critical performance metrics.
With round-the-clock vigilance, you can be notified about any anomaly instantly via email or text. The software even gives you a root cause analysis report, helping to troubleshoot the issue faster so you can avert a major disaster before it even raises its ugly head.
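A minimal sketch of this kind of check (an assumed example, not a specific vendor tool) pings each device, parses the reported packet loss and flags anything above a threshold for alerting:

```python
import re
import subprocess

DEVICES = ["10.0.0.1", "10.0.0.2"]   # assumed routers, switches, firewalls to watch
LOSS_THRESHOLD = 20                   # alert above 20% packet loss

def packet_loss(host, count=5):
    """Run ping and return the percentage packet loss reported (Linux ping output)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", "2", host],
        capture_output=True, text=True,
    )
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    return float(match.group(1)) if match else 100.0

for device in DEVICES:
    loss = packet_loss(device)
    if loss > LOSS_THRESHOLD:
        # In a real deployment this would raise an email/SMS alert via the monitoring platform.
        print(f"ALERT: {device} showing {loss:.0f}% packet loss")
```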
3) Audit application performance KPIs to avoid blackouts
Applications, whether customer-facing or supporting in-house operations, drive the functioning of an enterprise. End users have little tolerance for dysfunctional, slow, or broken applications, as proven by the frenzy that occurred when the popular instant messaging application WhatsApp suffered a global outage.
Using application performance monitoring (APM) software, you can assess the key performance indicators of your applications, such as memory utilisation, database connection time, resource availability and throughput. You can also break down and analyse application performance based on browser, geography, ISP and other parameters. This gives a perspective based on the end-user experience, thereby helping to improve customer service.
APM also puts predictive analytics into practice by alerting you to any ensuing performance deterioration before it manifests as a full-blown issue. To top it all off, you can automate the process of resolving the issue right away through an automatic reboot or the deployment of corrective scripts or programs.
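As a simplified, assumed example of that last step, the sketch below runs a health check against an application endpoint and triggers a corrective script (here a service restart) when the check fails or responds too slowly; the URL, threshold and service name are illustrative, not from the article.

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # assumed application endpoint
MAX_RESPONSE_SECONDS = 2.0
SERVICE_NAME = "myapp"                         # assumed systemd unit

def healthy():
    """Return True if the endpoint answers 200 within the response-time budget."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        return False
    return ok and (time.monotonic() - start) <= MAX_RESPONSE_SECONDS

if not healthy():
    # Corrective action deployed automatically instead of waiting for an engineer.
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=False)
```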
4) Increase reliability with cloud infrastructure monitoring
The adoption of cloud computing in the UK has increased by 83 percent since 2010, according to research by Cloud Industry Forum. The most widely used cloud service is web hosting, which is all the more reason to have unified visibility of your cloud infrastructure.
With cloud monitoring, businesses gain detailed insight into the health and performance of cloud solutions. You can analyse your virtual machines' workload as well as audit the resource utilisation of your virtual data center. You can also baseline this analysis data, so if there is any disk capacity shortage or a memory leak you instantly receive alerts to help contain any disruptions.
Additionally, leveraging the SLA management capabilities of cloud monitoring tools helps ensure that your service provider consistently delivers on its service commitments. Having consolidated visibility into your cloud network ensures greater reliability and optimal performance.
Embrace the smarter solution
In this era of digitalisation, accessibility is promised anywhere and anytime. That means downtime and outages can adversely affect your SLAs. Implementing a robust and efficient framework to keep an eye on your IT environment eliminates operational disruptions that put your brand value at stake. While deploying a software suite that implements the above functionalities can take months, you can achieve the same results in a shorter timeframe with the right tool. The key lies in finding that single comprehensive solution that realises your goal of ensuring 24/7 business continuity.
The last year has been a significant 12 months in the short history of cyber security, with headline security breaches such as Uber and a scramble to come up with new approaches, particularly as the European Union’s General Data Protection Regulation comes into force next May.
By Greg Sim, CEO, Glasswall Solutions.
2018 will see further developments in this dynamic field that will affect almost every organisation on the planet. Here are some predictions for the next 12 months:
1. Innovation will help overcome the continuing cyber security talent drought
The severe shortage of cyber security professionals will continue to hamper businesses trying to protect themselves. The shortfall in qualified staff is predicted to rise to 1.8 million over the next five years, and we know that two-thirds of companies struggle to recruit staff with sufficient expertise to combat attacks from highly sophisticated hacking groups.
In the absence of sufficient talent, the immediate imperative for businesses is to adopt more innovative security technology that will give them the maximum protection available.
Emails remain the single biggest source of infiltration by criminal malware and a technology such as file-regeneration offers immediate protection without requiring a roster of in-house personnel who are experts in security analysis and investigations.
The good news is that more universities are taking cyber security much more seriously as a subject for study and in the UK the government has announced a £20 million investment in the cyber curriculum for secondary schools.
2. Automation will continue to transform cyber security
It is increasingly recognised that responses to security breaches and other incidents are badly slowed down by manual processes.
As a result it is inevitable that security operations workflows will increasingly be supported within Security Information and Event Management tools and incident response (IR) platforms. We can expect to see hefty resources devoted to IR automation in particular. This will involve, for example, blocking malicious IP addresses, web domains, and URLs, using threat intelligence.
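A minimal sketch of that workflow (an assumed example; the feed URL is a placeholder and, in practice, the step would be orchestrated by the SIEM or IR platform) might pull a threat-intelligence list of malicious IP addresses and block them at the host firewall:

```python
import subprocess
import urllib.request

FEED_URL = "https://example.com/threat-feed/bad-ips.txt"   # assumed feed, one IP per line

with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
    bad_ips = [line.strip() for line in resp.read().decode().splitlines() if line.strip()]

for ip in bad_ips:
    # Drop inbound traffic from each flagged address using standard iptables syntax.
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=False)

print(f"Blocked {len(bad_ips)} addresses from the threat feed")
```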
An organisation could orchestrate the workflow associated with a security investigation or patching a software vulnerability, but in 2018 we are more likely to see large organisations automating security analytics and operations, largely because security involves so many mundane tasks, whereas orchestration is complex.
Automation offers immediate gains across cyber security. With emails, for example, advanced solutions can automate the minute examination of every attachment against the manufacturer’s standard so that only a sanitised document, free of malware, is admitted to an organisation’s system. Decisions on whether to click open an attachment are no longer left to the harassed employee.
3. The growth of the IoT will necessitate further re-thinking of security
The Internet of Things (IoT) extends the security border of an organisation way beyond its physical boundaries. Consider how many internet-enabled devices are part of an electricity grid.
Smartphones, tablets and the new generation of electronics that users can control externally, such as refrigerators, home security systems and even home heating systems are also part of the IoT and vulnerable to compromise. By 2020 we could be looking at a trillion connected devices in the world.
The successful attack on the San Francisco MUNI transport system in 2016 is a prime example of just how vulnerable an organisation reliant on multiple internet-connected devices can be to hackers demanding a ransom to release encrypted data.
An assault on the core infrastructure of the internet could have massive effect, particularly if it is linked to terrorism. The best defence is to keep malicious code out of an organisation’s network in the first place, rather than relying on outdated anti-virus defences, which as is widely known, can never pick up the kinds of malware criminals are devising every hour of the day.
4. Blockchain will be no cyber security panacea
It is tempting to think that blockchain perfectly complements internal security layers as part of a defence-in-depth approach. Implementations are starting to address blockchain’s data confidentiality and access control challenges by providing ready-made data encryption and authentication and authorisation capabilities.
But blockchain provides little utility in threat-detection or active defence, so organisations throughout 2018 will find they need other more proven and tested forms of technological innovation to protect them from hackers and the millions of different malware variants they are throwing at businesses every year.
This has to go alongside an overall cyber security programme that includes a governance framework covering roles, processes, accountability measures, performance metrics, and a change in mindset within the entire organisation.
5. State-sponsored hacking will force organisations to update cyber defences
There’s no question that state-sponsored or arms-length hacking groups are on the increase and have abundant resources in terms of time and talent. The finger is now pointing almost non-stop at Russia, China and North Korea, while Iran and Israel have joined the list of states widely suspected of dubious cyber activity. The devastating attacks on the Ukrainian power network last year were a vivid demonstration of the way state-backed hackers have disruption of national infrastructure as a target.
Intense international rivalry and instability in many regions of the world make it inevitable that cyberwarfare attacks will continue in 2018. State-resourced groups will continue to target service-providers as a backdoor to enterprise-level targets, moving sideways inside and between organisations while leaving little or no trace.
Organisations must employ far more advanced technology to protect themselves from the most common method used by the hacking groups – adapted email attachments that hide zero-day attack triggers. Relying on traditional anti-virus techniques in 2018 could be a critical error, given the sophistication and resources available to state-backed hackers.
6. GDPR will wake everyone up to security requirements
Although the rush to achieve GDPR compliance is already underway, many businesses are going to be caught out as they fail to grasp their responsibilities to EU citizens whose personally identifiable data they hold.
Legal challenges about the way data is handled are likely to proliferate, with fines, substantial costs and public exposure inevitable. It is likely, however, that the regulators will not inflict the full rigour of the penalties available where organisations have failed to comply through poor implementation of new processes.
The same may not be true of organisations that are breached by hackers and seen as failing to fulfil the GDPR’s requirement for state-of-the-art technology to be in place. Fines of up to €20 million or four per cent of turnover may be levied if it is felt an example should be made to encourage everyone else to invest in effective security that protects citizens’ data.
The first half of 2018 should be when the laggards finally address their major security loopholes such as continuing reliance on anti-virus solutions.
7. The small print – why innovation will trump cyber insurance in 2018
The cyber insurance market will continue to grow from a low base, but more businesses are also likely to realise that pay-outs can never cover the entirety of their losses if they are hacked. In the course of the year it will become apparent to many organisations, including SMEs, that advanced security technology is a much better investment.
They will be targeted by hackers using emails just like everyone else and need innovative solutions to protect them. Relying on traditional perimeter security and cyber insurance will come nowhere near to protecting an organisation.
Not only will substantial fines and legal costs be inflicted, the victim organisation will have to compensate individuals affected and then spend substantial amounts of time and money on rebuilding its reputation. Enterprises will see how cyber insurance will never mitigate all the damage of a successful cyber-attack.
Where once it was all cloud, cloud, cloud – now colocation is the topic. Matthew Taylor, Infrastructure Consultant, Altodigital discusses further…
We’ve heard so much about the cloud, but not so much about an alternative; colocation or “co-lo” - yet 2016 was recorded as the best year for colocation so far, with 2017 shaping up to exceed expectations. Despite fears about Brexit, data centre capacity in London is still in demand. In fact, according to a report by CBRE, the global real estate and investment business, “London was solely responsible for 60% of the total European take-up witnessed in Q1 of 2017."
So exactly what is colocation and why is it so much in demand? To put it simply, colocation combines the advantages of an Opex charging structure with the benefits of using your own hardware. Payment advantages, plus cost benefits, equals huge savings without sacrificing security and uptime.
Colocation allows for the transporting of a company’s own physical hardware to a data centre. As you own the hardware, all you pay are the rack costs. For this, you take advantage of the datacentre’s main switch and firewall – and there’s no need to invest in back-up generators, UPS and HVAC units or pay the ongoing costs associated with this equipment.
Because a data centre’s profitability depends on failsafe resilience, ‘colocators’ can depend on far superior redundancy and backup than they would be able to deploy themselves. For example, if the power were to go down, there will be rooms full of batteries, UPSs and generators on site and there will be 24/7 support, too.
There are further benefits too; as all infrastructure is located ‘off site’ in the data centre, if there is some form of fire, theft, vandalism or other disaster, staff can work from any location.
Also, if a business has a remote working sales or work force, they will be able to operate securely and efficiently from any location, even on a mobile device. The server sitting in the data centre will be using the facility’s connectivity, which provides exceptionally high speeds along with four or five levels of resilience and failover (BT, Virgin, satellite and other loops such as TalkTalk) to protect the connection into the data centre.
We have some customers that choose to invest in our full cloud services, yet a colocation arrangement can cost less than a third of this price.
So if the benefits include significant cost savings and superior reliability, there must be some disadvantages? The main difficulty harks back to the stats at the beginning of this article – with so much demand for data centre space, it can be difficult to locate one close by and proximity is extremely important. A business may not want to rely on the colocation provider for maintenance as this will make the service more expensive.
However, even this drawback has a silver lining. With so much demand and so much being invested in new data centres, the more that are built, the more competitive each one will need to be. This might include cutting costs or offering a range of services to differentiate their business.
The answer is to work through a managed services provider who will have their own network of data centres – preferably all UK based. They will be able to vouch for their speed and security – both physical and virtual, as well as offer their own technical support.
For smaller businesses, it’s clear that colocation offers a chance to take advantage of economies of scale. By sharing rack space they receive big-business protection with almost no risk of downtime, all at a fraction of the cost of a full cloud platform. Certainly worth careful consideration.
According to the NewVantage Partners Big Data Executive Survey 2017, 95 percent of the Fortune 1000 business leaders said that their firms had undertaken a big data project in the last five years - confirming that, for most organisations, big data is big business. When analysed effectively, big data can help organisations identify new opportunities, make smarter business decisions, run more efficient operations, drive higher profits and build a bank of happier - and more loyal - customers.
By Darren Watkins, managing director for VIRTUS Data Centres.
However, while for many, big data strategies have become a normal part of doing business, that doesn't mean it’s easy. The same NewVantage Partners survey said that less than half (48.4 percent) of business leaders felt that their big data initiatives had achieved measurable results. And Gartner believes that organisations are getting stuck at the pilot stage of their big data projects – before they’ve had the chance to see any benefits at all.
So, clearly, organisations are facing some major challenges when it comes to implementing their big data strategies.
But what are those challenges? And perhaps more importantly, what can organisations do to overcome them?
Whilst big data is undoubtedly now a strategic boardroom discussion, the real issues - and the real solutions - still sit with the IT department within an organisation, and more specifically, in the data centre.
In its Digital Universe report, IDC says that the amount of information stored in the world's IT systems is doubling about every two years. And, by 2020, the total amount will be enough to fill a stack of tablets that reaches from the earth to the moon 6.6 times - with enterprises having responsibility or liability for about 85 percent of that information. So, the most obvious challenge associated with big data is simply storing and analysing swathes of information.
The sheer volume of data means intense pressure on the security, servers, storage and network of any organisation - and the impact of these demands is being felt across the entire technological supply chain. IT departments need to deploy more forward-looking capacity management to be able to proactively meet the demands that come with processing, storing and analysing machine generated data.
So it’s perhaps not surprising that on-premise IT is on the decline and colocation facilities are becoming increasingly dominant within the enterprise. High Performance Computing (HPC), once seen as the preserve of only the large corporation, is also now being looked at as a way to meet the challenge - and is requiring data centres to adopt high density innovation strategies in order to maximise productivity and efficiency, increase available power density and the ‘per foot’ computing power of the data centre.
And, of course, cloud computing, offers almost unlimited storage and instantly available and scalable computing resource - offering enterprise users the very real opportunity of renting infrastructure that they could not afford or wish to purchase otherwise.
Of course one size doesn’t fit all. Organisations need to take a flexible approach to storage and processing. Companies must choose the most appropriate partner that meets their pricing and performance level needs – whether on-premise, in the cloud or both - and have the flexibility to scale their storage and processing capabilities as required. They must also make sure they aren’t paying for more than they need and look for a disruptive commercial model, which gives absolute flexibility - from a rack to a suite, for a day to a decade.
It’s maybe obvious that the more data that is stored, the more vital it is to ensure its security. The big data revolution has moved at considerable speed, and while security catches up organisations are potentially more vulnerable.
This is another area where colocation wins out - as moving into a shared environment means that IT can more easily expand and grow, without compromising security or performance. Indeed, by choosing colocation, companies are effectively renting a small slice of the best uninterruptible power and grid supply, with backup generators, super-efficient cooling, 24/7 security and dual path multi-fibre connectivity that money can buy - all for a fraction of the cost of buying and implementing them themselves.
It should come as no surprise to hear that big data is here to stay. IDC forecasts the market will increase to approximately $32 billion this year from just over $3 billion five years ago.
And the demands that come with big data mean that, ultimately, the data centre now sits firmly at the heart of the business. Apart from being able to store machine generated data, the ability to access and interpret it as meaningful actionable information, very quickly, is vitally important - and therefore a robust and sustainable IT strategy has the potential to give companies huge competitive advantage.
So, whilst many organisations are already collating and storing large sets of data, we know that intelligence is only power if it is used. The IT industry has a vital role to play in helping organisations realise these ambitions.
If any more evidence were needed that a fire has been lit under digital transformation, then the prediction that the enterprise mobility market will explode to $500bn by 2020 is it. Fuelling this blaze is customer and employee demand for a seamless mobile digital experience in all aspects of their home and working lives.
By Nick Pike, VP UK and Ireland, OutSystems.
For businesses, it’s a case of adapt or be left in the dust, as they face challenges from disruptive industry entrants and innovators. IDC predicts that, by 2020, half of the Global 2000 enterprises will see the lion’s share of their businesses depending on their ability to create digitally-enhanced products, services and experiences. The deadline is short and the opportunities are huge, so why are some corporate organisations stuck at the starting gate?
Digital transformation means just what it says: fundamentally changing the way enterprises do business from top to bottom. That’s a daunting prospect for many executive teams in large corporations, despite the promised rewards. While workers at the coalface of the business may be crying out for streamlined mobile business processes and apps that will make them more efficient, the drive for large scale strategic change has to come from the top. Human and financial resources need to be allocated and the whole business lined up in support of the process so that digital transformation is viewed as a strategic investment in the future competitiveness of the company, rather than an expense.
Fear can also arise from concerns that already overstretched IT departments will struggle to cope with the new demands of application development and rollout. In fact, Gartner fed this particular fear in 2015, when predicting that by the end of 2017 the demand for enterprise-grade mobile apps would have grown at least five times faster than the ability of internal IT departments to deliver them. However, in the two years since that prediction was made, the rise in rapid application development platforms has reduced the burden on IT departments and shortened the time to launch. So, this particular fear can now be faced with a degree of confidence.
Large companies with employees that are used to a slower pace of life can find it hard to adapt to the speed of digital transformation. They can struggle to align vital employee education programmes with the rollout timeframe that can be achieved. It’s no good having a fantastically efficient new system if users are still hankering after the legacy technology – warts and all – and struggling to embrace their new environment. Therefore, user education is a critical part of the transformation process.
OutSystems customer Aravind Kumar encountered this challenge when migrating his consulting company from a collection of 50 IBM Lotus Notes applications to a suite of new applications that were developed using the OutSystems low-code platform. He told us: “Getting people to shift their thinking was one of the greatest hurdles we had to overcome. In fact, even as we were building new applications, people were saying we should try to recreate them just as they were in Lotus Notes!”
Fortunately Aravind was able to bring his users with him on a journey to discover the efficiency and accessibility of the new applications his team had developed. The key point is that, to be successful in digital transformation, businesses need to invest in the human factor as well as in the technological expertise in order to realise the full benefits and mitigate resistance.
Every large enterprise has a past. Unlocking information and freeing business processes from legacy IT systems can be one of the biggest stumbling blocks when it comes to digital transformation. However, it is important to recognise that, as I’ve said before, legacy systems exist because they work - they just don’t have the agility that the digital world requires. Establishing when legacy systems should be retired and when they should be incorporated into mobile business processes is a key challenge and evidence suggests that enterprises are mixing it up. A recent report by VDC research found that 53% of large organisations (organisations with >1000 employees) said that the most common development projects they worked on involved building net-new applications from the ground up; however 43% stated that they were modernising legacy applications.
The elegant solution is to find a way to leverage the legacy systems of the past without letting them crush the ambition and potential of the future. An advantage of the OutSystems low-code platform is its ability to integrate with legacy systems, even if they are unique to the customer, meaning that prior investment is not wasted.
As IDC neatly put it “Digital transformation starts with mobility. Organisations with untethered business processes and ubiquitously accessible IT resources will be better positioned to compete and thrive in the digital economy.” This is why organisations need to address the challenges of fear, user resistance and integrating legacy systems to get out of the starting gate onto the competitive racetrack of digital transformation.
Graham Hogg, founder of Connectworxs and author of new book Seeing Around Corners: How culture will unlock the potential of big data, offers some thoughts on how big data can be used to make improvements to everyday working life - helping teams ask better questions, and becoming a key driver of competitive advantage.
“Profit isn’t a purpose, it’s a result. To have purpose means the things we do are of real value to others.” – Simon Sinek, Start With Why
Purpose is where everything begins in an organization, and connecting this to data discovery is where competitive advantage will be found for organizations in the age of Big Data and advanced analytics.
This is not a plan, or a strategic goal, but the overreaching higher ideal that guides efforts and choices made by the teams within it. It defines why the organization exists in the world and stays true to its mission. It does not change over time, irrespective of turbulence or hardship. It informs every team’s ambitions, wherever they are or with whatever they are trying to achieve – human resources, operations, sales or strategy, and across geographic boundaries.
As organisations seek to gain value from their data, they must inspire their teams to start with their significant others, not data. These successful organizations build an externally oriented purpose that informs the actions of their teams every day. This purpose is critical in the creation of a data-driven organization as it becomes a handrail for data-driven discovery.
However an organization chooses to define its purpose, there should be a fundamental attempt to make a difference in the lives of other people or organizations. Every team in every department in every project and meeting must be focused on others – on the difference that can be made in their lives.
In his TED talk, Adam Leipzig, CEO of Entertainment Media Partners, suggests a way of delineating this purpose. He asks the question,
“How do you find your life’s purpose in five minutes?”
Leipzig tells a story about going to a 20th-year class reunion with his Yale college friends. He was surprised to find that the majority of them were unhappy. Sure, they had great jobs and great houses and fine lifestyles, but 80% of the group was unhappy nonetheless, as they had no real purpose in life.
Having thought about this, he came up with these five questions that can help to identify the purpose in one’s life:
Leipzig notes that the majority of questions, the final three, are about other people, outward facing. Only two of the five are about self-reflecting. He goes on to describe how the happiest and most successful people are those who serve others and know exactly whom they serve, what these people need, and how these people can be changed for the better as a result of their work, products or services.
Those that take this perspective have a long record of success. For example, the Kellogg Company’s mission is to “nourish families so they can flourish and thrive.”
Notice here the external orientation, that is, thinking about others – people, families. This isn’t just sentiment. This is hard-headed, commercial thinking.
Consider the example of the purpose of Intercontinental Hotels: “Great hotels that guests love.”
In establishing its purpose, Intercontinental Hotels talk about “guests,” not customers. The reasoning was simple. When someone you don’t know knocks at the door, you are cautious, careful and not necessarily welcoming. But if a guest knocks at the door, you open it wide, invite them in, talk and offer drinks.
By treating their customers as guests, Intercontinental Hotels changed the way the entire organization behaved. “Customers” are people you do business with, and so you think in terms of how to make money from them. But guests are people you care about.
Employees started thinking about how to make guests comfortable, how to receive them thoughtfully, and what amenities would make guests happy; their thoughts and actions were all targeted towards their guests. It’s a simple but powerful example of how important language can be in formulating an outward-facing orientation – through your purpose.
In an age when we all have lots and lots of data available to us, it is going to be the quality of the questions we ask of data that will unlock value.
What do they like to read? What sort of music do they enjoy as they walk through the hotel lobby? What type of food do they enjoy? How important is hot, fresh coffee to them as they start their day?
These questions need to come from teams every day - and the decisions we make around them need to be based on data not gut or instinct.
In a similar example, Howard Schultz returned to Starbucks as CEO in 2008 to focus the company around its original purpose: “Great coffee for customers and a warm space for networking.” What followed was the closure of more than 7,000 stores to re-engage employees with the Starbucks experience. At the end of 2009, with a clear and simple purpose, Starbucks had tripled its earnings.
So, it becomes clear how organizations are informed by their purpose to drive curiosity about how to improve performance. This is where we start with data.
How do organisations integrate advanced analytics into teams across the organisation effectively?
All data leaders should be seeking clarity on what the business team is trying to achieve and where data can add the most value. Providing clarity as a strategic objective is of critical importance to advance analytical skills and to build data-driven teams. Nowadays, analytics are no longer only a concern for the information technology function or other technical functions. Analytics are now a part of new types of teams and the behaviour of the leaders of these teams supports its interaction. Data analysts must be comfortable with asking “stupid” questions around the domain context – irrespective of the number of PhDs!
Business teams possess the skills and experience to know where value creation is needed – after all, they know their products, customers and markets better than anyone. The key behaviour that business teams need today is to take what are often well-thought-out plans and set big data on the right path to explore these further.
As teams and organisations face unprecedented complexity, clarity around who we serve and how to connect this to Big Data will be the winning capability in teams.
Business leaders must ensure that the data talent they adopt into their teams has a full understanding of the business context – what matters most to the team and why. They must create an environment where this talent feels safe to challenge assumptions and check for bias and groupthink. Only then will teams unlock the true potential of this resource.
As such, everyday interactions and meeting culture are so important to get right. I served as a Royal Marines Intelligence Officer for 8 years and when I went into business I was struck by how bad meetings were. This isn’t about selecting a goal and formulating an agenda, but staging “messy conversations,” what I talk about in my book Seeing Around Corners as “messy teams.”
Here, leaders act as facilitators of discussion, and techniques such as Key Assumptions Checks, Analysis of Competing Hypotheses and Devil’s Advocacy become commonplace, so teams can quickly identify the gaps in their understanding – and this is where the real value of data lies.
Leaders must commit to placing understanding at the centre of everything they do and every question they ask. Questioning data, and discussing and planning with data, should all serve to enhance a team’s understanding of what is important.
This is all linked to “purpose.”
Although we often hear that “knowledge is power,” it is important to note that in today’s world, almost all organizations have access to vast quantities of information. So, the defining behaviour of an organization’s teams is how they combine this with analysis and judgment to derive the right insight and then make the right decisions.
In the age of complexity and information overload, discovery-driven leaders operate in the top right-hand box of a quadrant defined by those two dimensions: rich in information, and strong in the analysis and judgment applied to it.
Get this right in teams every day and we will unlock Big Data and understand what our clients, customers and patients are going to want next.
To See Around Corners.
In today’s world, connectivity is everywhere. With the rapid expansion in internet and computer access, and the explosion of mobile, network infrastructures are under more pressure than ever. As a result, service providers are being forced to constantly enhance their networks to keep pace with persistent growth in capacity demands. To do so, many regularly go out to market seeking new advancements in networking technology and associated next generation management tools, and then assess whether they can be applied to the benefit of their networks.
By Faisal H. Usmani, Business Development and Strategy Lead - Communications, Cyient Europe.
At a basic level, service providers can opt to meet elevated capacity demands through the addition of cabling and active equipment. However, consumers are increasingly mobile, as demands for ‘always-on’ connectivity and greater speeds continue to grow. The number of connected services in the home is also growing; by putting mobile at the heart of the connected home, individuals can now be linked to the likes of utility providers, energy giants, insurance companies and goods manufacturers through their network, which provides access to better services, such as optimised lighting and heating. While this dramatically overhauls the consumer experience in the home, it places huge pressure on the reliability of the infrastructure behind the services.
Service providers face a dilemma: either add capacity to all impacted nodes to meet anticipated network demand, or “move” bandwidth around the network. The former pushes service providers towards ‘over-capacity’, where each node has more capacity than it needs, leaving surplus capacity across the network. The latter, on the other hand, is a complex process that demands a comprehensive understanding of the network’s recent capacity trends.
Additional issues can also arise with network-intensive businesses and more mature domestic consumers, both of whom are moving their data and IT services into the cloud. This requires service providers to manage data more efficiently to ensure their networking priorities meet agreed business service level agreements (SLAs) and deliver high customer satisfaction to their demanding user community.
One mooted solution, the ‘self-optimisation’ of networks, enables the management of network capacity but tends to be reactive rather than proactive. It can leave a complex network in a constant state of flux as it attempts to manage real-time demands by interacting with other active network equipment.
By recognising the shifting demands placed on network capacity by timeframe and location, service providers can dramatically improve their ability to manage capacity efficiently, expanding it to match actual demands rather than more generic capacity needs. But this requires a detailed routing framework, which is where a potentially revolutionary methodology comes in: Software Defined Networking (SDN). With SDN, the network can be dynamically and automatically configured to respond to changing conditions and demands across the network.
There have already been numerous successful deployments of SDN, such as in data centres and for certain aspects of service providers’ business routing. It revolves around the concept of a routing table where specific services can be routed based on pre-defined parameters. For instance, an organisation’s cloud-based Customer Relationship Management (CRM) service could be routed over a faster network route during normal office hours (a period of higher demand), and then revert to a standard network after this time.
This benefits the organisation, which knows its mission-critical systems can operate at high speed at peak times, and the service provider, which can offer a premium service at a higher cost. In this situation, the service provider presents one service with two or more possible routing table entries, and the option to switch between the entries based on the relevant time parameter (i.e. between peak and off-peak).
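To make the idea of time-switched routing table entries more concrete, here is a minimal sketch in Python. It assumes a purely hypothetical controller interface – the route dictionaries, select_route() and apply_route() are names invented for this illustration, not the API of any particular SDN controller.

from datetime import datetime, time

# Illustrative sketch only: switch a service between two pre-defined routing
# table entries depending on the time of day. All names are hypothetical.
PEAK_ROUTE = {"service": "crm", "path": ["edge-1", "core-fast", "dc-1"]}     # premium, low-latency path
OFF_PEAK_ROUTE = {"service": "crm", "path": ["edge-1", "core-std", "dc-1"]}  # standard path
PEAK_START, PEAK_END = time(8, 0), time(18, 0)                               # normal office hours

def select_route(now: datetime) -> dict:
    """Return the routing table entry that should be active right now."""
    return PEAK_ROUTE if PEAK_START <= now.time() < PEAK_END else OFF_PEAK_ROUTE

def apply_route(route: dict) -> None:
    """Placeholder for pushing the chosen entry to the SDN controller."""
    print(f"Activating path {' -> '.join(route['path'])} for service {route['service']}")

if __name__ == "__main__":
    apply_route(select_route(datetime.now()))

In practice the controller itself would hold this logic and push the corresponding forwarding rules to the switches, rather than a standalone script, but the principle of swapping between pre-defined entries on a time parameter is the same.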
In the initial stages, both situations are easy to manage, as there is a limited and readily controlled set of network parameters. The whole concept starts to become more complex however when we switch routes for a number of services within a larger and more complex network topology.
Many service providers now use network analytics to model and accurately contrast the capacity of their network with the amount used at node level within specific timeframes. The resultant analytics models are used as the framework for any SDN implementation, because they provide service providers and app developers with the insight and ability to develop alternative routing models which match ever-changing network demands. In future, there is undoubtedly a role for the integration of real-time analytics and SDN, because it allows the majority of network optimisation to be performed in real-time. Of course, this would require a pre-determined set of boundaries for such optimisation to prevent over-compensation – for example, in the event of an outage or a network burst.
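As a rough illustration of what such pre-determined boundaries might look like, the sketch below only triggers a re-route when a node's utilisation stays outside agreed limits for several consecutive samples. The thresholds, the sample count and the move_bandwidth() hook are assumptions made for this example, not features of any specific analytics or SDN product.

# Illustrative sketch only: bounded, analytics-driven re-routing.
UPPER, LOWER = 0.85, 0.40   # agreed utilisation boundaries for a node
REQUIRED_SAMPLES = 3        # consecutive breaches required before acting

def should_reroute(utilisation_history: list) -> bool:
    """Act only if the most recent samples all breach the same boundary."""
    recent = utilisation_history[-REQUIRED_SAMPLES:]
    if len(recent) < REQUIRED_SAMPLES:
        return False
    return all(u > UPPER for u in recent) or all(u < LOWER for u in recent)

def move_bandwidth(node: str) -> None:
    """Placeholder for the re-routing action taken by the SDN controller."""
    print(f"Re-routing traffic away from {node}")

if __name__ == "__main__":
    samples = [0.70, 0.88, 0.91, 0.93]  # one node's recent utilisation readings
    if should_reroute(samples):
        move_bandwidth("core-2")

Requiring several consecutive breaches acts as a simple damping mechanism, so a momentary burst or a brief outage spike does not trigger the kind of over-compensation described above.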
The introduction of SDN into service providers’ networks gives them unheard-of levels of capacity flexibility, provided they model network capacity and develop appropriate SDN scenarios to ensure the ongoing integrity of capacity.
Not only does it allow service providers to refine their networks, but it also facilitates the implementation of network scenarios in anticipation of both scheduled events (such as major sporting or entertainment occasions) and unscheduled events, putting specific network configurations in place to respond to any such situation.
A further benefit of SDN is its ability to reduce network complexity by simplifying network management. This comes from abstracting the complex array of intelligence that resides in the network and consolidating it in the form of an SDN controller – one centralised application that acts as the strategic control point for managing the switches and routers in a network. From there, it is possible to directly control multiple nodes from one source, which in turn makes it easier to route traffic around the network, thanks to a reduction in the number of elements to manage. This means more embedded intelligence can be taken out of the network, analysed, and applied to facilitate more efficient use of capacity on a broader scale.
In any Internet of Things (IoT) ecosystem where several different devices and services are incorporated into a specific network, it’s critical that service providers can manage connectivity dynamically. SDN helps facilitate this by making the introduction of different components (and their services) into the network simple through a series of controllers. This enables providers to quickly virtualise and set up a service at any point in the network as an IoT service, rather than setting it up in the traditional isolated manner, meaning it is quickly integrated with overall network operations.
Moreover, as IoT services evolve, demands for connectivity will continue to grow, largely thanks to an increase in the number of connections to devices and machines that support low data volumes. SDN acts as an enabler behind the scenes, allowing communications service providers and enterprises to simplify connectivity and dynamically allocate resources to manage this trend. Ultimately this helps increase the effectiveness of M2M/IoT services within the core network.
Overall, SDN is dramatically changing the way we manage our networks, especially as they become more complex, and SDN deployments will continue to evolve in accordance with the development of their underlying networks. Moving forward it will be used in ways we can’t even imagine yet, as the tools mature and organisations, especially those in the cloud, adapt applications to take advantage of SDN and the opportunities it brings.
As cyber criminals become more sophisticated and attacks become more persistent, companies across the UK are seeking Cyber Essentials Plus certification to prove they are proficient at dealing with potential threats.
By Quiss Technology Commercial Services Manager, Matt Rhodes.
In the past 12 months alone, 875,000 small and medium-sized businesses have been targeted by cyber-criminals, costing a fifth of affected organisations over £10,000 in damages.
With high-profile cyber-attacks often making headline news, clients are adopting stricter vetting processes, which means businesses are having to prove they have strong security controls in place.
During the procurement process, clients are looking closely at security records and making sure companies have a good track-record of protecting sensitive information and data.
Because of this, businesses of all sizes are seeking to achieve Cyber Essentials Plus compliance, as the badge demonstrates a commitment to cyber security, which can help alleviate the fears or reservations of your clients.
There are currently two different certifications available to businesses - the standard Cyber Essentials and the Cyber Essentials Plus.
Cyber Essentials represents the most basic level of cyber security and requires organisations to complete a short questionnaire about their current security controls, which is then sent to a recognised certification body for review.
The organisation will typically undergo an external vulnerability assessment from a certifying body, which directly tests that individual controls on the internet facing network perimeter have been implemented correctly.
This basic level of certification only offers a snapshot of the organisation at that time – it does not provide assurance that systems are effectively configured to defend against more sophisticated or persistent attacks that could potentially strike.
Cyber Essentials Plus, however, requires an organisation to undergo a much more thorough assessment, which is based on internal security assessments of end-user devices.
Using a range of specialist tools and techniques, the Cyber Essentials Plus assessment directly tests to make sure individual controls have been implemented correctly, and recreates various attack scenarios to determine whether a system is proficient in dealing with potential threats.
The Cyber Essentials Plus certification requires your organisation to have the following five technical controls in place:
· Boundary firewalls – these devices are designed to prevent unauthorised access to or from private networks, but require good setup to achieve maximum effectiveness;
· Secure configuration – ensuring systems are configured securely to suit the exact requirements of an organisation;
· Access control – only allowing those with authority to have access to systems;
· Malware protection – ensuring the most up-to-date virus and malware protection has been installed;
· Patch management – ensuring the latest supported versions of applications are used and that all the necessary patches have been applied (a simple illustrative check of pending updates is sketched below).
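By way of illustration only, the short Python sketch below lists packages with pending upgrades on a Debian or Ubuntu host – the kind of routine self-check a patch management control implies. It is not part of the Cyber Essentials Plus assessment itself, and the equivalent check on other platforms would use that platform's own tooling.

import subprocess

def pending_upgrades() -> list:
    """Return the packages that still have updates waiting to be applied."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=False,
    )
    # The first line of apt's output is a header ("Listing..."), so skip it.
    return [line for line in result.stdout.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    outstanding = pending_upgrades()
    if outstanding:
        print(f"{len(outstanding)} package(s) awaiting updates:")
        for line in outstanding:
            print(" ", line)
    else:
        print("All packages are up to date.")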
Only once a company successfully passes these tests can they be awarded the badge, which can then be displayed on an organisation’s website to demonstrate to customers that they take cyber security seriously and are able to effectively deal with any incoming attacks.
For serious businesses who are committed to achieving strong cyber security, Cyber Essentials Plus is the only option worth considering.
The Cyber Essentials Plus scheme provides a well-defined standard that is suitable for organisations across all sectors, including charities, schools, universities and local authorities.
While the basic Cyber Essentials certification is a good and necessary starting point for businesses, the extra checks involved with Cyber Essentials Plus make it the best option, especially as GDPR is set to come into effect next year.
These new data protection laws mean that it has never been more important to ensure your sensitive information is properly safeguarded, as any potential breach will naturally attract attention from media and clients alike.
In recent months, major cyber-attacks of varying methods and levels of severity have made headline news, causing serious and sometimes irreparable financial and reputational damage to a business.
It is instances like these that have prompted clients to adopt much stricter checks when selecting a supplier, as they need reassurance that they can place their trust in a company that can defend against any potential threats.
Since 2014, Cyber Essentials certification has been a mandatory requirement when bidding for certain government contracts, and it looks as though we are moving towards a point where businesses must hold a badge to be considered for most public-sector work.
Cyber Essentials Plus offers procuring organisations greater levels of assurance that required controls and checks are in place.
Even if achieving Cyber Essentials Plus is not a mandatory requirement to win a specific contract, holding the badge can boost your chances and give you an edge over your competitors.
If your company is serious about achieving Cyber Essentials Plus status, then the first step is to visit the official www.cyberaware.gov.uk website and select one of the official accreditation bodies listed.
In order to successfully hold a Cyber Essentials Plus badge, you must have first completed the basic Cyber Essentials certification process.
Once an independent assessor has reviewed your answers and performed the basic tests on your security controls, you will be awarded the Cyber Essentials certificate, allowing you to proceed to Cyber Essentials Plus.
Once you have received Cyber Essentials certification, you will then need to start the compliance process by introducing the appropriate controls to your system.
When looking for support to help you achieve Cyber Essentials Plus, it is important you contact an IT specialist with plenty of experience in helping clients to achieve compliance – they will arrange for your security controls to be thoroughly tested, which will then determine your effectiveness in defending against potential cyber threats.
Remember, different suppliers will offer varying levels of service and support, so make sure you select one that meets your company’s requirements.
Achieving Cyber Essentials and Cyber Essentials Plus certification is an important first step in your ongoing quest to optimise the cyber security of your business.
Not only will the Cyber Essentials Plus certification process ensure your security controls are effective in dealing with any incoming attacks, but earning the badge will also help assure your clients that their personal data and information is safe in your hands.
Clients have seen how highly publicised cyber-attacks have cost other businesses time and money to correct, and are now adopting stricter vetting processes to ensure the same does not happen to them.
The business and security benefits of achieving Cyber Essentials Plus compliance are beyond doubt; however, it should be only the first step of your company’s drive to tighten security controls.
Adopting wider security frameworks and being more proactive in your efforts to improve security should be an ongoing responsibility for all of your team.
More sophisticated assessments are available to companies who are looking to push their security further than the Cyber Essentials scheme, including Penetration Testing and Simulated Targeted Attack and Response, which assesses specialist business functions with a market or country influence.
If you think your organisation could benefit from these additional levels of assessments, then contact an IT specialist and achieve total security for your business and clients.
How VSA by Kaseya streamlines processes for leading outsourced IT supplier, Pisys.net.
Pisys.net was established in May 2003 by company directors, Steve Bain and John Merrick, with the vision of providing affordable IT support and services to small and mid-size enterprises (SMEs) in South Wales from the organisation’s head office in Swansea.
That vision has led to Pisys.net establishing itself as a leading outsourced IT supplier, with capability to offer comprehensive IT support throughout the UK, spanning public, private and third sectors, and working with a variety of businesses from start-ups to multi-site operations. Driven by its ongoing success, Pisys.net decided to establish a franchise model and now has five locations set up across the UK.
As Pisys.net has grown over the past decade and a half, it has become increasingly important that it has the right IT management and monitoring solutions in place to support its day-to-day business IT support work. In the years since launch, Pisys.net had made extensive use of a wide range of such systems, but it had never been entirely happy with any of them. This prompted the company’s decision to evaluate and ultimately purchase Kaseya’s VSA endpoint monitoring and management solution.
According to Ben Burns, head of systems engineering & advanced support engineer: “Once we had a proper demo of VSA, we could see it was going to be so much more beneficial to our operations. It was a bit of a no-brainer to change, really. The other systems we used were either overly complicated, far too time consuming to use, and required an awful lot of training; or alternatively, they were relatively simple and did not do everything we wanted. VSA by Kaseya really hits the sweet spot.”
Steve Bain endorses this view: “VSA has been the first product that I can actually say is really making a difference. It’s the first product that my technical team has told me works consistently well and always performs at a high standard.”
With the decision to implement VSA now taken, Pisys.net set about the implementation process. It carried out everything over a one-month period, including one week spent on user training. During that period, the Pisys.net team also created a new infrastructure, incorporating clients and machines, backdated all the installation executables, and worked together with Kaseya to script the uninstallation of the previous system. A total of 2,750 endpoints are now delivered through the VSA system.
Pisys.net was clear also in what it wanted to achieve from using VSA. From the technical perspective, the ability to patch automatically without having to worry unduly about the process was a major benefit. The IT service provider’s team also looked forward to making use of machine policies, agent procedures and automatic scripting.
At a higher business level, though, the objective was more around finding a way to manage networks as simply and as seamlessly as possible. “You want to set it and forget it, really,” says Bain, “and with VSA, we successfully managed to do that.”
Today, Pisys.net provides a wide range of managed services through VSA focused around IT support, where patching is the main task carried out. With VSA, Pisys.net is able to adopt what is primarily a hands-off approach, leaving the application to run continuously. If a problem is identified on a particular machine and the patch is not updating, VSA enables scripts that are themselves capable of fixing the patching process. As a result of this self-healing process, the Pisys.net support team no longer has to spend significant amounts of time on each individual machine, fixing patches when they fail.
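To illustrate the idea of a self-healing patch process, here is a rough sketch of the pattern: check whether patching succeeded, attempt a common remediation if it did not, and retry before escalating. It is written in Python purely for clarity – it is not Kaseya's agent-procedure language – and the patch_succeeded() check and the Windows Update service restart are assumptions made for this example.

import subprocess
import time

def patch_succeeded() -> bool:
    """Placeholder: in practice this would query the patch engine's status."""
    return False

def remediate() -> None:
    """Restart the Windows Update service – a common first remediation step."""
    subprocess.run(["net", "stop", "wuauserv"], check=False)
    subprocess.run(["net", "start", "wuauserv"], check=False)

def ensure_patched(max_attempts: int = 3, wait_seconds: int = 60) -> bool:
    for attempt in range(1, max_attempts + 1):
        if patch_succeeded():
            print(f"Patching healthy on attempt {attempt}.")
            return True
        print(f"Patch check failed (attempt {attempt}), remediating...")
        remediate()
        time.sleep(wait_seconds)
    return False

if __name__ == "__main__":
    if not ensure_patched():
        print("Escalating to the help desk.")  # mirrors the alerts raised into the help desk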
The company also makes extensive use of the agent procedure capability within VSA. “This function has been hugely beneficial to us,” says Burns. “With one client, I recall a problem where every machine affected was taking several hours to screen for updates – and the process would typically fail when individual users shut their machines down before the end of the day.
“Moreover, while the scans were running, they were using up loads of resources, slowing the machines down and negatively impacting user productivity. With VSA’s help, we were able to write an agent procedure that would manually install the requisite updates without the user being interrupted – and, as soon as the machine rebooted after that, the problem went away. That enabled us to fix 1,200 to 1,300 machines without the users being aware of what was going on. Without having access to the agent procedure capability, we would not have been able to do that, and that would have meant a vast amount of work on thousands of separate machines.”
Another major benefit of VSA, as implemented by Pisys.net, is the alerting feature which sends alerts directly into the help desk.
For Pisys.net, one of the greatest benefits of the VSA implementation is that it relieves the pressure on the engineering and support team. According to Bain: “It takes some of the workload off our engineers which makes our support more responsive. As we can get support times down, we end up with happier customers whose issues are resolved more quickly and efficiently.”
Using VSA also enables Pisys.net to deliver greater peace of mind to its client base. The policies and agent procedures embedded within VSA further help to drive the agility of the support function. “Being able to quietly push out fixes to hundreds and thousands of machines, with nobody noticing at the time and only becoming aware later on when the service performance improves is very satisfying,” says Burns. “Remote support provided through the help desk is lightning fast, too. You simply click on a machine and you are on it immediately, and over the course of a single day alone that makes a huge difference to overall efficiency and agility levels.”
Burns also highlights that: “In addition, we make use of the reporting functionality within the solution to provide clients with monthly network reports, which give them greater insight into overall performance levels. That in turn gives them enhanced peace of mind both in the performance of their own network and in the service we are providing them.”
“Other systems we have tried seem to do some of what VSA by Kaseya can provide,” adds Bain. “But no other system we have tested brings together everything needed for efficient and effective IT management and monitoring, all within the same solution. That for us is what makes VSA stand out from the pack – and why we are happy with what it has already brought to our business and confident of what it will bring in the future.”
CNet Training recently welcomed Alex Taylor, an anthropology PhD student from the University of Cambridge, onto its Certified Data Centre Management Professional (CDCMP®) education program. Alex recently researched the practices and discourses of data centres. In this article, he outlines his research in more detail and explains how the education program contributed to his ongoing anthropological exploration of the data centre industry.
Traditionally, anthropologists would travel to a faraway land and live among a group of people so as to learn as much about their culture and ways of life as possible. Today, however, we conduct fieldwork with people in our own culture just as much as those from others. As such, I am currently working alongside people from diverse areas of the data centre industry in order to explore how data centre practices and discourses imaginatively intersect with ideas of security, resilience, disaster and the digital future.
Data centres pervade our lives in ways that many of us probably don’t even realise and we rely on them for even the most mundane activities, from supermarket shopping to satellite navigation. These data infrastructures now underpin such an incredible range of activities and utilities across government, business and society that it is important we begin to pay attention to them.
I have therefore spent this year navigating the linguistic and mechanical wilderness of the data centre industry: its canyons of server cabinet formations, its empty wastelands of white space, its multi-coloured rivers of cables, its valleys of conferences, expos and trade shows, its forests filled with the sound of acronyms and its skies full of twinkling server lights.
While data centres may at first appear without cultural value, just nondescript buildings full of pipes, server cabinets and cooling systems, these buildings are in fact the tips of a vast sociocultural iceberg of ways in which we are imagining and configuring both the present and the future. Beneath their surface, data centres say something important about how we perceive ourselves as a culture at this moment in time and what we think it means to be a ‘digital’ society. Working with data centres, cloud computing companies and industry education specialists such as CNet Training, I am thus approaching data centres as socially expressive artefacts through which cultural consciousness (and unconsciousness) is articulated and communicated.
CNet Training recently provided me with something of a backstage pass to the cloud when they allowed me to audit their CDCMP® data centre program. ‘The cloud’, as it is commonly known, is a very misleading metaphor. Its connotations of ethereality and immateriality obscure the physical reality of this infrastructure and seemingly suggest that your data is some sort of evaporation in a weird internet water cycle. The little existing academic research on data centres typically argues that the industry strives for invisibility and uses the cloud metaphor to further obscure the political reality of data storage. My ethnographic experience so far, however, seems to suggest quite the opposite; that the industry is somewhat stuck behind the marketable but misleading cloud metaphor that really only serves to confuse customers.
Consequently, it seems that a big part of many data centres’ marketing strategies is to raise awareness that the cloud is material by rendering data centres more visible. We are thus finding ourselves increasingly inundated with high-res images of data centres displaying how stable and secure they are. Data centres have in fact become something like technophilic spectacles, with websites and e-magazines constantly showcasing flashy images of these technologically-endowed spaces. The growing popularity of data centre photography – a seemingly emerging genre of photography concerned with photographing the furniture of data centres in ways that make it look exhilarating – fuels the fervour and demand for images of techno-spatial excess. Photos of science fictional datacentrescapes now saturate the industry and the internet, from Kubrickian stills of sterile, spaceship-like interiors full of reflective aisles of alienware server cabinets to titillating glamour shots of pre-action mist systems and, of course, the occasional suggestive close-up of a CRAC unit. One image in particular recurs in data centre advertising campaigns and has quickly become what people imagine when they think of a data centre: the image of an empty aisle flanked by futuristic-looking server cabinets bathed in the blue light of coruscating LEDs.
With increased visibility comes public awareness of the physical machinery that powers the cloud mirage. This new-found physicality brings with it the associations of decay, entropy and, most importantly, vulnerability that are endemic to all things physical. As counterintuitive as it may seem, vulnerability is what data centres need so that they may then sell themselves as the safest, most secure and resilient choice for clients.
The combination of the confusing cloud metaphor with the almost impenetrable, acronym-heavy jargon and the generally inward-looking orientation of the data centre sector effectively blackboxes data centres and cloud computing from industry outsiders. This means that the industry has ended up a very middle-aged-male-dominated industry with a severe lack of young people, despite the fact that it’s one of the fastest growing, most high-tech industries in the UK and expected to continue to sustain extraordinary growth rates as internet usage booms with the proliferation of Internet-of-Things technologies. This also makes data centres ripe territory for conspiracy theories and media interest, which is another reason why they increasingly render themselves hyper-visible through highly publicised marketing campaigns. You often get the feeling, however, that these visual odes to transparency are in actual fact deployed to obscure something else, like the environmental implications of cloud computing or the fact that your data is stored on some company’s hard drives in a building somewhere you’ll never be able to access.
Furthermore, while cloud computing makes it incredibly easy for businesses to get online and access IT resources that once only larger companies could afford, the less-talked-about inverse effect of this is that the cloud also makes it incredibly difficult for businesses to not use the cloud. Consider, for a moment, the importance of this. In a world of near-compulsory online presence, the widespread availability and accessibility of IT resources makes it more work for businesses to get by without using the cloud. The cloud not only has an incredibly normative presence but comes with a strange kind of (non-weather-related) pressure, a kind of enforced conformity to be online. It wouldn’t be surprising if we begin to see resistance to this, with businesses emerging whose USP is simply that they are not cloud-based or don’t have an online presence.
And the current mass exodus into the cloud has seemingly induced a kind of ‘moral panic’ about our increasing societal dependence upon digital technology and, by extension, the resilience, sustainability and security of digital society and the underlying computer ‘grid’ that supports it. Fear of a potential digital disaster in the cloud-based future is not only reflected by cultural artifacts such as TV shows about global blackouts and books about electromagnetic pulse (EMP), but is also present in a number of practices within the data centre industry, from routine Disaster Recovery plans to the construction of EMP-proof data centres underground for the long-term bunkering of data.
With the help of organisations like CNet Training I am thus studying the social and cultural dynamics of data-based digital ‘civilisation’ by analysing the growing importance of data infrastructures. Qualitative anthropological research is participatory in nature and, as such, relies upon the openness of the people, organisations and industries with whom the research is conducted. Every industry has its own vocabularies, culture, practices, structures and spheres of activity and CNet Training’s CDCMP® program acted as a vital window into the complexity of data centre lore. It provided me with a valuable insider’s way to learn the hardcore terms of data centre speak and also with the opportunity to meet people from all levels of the industry, ultimately equipping me with a detailed, in-depth overview of my field-site. Interdisciplinary and inter-industry sharing of information like this, where technical and academically-orientated perspectives and skills meet, helps not only to bridge fragmented education sectors, but to enable rewarding and enriching learning experiences. I would like to sincerely thank the CNet Training team for assisting my research.
Accessing life-saving information quickly could be vitally important for users of the Meningitis Research Foundation’s website.
Meningitis Research Foundation is a leading UK and international charity that brings together people and expertise to defeat meningitis and septicaemia wherever it exists. Since the charity began in 1989, it has funded vital research into prevention, diagnosis and treatment of meningitis. The charity also raises awareness of the disease and supports individuals and families affected.
Due to the fast-acting nature of meningitis and septicaemia, the public need to be able to access information extremely quickly, and at all times. Where previously print leaflets were used as the primary resource for communicating information about symptoms, the charity has seen a huge demand for digital resources that cater to all audiences.
Meningitis Research Foundation’s website is key to its digital presence. Given the severity of the disease and the importance of early diagnosis, site performance is vital, and instant access is a top priority. Previously, its website was not optimised for mobile - which accounts for 60-70% of user traffic - making information and advice difficult to view. On top of this, the look and design of the website was constrained by technical issues that meant graphics were taking too long to load, leading to poor response times.
The website also occasionally experiences peaks in traffic due to awareness campaigns or news reports about clusters of the disease. During these times, the number of people trying to access the website surges, and the charity needs increased IT power and capacity to handle this increase in traffic.
Following a recommendation of Hyve Managed Hosting by its design agency partner, Delete, Meningitis Research Foundation looked to the web hosting company to provide the infrastructure, performance and security it needed.
Hyve specialises in fully managed IT services for business. Its web hosting services provide reliable, secure and high-speed infrastructure, underpinned by the use of SSD drives and a ‘no single point of failure’ architecture. Its instantly adaptable cloud resources mean that organisations can scale in line with increases in website traffic, and its strong investment in hardware means the company offers a 99.99999% uptime guarantee.
A team of systems architects and highly trained engineers works with clients to tailor the best possible solution for their needs, with the main objective of building solid, stable relationships with clients – becoming an extension of their business.
Due to the humanitarian nature of the charity’s work, Hyve offered its services to Meningitis Research Foundation with a charity discount.
The organisation is now able to take advantage of a faster and more efficient website that provides greater response times than ever before. On average, the website is now experiencing load times of 684ms - far below the median of other sites, at 749ms - allowing visitors to reach the information they need without difficulty.
Site security has also increased, which is a vital asset when dealing with account and payment information connected to online donations.
Rob Dawson, Head of Communications at Meningitis Research Foundation said, “We’re extremely grateful to Hyve for hosting our new, improved website with a fantastic charity discount. This means a lot to the charity. Meningitis and septicaemia are deadly diseases that strike without warning. It’s crucial for people to be able to quickly access our lifesaving information, and we can already see that hosting the website with Hyve is helping with that.”