With more and more emphasis on automation in and around the data centre, and with all of us leading increasingly busy, seemingly 24x7 business lives – where we have to check and respond to emails no matter what time of day or night – it’s becoming increasingly easy for the communication process to be carried out remotely. For many aspects of work and private life, reliance on email and text does seem to make sense, and can save a great deal of time and money. After all, how easy is it to speak to someone on the phone these days? Maybe not impossible, but by the time you’ve finally tracked down the person to whom you need to speak (via the office switchboard, a re-routing to a mobile number, and then leaving a voicemail – perhaps repeating this exercise several times!), an email or text will have long since found them, and a conversation will have ensued and, like as not, been completed.
Automated customer service is something of a grey area in this remote communications debate – readers of my columns will know my own views on just how depressing and ineffectual are most of these phone lines, but in many cases the companies behind them simply do not care, nor does it pay them to care!
So, we are all becoming used to, if not accepting of, the idea of remote communications. However, there’s one area where I strongly believe in the importance of ‘protecting’ good old-fashioned face-to-face communications, and that’s when it comes to learning about new ideas and technologies.
Yes, reading magazines such as DCS, and trawling the web for information, are both valuable ways of gaining knowledge, but there’s still no complete substitute for sitting down with a vendor, or a fellow end user, and discussing something new, asking all the questions you want and, importantly, being able to read the face of the answerer, to help build – or not – your trust in them and their products and services. Of course, you can email folks, and keep asking questions and receiving answers this way, but I’m fairly certain that a short, face-to-face meeting will give you far more useful information than a long, drawn-out email exchange.
All of which brings me to the main point of this column – there are some fabulous opportunities to attend various exhibitions, conferences and the like over the next few weeks. If you are struggling to understand blockchain, Big Data, artificial intelligence and a whole host of other relatively new ideas and technologies heading your way, then leave your (hot) desk behind, and go along to one or more events to talk to real people about real solutions. I’m not sure that the virtual world will ever be able to replace the value of this face-to-face contact!
Asset management technologies and tools are available to initiate zombie hunts and eliminate zombie servers for good, yet too many organisations aren't doing enough to address the problem.
Zombie servers – both physical and virtual that use power but do nothing – are collectively costing organisations billions in CAPEX and OPEX. In light of this, The Green Grid suggests that organisations should develop a strong business case to gain the buy-in and resources needed to execute a zombie hunt.
Metrics such as The Green Grid’s Power Usage Effectiveness (PUE) provide an indication of a data centre’s efficiency in terms of the infrastructure upstream of the IT equipment, such as cooling, uninterruptible power supplies (UPS) and lighting. Great progress has been made in reducing the industry-average PUE through education and the adoption of best practices. However, if this more efficient infrastructure is powering and cooling servers that are highly inefficient due to low utilisation, all that good work addresses only a fraction of the problem, leaving large energy and cost inefficiencies untouched. A key culprit driving down server fleet utilisation is the zombie server.
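To make that relationship concrete, here is a minimal, hypothetical sketch of the PUE calculation and of why a good PUE can still mask waste when utilisation is low. All the figures are illustrative assumptions, not data from The Green Grid.

```python
# Illustrative sketch only: how PUE is derived, and why low server utilisation
# hides inefficiency that PUE alone cannot reveal. All numbers are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

facility_kw = 1200.0   # cooling + UPS + lighting + IT load (hypothetical)
it_kw = 1000.0         # power drawn by servers, storage, network (hypothetical)
print(f"PUE = {pue(facility_kw, it_kw):.2f}")   # 1.20 looks efficient...

# ...but if only 40% of that IT load is doing useful work, the facility power
# consumed per unit of useful compute is far higher than the PUE suggests.
useful_fraction = 0.40
effective_overhead = facility_kw / (it_kw * useful_fraction)
print(f"Facility kW per useful IT kW = {effective_overhead:.2f}")  # 3.00
```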
However, Roel Castelein, EMEA Marketing Chair for The Green Grid, argues that with the right tools, organisational understanding, methodology, and management processes, ‘zombie hunts’ can be executed quickly and with little risk.
He explains: “Promised costs savings and more productive and secure infrastructure are compelling motivations to slay zombie servers residing throughout data centres. However, this is not a new revelation. Earlier studies have exposed the prevalence of zombie servers not doing any useful work for months and years, and the significant cost associated with them. Yet organisations often complacently assume, usually without evidence, that there aren't many of these servers, and that their infrastructure is the epitome of productivity and efficiency.
“However, today’s more sophisticated asset management and resource optimisation solutions confirm the persistence and pervasiveness of zombies, which typically account for 20 to 30 per cent of the servers in a data centre. In fact, even virtualised environments have succumbed, with a similar proportion of idle virtual machines (VMs) offsetting the productivity and cost-reduction benefits of virtualisation. Fortunately, asset management technologies and tools have evolved to significantly facilitate not only the ‘zombie hunt’, but also the ongoing control and suppression of zombie populations.
“To suppress zombie servers for good we recommend that organisations implement the following approach. Firstly, identify the key stakeholders who will need to be engaged to provide resources and support. Secondly, review the tools available to facilitate a successful hunt. Thirdly, detail how to prepare a zombie detection plan and deal with them once they have been found. Lastly, lay the groundwork for a compute-asset management programme that will prevent zombies from appearing again.
“Ultimately, a successful hunt that’s coupled with a comprehensive and well-executed management programme will ensure that organisations are able to reap the significant and perpetual benefits of ridding their infrastructure of zombies for good,” Roel concludes.
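As an illustration of how the detection step of such a hunt might work in practice, the sketch below flags zombie candidates from utilisation metrics exported by a monitoring or asset management tool. The field names and thresholds are assumptions made for the example, not Green Grid guidance, and any candidate would still need review with the stakeholders identified in the first step before being decommissioned.

```python
# Hypothetical zombie-candidate detection pass over utilisation metrics.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ServerStats:
    hostname: str
    avg_cpu_pct: float        # average CPU utilisation over the window
    peak_cpu_pct: float       # peak CPU utilisation over the window
    network_mb_per_day: float
    days_observed: int

def is_zombie_candidate(s: ServerStats,
                        min_days: int = 90,
                        cpu_avg_threshold: float = 2.0,
                        cpu_peak_threshold: float = 10.0,
                        net_threshold_mb: float = 50.0) -> bool:
    """Flag servers that have drawn power for months while doing little work."""
    return (s.days_observed >= min_days
            and s.avg_cpu_pct < cpu_avg_threshold
            and s.peak_cpu_pct < cpu_peak_threshold
            and s.network_mb_per_day < net_threshold_mb)

fleet = [
    ServerStats("app-legacy-01", 0.4, 3.1, 12.0, 180),
    ServerStats("db-prod-02", 35.0, 88.0, 40_000.0, 180),
]
candidates = [s.hostname for s in fleet if is_zombie_candidate(s)]
print(candidates)  # ['app-legacy-01'] -> review with stakeholders before acting
```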
Projected growth for the data centre cooling market illustrates the key role that environmental control will play in the data centre of the future.
Fresh forecasts from Occams Business Research & Consulting (OBRC) predict that the global data centre cooling market will grow at a compound annual growth rate (CAGR) of 14.95 per cent between 2016 and 2023. According to Greg McCulloch, CEO of colocation provider Aegis Data, this prediction illustrates the core role that cooling will play in enabling data centres to support the demanding technologies of the future, and the need for data centres to have efficient and sophisticated cooling systems in place.
Overall global data centre CAGR is estimated at 11 per cent for 2016 to 2020. McCulloch commented on the significance of this growth: “It’s interesting to note that CAGR for data centre cooling is actually higher than that for overall data centre growth. Although to a certain extent this reflects the shorter period being examined for overall growth, it also speaks to the central role that cooling plays in the modern data centre. As data volumes become larger and requirements become more complex, so cooling plays an increasingly important role in enabling the data centre to handle these requirements stably and profitably.”
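For readers who want to see what those growth rates mean in practice, here is a small, illustrative calculation. The starting market value is a hypothetical index, while the 14.95 per cent and 11 per cent CAGRs and their time windows are the figures quoted above.

```python
# Illustrative arithmetic only: how a compound annual growth rate (CAGR)
# translates into market size over time. Starting value is a hypothetical index.
def project(start_value: float, cagr: float, years: int) -> float:
    """Compound a starting value forward at a constant annual growth rate."""
    return start_value * (1 + cagr) ** years

start = 100.0                                   # hypothetical 2016 market index
cooling_2023 = project(start, 0.1495, 7)        # 14.95% CAGR, 2016-2023
print(f"Cooling market index in 2023: {cooling_2023:.0f}")  # ~265, i.e. ~2.6x 2016

overall_2020 = project(start, 0.11, 4)          # 11% CAGR, 2016-2020
print(f"Overall DC index in 2020: {overall_2020:.0f}")      # ~152
```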
McCulloch went on to give examples of technologies that will ultimately depend on a data centre’s cooling capabilities: “Artificial Intelligence (AI), the Internet of Things (IoT), Virtual Reality (VR), and Augmented Reality (AR) – what all these technologies have in common is that they are predicted to grow in prominence over the next few years, and that they have complex and demanding data requirements. In order to thrive in this age of exponentially growing data volumes, organisations are increasingly turning to high performance computing (HPC) services. This technology is geared towards solving problems which involve huge data sets, and the market for it is expected to reach $33bn (£26bn) by 2022.
“Where does cooling come into this picture? For a data centre to comfortably accommodate HPC it must be able to accommodate its superior processing power, which often involves concentrating more computing power in higher density racks. This produces far more heat than the standard data centre configuration, meaning that efficiently cooling this space becomes far more important for supporting this technology.”
Huge data centre providers are going to great lengths to minimise their cooling costs. For example, Facebook located a data centre in near-arctic Luleå, northern Sweden, whilst Microsoft is experimenting with an underwater data centre. McCulloch concluded by challenging the notion that maximising cooling efficiency requires huge expenditure: “You don’t need to have the vast resources of a Facebook to access the kind of cooling efficiencies needed to support the ever-expanding data requirements of new technologies. There are various other approaches available, including liquid or conductive cooling.
“We echo the approach of the Nordic data centres with Direct Fresh Air Cooling (DFAC), which draws the ambient air surrounding the data centre into the building, filtering it for contaminants, and mixing it so that it is at the optimal temperature and humidity for operations. This enables us to deliver an ultra-low design Power Usage Effectiveness (PUE) of 1.13 and a contracted PUE of 1.2.”
The EU research and innovation programme, Horizon 2020, has awarded funding to a three-year innovation project in Northern Sweden. The aim is to prototype the most energy- and cost-efficient datacenter in the world.
A pan-European consortium consisting of Hungarian data center engineering specialist H1 Systems, UK based cooling provider EcoCooling, German research institute Fraunhofer IOSB, RISE SICS North and infrastructure developers Boden Business Agency, have joined forces aiming to design and validate a future proof concept.
The demand for datacenter capacity is constantly growing, while at the same time minimising environmental impact is crucial. The mission of the €3million project is to drive development as far as possible to meet and unify these two requirements. The solution offers a sustainable datacenter building which is energy and resource efficient throughout its lifecycle, is cheaper to build and operate, and brings jobs and knowledge to remote regions across large parts of Europe.
The 500kW prototype facility will be an experimental lab and demonstration site, as it will be tested by providers and end-users in a real operation environment with all aspects of its operations measured. Project stakeholders will validate feasibility, energy efficiency and usability of the prototype, as well as developing predictive models to aid in future planning and deployment of the concept.
Limited colocation space is available for end-users that are willing to take part in the research experience.
According to the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of infrastructure products (server, storage, and Ethernet switch) for cloud IT, including public and private cloud, grew 25.8% year over year in the second quarter of 2017 (2Q17), reaching $12.3 billion.
Public Cloud infrastructure revenue grew 34.1% year over year and now represents 33.5% of total worldwide IT infrastructure spending at $8.7 billion, up from a 27.0% share one year ago. Private Cloud revenue reached $3.7 billion for an annual increase of 9.9%. Total worldwide cloud IT infrastructure revenue has almost tripled in the last four years, while traditional (non-cloud) IT infrastructure revenue continues to decline and is down 3.8% from a year ago, although it still represents 52.4% of the worldwide share of overall IT revenue at $13.6 billion for the quarter.

Public Cloud now represents 70.2% of total cloud IT infrastructure revenue. The market with the highest growth in the public cloud infrastructure space was Enterprise Storage Systems, with revenue up 30.4% compared to the same quarter of the previous year and making up over a third of public cloud revenue. Server and Ethernet Switch public cloud IT infrastructure revenues were up 24.6% and 26.8% respectively. Private cloud infrastructure spending continues to be driven by the server market, which has accounted for nearly 60% of revenue in that space for the past 18 quarters.
"The strength in public cloud growth continued at an accelerated pace through the first half of 2017," said Kuba Stolarski, research director for Computing Platforms at IDC. "We have already reported that most of this growth is being driven by Amazon. However, it is important to remember that many of the other hyperscalers – Google, Facebook, Microsoft, Apple, Alibaba, Tencent, and Baidu – are preparing for their own expansions and Skylake/Purley refreshes of their infrastructure. At the same time, IDC is still seeing steady growth in the lower tiers of public cloud, and continued growth in private cloud on a worldwide scale. In combination, these infrastructure growth segments should more than offset the declines in traditional deployments for the remainder of 2017 and well into next year."
Except for Latin America revenue that declined 13.1% from a year ago, all other regions in the world experienced double-digit revenue growth in the Cloud IT Infrastructure space compared to last year. Asia/Pacific (excluding Japan) and Western Europe led growth with rates of 30.5% and 33.4%, respectively. Canada (25.1%), Middle East & Africa (28.4%) and the United States (24.8%) had annual growth in the mid-twenties, while Central and Eastern Europe (16.9%) and Japan (10.4%) growth was below 20% but still double digit.
Top 5 Companies, Worldwide Cloud IT Infrastructure Vendor Revenue, Q2 2017 (revenues in US$ millions; excludes double counting of storage and servers)

Vendor Group | 2Q17 Revenue (US$M) | 2Q17 Market Share | 2Q16 Revenue (US$M) | 2Q16 Market Share | 2Q17/2Q16 Revenue Growth
1. Dell Inc* | $1,456 | 11.8% | $1,534 | 15.7% | -5.0%
1. HPE/New H3C Group* ** | $1,365 | 11.1% | $1,437 | 14.7% | -5.0%
3. Cisco | $1,014 | 8.2% | $888 | 9.1% | 14.3%
4. Huawei* | $380 | 3.1% | $292 | 3.0% | 30.2%
4. NetApp* | $314 | 2.5% | $254 | 2.6% | 23.6%
4. Inspur* | $275 | 2.2% | $189 | 1.9% | 45.8%
ODM Direct | $5,439 | 44.1% | $3,321 | 33.9% | 63.8%
Others | $2,088 | 16.9% | $1,886 | 19.2% | 10.7%
Total | $12,332 | 100.0% | $9,800 | 100.0% | 25.8%

Source: IDC Quarterly Cloud IT Infrastructure Tracker, Q2 2017 (October 2017)
Research findings are designed to serve as guidance to colocation providers as they make future business decisions.
As the demand for colocation data centres continues to grow, the role of the colocation provider has never been more important, but the threats and challenges providers face continue to intensify. To understand the future dynamics of the colocation market, Schneider Electric partnered with 451 Research to survey 450 end-user decision makers of colocation services across the United States, Australia, Europe and China. Insights gathered from this survey are designed to serve as guidance to colocation providers as they make future business decisions.
Providers today are dealing with an ever-changing set of buyers, evolving customer demands and a growing list of emerging technologies such as the Internet of Things (IoT), next-generation edge computing and cloud computing. Each of these categories poses both an opportunity and a threat to the colocation segment, and the survey results speak to how each is viewed by colocation end users in terms of adoption and importance. With 62 percent of those surveyed saying they have moved IT applications out of colocation data centres and into the public cloud within the last two years, colocation providers must find ways to entice new and existing customers to consider colocation as a viable option for their businesses.
“Whether they deliver the services themselves or via partners, successful colo providers are increasingly broadening their service offerings,” said Rhonda Ascierto, Research Director, 451 Research. “Our research identified several value-added services that align with colo customers’ changing needs.”
It became clear throughout the research that there are many ways to gain and maintain competitive advantage amongst other hosting options. With 82 percent of respondents saying it was either very or somewhat important that cloud services are hosted in the same data centre as their colocated IT infrastructure, colocation providers can turn what may seem like a threat to the segment into an opportunity. Many additional statistics within the report offer opportunities for providers to set themselves apart from their competitors.
The conclusions of this analysis allow colocation providers to understand their target customer’s capacity and service requirements and make informed, prioritized choices around the latest technologies to provide visibility, transparency, redundancy, flexibility and cost reduction. The most successful colocation operators will ensure all of these customer questions and requirements are addressed to ensure they are in a position to take advantage of the 64 percent of customers who said colocation will play a role in their data centre strategy during the next two to three years.
Driven by rapid cloud growth, India’s colocation and managed hosting market to reach almost $2bn by 2019.
451 Research has published its first study of the growing datacenter market in India. It predicts that the country’s colocation and managed hosting market could reach almost $2bn in revenue by 2019, up from $1.3bn in 2016. The firm also forecasts continued solid growth for the cloud computing-as-a-service market, with a CAGR of 25% over the next four years as digital transformation takes hold in India and more businesses start outsourcing their IT infrastructure.
Comprising IaaS, PaaS and ISaaS, the cloud computing market in India will reach $1.02bn revenue by 2021 according to 451 Research Market Monitor. Hyperscale cloud and IT services providers looking to reach India’s vast potential market of customers are further driving demand for multi-tenant datacenter capacity in the country.
Analysts also reveal that 84% of India’s datacenter supply is concentrated in the country’s five largest markets: Mumbai, Chennai, New Delhi, Bangalore and Pune. The report, Multi-Tenant Datacenter Markets: Mumbai, New Delhi and Bangalore, finds that almost one-third of all built-out footprint in the country is located in Mumbai, due to the critical role the city plays in Asia-Pacific’s financial services industry, its large population and its multiple international subsea cable landings.
“Most datacenter investment is focused on these five cities, reflecting India’s booming economic growth in dense urban areas,” said Teddy Miller, Associate Analyst at 451 Research and the report’s author. “Providers in the country must overcome an array of obstacles, though, including lack of infrastructure, socioeconomic inequality, government bureaucracy, and hesitation on the part of local businesses to adopt colocation services.
“Despite these challenges, the opportunity for datacenter companies to thrive in India is enormous. Most homegrown providers already offer a range of managed services, and many have even launched their own public and private cloud platforms. The success of local companies such as Netmagic Solutions, NxtGen, Reliance Communications and Sify Technologies has even resulted in partnerships and acquisitions involving global players including ST Telemedia and NTT Communications, and we expect more on the horizon,” Miller added.
The report finds that providers in Mumbai are adding space to satisfy the requirements of global hyperscalers that are moving their operations closer to the city’s 21 million residents. 451 Research expects this demand from hyperscale cloud services providers to result in another year of solid growth in 2018, even with intensifying competition and rising land and power costs.
The colocation market in New Delhi, India’s capital, has benefited from the government’s e-commerce initiatives. The city had continually maintained a modest growth rate until earlier this year, when a new large-scale datacenter came online in the business-centric Noida area. 451 Research believes providers should keep a watchful eye on demand before making any further local investment.
With Bangalore’s emergence in recent years as a centrally located, relatively disaster-free IT services hub, its datacenter industry has flourished. Given this unique role the city plays in the national economy, 451 Research believes there will be a steady increase in the city’s utilization rates and datacenter builds going forward.
New data from Synergy Research Group shows that the flourishing Chinese market for data center and cloud remains dominated by local Chinese firms.
Chinese companies comprise the leading group of players in each of four key market segments – data center hardware/software, cloud computing services, colocation and CDN. In the services markets, Chinese operators in total account for well over 80% of revenues, while on the data center hardware and software side the market is a little more open to competition, though Chinese vendors still account for over half of all revenues. In aggregate, annual revenues for these markets are now running at well over $15 billion per year and are growing at over 16% annually.
As in all other regions, the market for data center hardware and software (servers, storage, networking, security, OS, virtualization software) is still much larger than the market for cloud services (IaaS, PaaS, hosted private cloud services), but the booming cloud market has a much higher growth rate. Meanwhile colocation providers, who house data center facilities for both enterprises and cloud providers, continue to grow strongly, and the CDN market too continues to evolve. In all four of these markets, China is either the second or third ranked country in the world in terms of quarterly revenues, but has much higher growth rates than the other leading countries. The market leaders in cloud and data center services are China Telecom, Alibaba, China Unicom, ChinaNetCenter, 21Vianet and Tencent; the market leaders in data center hardware and software are Huawei, Inspur and Lenovo.
“The difference between China and all other countries is striking,” said John Dinsdale, a Chief Analyst at Synergy Research Group. “The markets for cloud services and for data center infrastructure are truly global in nature and in all regions they are dominated by US-headquartered companies, but China stands out as the one huge exception. Going forward it is difficult to see US companies making too much headway in China, but there is no doubt that some of the Chinese companies will have an increasing impact in countries beyond China.”
DC facilities looking to tighten their belts should explore liquid cooling.
With power consumption at an all-time high, organisations need to move past their ‘data centre hydrophobia’ and embrace innovations in liquid cooling to usher in greater efficiencies within the DC environment. According to The Green Grid, doing so can reduce energy usage, improve IT equipment (ITE) performance, and decrease total cost of ownership (TCO). There are today 4.9 billion connected devices globally, according to Gartner, and Cisco has forecast that this figure will increase to 50 billion by 2020. This rising connectivity creates more data that needs to be stored, meaning that one of the greatest future challenges for the data centre industry will be improving energy efficiency and reducing operating costs.
As data centre facilities can do little to influence the rising rate of data consumption, Roel Castelein, EMEA Marketing Chair at the Green Grid, suggests organisations should embrace liquid cooling to improve overall efficiency.
He explains: “There’s no escaping it, data volumes are growing and doing so at an exponential rate. While data centres have successfully adapted to support growing data exchanges - by offering scalable power and cooling systems to allow servers to consistently perform and prevent downtime - efficiency remains a key choke point. Fortunately, many new liquid cooling technical developments and products have entered the market, with the potential to offer facilities significant efficiency gains.
“These new developments transcend the power and performance limitations inherent in air-cooled IT hardware, while reducing IT and cooling energy. In practice, they range from simply retrofitting existing major server brands with liquid-cooled heat sinks to implementing complete rack-based systems with integrated coolant distribution units (CDUs). These systems can interface with existing facility system cooling loops, or they can be independent and self-supporting by means of supplemental external heat rejection systems, such as standard fluid coolers.
“There are many reasons for both IT and facilities to explore leveraging the benefits of liquid cooling, not least improved IT performance and a reduction in IT and facility energy usage and space, which results in the overall reduction of total cost of ownership (TCO). And, with liquid cooling becoming more commercially viable, it won’t be long until liquid cooling is one of the most cost-effective options to cool ITE in data centres,” Roel concluded.
Data has become one of the most valuable assets for 21st century businesses. Organizations are under constant pressure to manage the massive amounts of data in their care. As a result, managing the health of data centers is paramount to ensuring the flexibility, safety and efficiency of a data-driven organization.
A continually developing and changing entity, today's complex data center requires regular health checks that empower data center managers to keep their finger on the pulse of their facilities and maintain business continuity. A preventative, rather than reactive, approach within the data center is paramount to avoiding outages and mitigating downtime. Data center managers can maintain the health of data center hardware by leveraging automated tools that conduct ongoing monitoring, analytics, diagnostics and remediation functions. With the average data center outage costing even the most sophisticated organizations upwards of three-quarters of a million dollars, implementing a data center health management strategy is mission critical in today's dynamic business environment.
A recent study carried out by Morar Consulting, on behalf of Intel and Siemens, amongst 200 data center managers in the UK and US reveals that nearly 1 in 10 businesses do not have a data center health management system in place, showing that many businesses are potentially exposed to outages costing businesses thousands of dollars per minute in downtime. This report summarizes the findings from a study carried out in Spring 2017, highlighting today's approaches and attitudes towards the implementation of a data center health management strategy.
Other key findings:
For businesses that do have health management systems in place, a third implemented them only once their backs were against the wall – forced into it after suffering an outage, witnessing an outage elsewhere or being pressured by the C-suite.
In an era of automation, 1 in 5 data center managers are still relying on manual processes to perform jobs that could be minimized or automated through innovative software solutions.
At opposite ends of the healthcare ecosystem, data is being harnessed to drive a revolution.
By Junaid Hussain, Product Marketing, UKCloud.
1) Data standards and interoperability are enabling the patient to become the customer
There are currently a number of technological imbalances in the sector that are being corrected as the industry is turned on its head:
Fundamental to all of this are data standards and interoperability, enabling a host of new devices and apps to work together to generate a wealth of new and enriched data. This rich data then enables and inspires a further wave of specialist solutions that can deliver new insights, reduce costs and improve outcomes.
2) Powerful secure platforms for pooling valuable datasets are providing clinicians, researchers and specialists solution providers with unprecedented capabilities
At the same time, the limiting factors that once restricted what was possible with data in healthcare are being overcome:
While there are a host of technologies in play here, cloud is the central enabler for them all. The IoT (Internet of Things) devices that are gathering data like never before are feeding it all into the cloud. It is in the cloud that the data is then securely stored and processed in order to mine it for insights and turn it into intelligence. It is in the cloud that collaboration between a vibrant ecosystem of specialist solution providers can then amplify and enrich this intelligence. And it is from the cloud that this intelligence is then accessed remotely by mobile devices, empowering clinicians, researchers and patients.
So, what does all of this mean for the datacentre industry?
It is evident that few NHS organisations, if any, will be building new datacentres of their own. Indeed, many will be closing such facilities as they look to move to the cloud. This provides an opportunity for colocation providers to host legacy workloads that cannot be moved to the cloud and for cloud service providers not only to host workloads for these organisations, but also for the ecosystem of health tech firms that will be providing IoT and cloud based services to them.
For organisations across health and social care, research and life sciences, and pharmaceuticals, one key concern is that patient-identifiable data remains secure and, wherever possible, never leaves the UK. Cloud service providers that can guarantee that this data will remain in a UK-sovereign data centre will have an advantage here. Such a guarantee, coupled with secure connectivity to HSCN, will provide their customers with secure access to their data, safe in the knowledge that it will always remain in the UK.
UKCloud provides a secure, UK-sovereign cloud platform that is connected to all the public sector networks (from PSN to HSCN) and works closely with an ecosystem of partners in health and public sector technology in the UK. If you want to become part of this ecosystem, get in touch.
By Steve Hone CEO, The DCA
On a regular basis we invite DCA members to submit case studies for the DCA Journal. These articles vary in subject matter, and many highlight the challenges that data centre operators are regularly presented with. We have found that such articles often provide details of, and raise awareness of, solutions, which helps the sector overcome similar challenges and move forward.
In this month’s DCA Journal we have a case study from Schneider Electric, detailing an upgrade project at Sheffield Hallam University. What’s interesting is the approach taken by those involved: the parties working on this project viewed it as a partnership and wanted to ensure that, going forward, they could all remain available and flexible in their response for the life of the facility.
Also included is a feature from Blygold, who apply a high-performance coating to external condensers. The case study relates to a project undertaken for Carillion Energy which dramatically increased the life and efficiency of the entire system, delivering an ROI inside six months, a remarkable success story for a simple solution that really does do exactly what it says on the tin!
We also read with great interest an article from ebm-papst. This case study focuses on the energy and cost savings achieved through the introduction of EC fans for UBS. Three DCA members – ebm-papst, Vertiv and CBRE – collaborated on this successful project, something we are thrilled to see, and we look to promote more collaboration between members going forward.
The DCA exists to help its members and those with an interest in the sector. Case studies allow readers to gain trusted insight into data centre projects, how they are implemented and the experience gained. Submissions are not limited to this month’s DCA Journal theme: we are always interested in receiving case studies from our members for circulation, and these are added to the central library for continual reference.
The DCA are working on plans for next year and have been asked to support and endorse an even larger number of Data Centre events in the year ahead. Our own event, Data Centre Transformation 2018 is now scheduled for Tuesday 3rd July 2018 at the University of Manchester.
The structure will be three tracks focusing on:
Data - Digital Business, Digital Skill and Security
Centre - Energy, Connectivity, Hybrid DC
Transformation - Open Compute, Automation, Cloud and the Connected World
So, hold the date in your diary, and plan to come along to take advantage of a wide range of educational workshops and to network with other DCA members and end users.
The remainder of the 2017 conference season is yet again jam-packed; events include:
We are coming to the end of another busy year, the DCA Journal for 2017 will finish with a ‘Review of 2017’. This will be published in the Winter edition of DCS Magazine – which covers December and January.
If you would like to submit an article, please contact Amanda McFarlane at amandam@datacentrealliance.org
The first port of call was to carry out a sample survey of the data centres in the Carillion estate to establish whether a deep clean and Blygold coating could improve the performance of the external air source condensers. In most cases the units were found to be regularly maintained, with the coils being washed down twice a year by outside contractors.
Despite this structured maintenance approach, the coils were still being compromised by a steady build-up of dirt and calcium deposits, resulting in restricted air throughput.
It was estimated that following a Blygold treatment the life of the plant would be significantly increased with energy saving in the region of 10% and with an ROI of less than 6 months.
Based on these initial findings, the client was keen to progress to a trial, and a site was selected which was felt to be indicative of the plant within the estate. On this occasion a York chiller was selected at the Hayes data centre facility.
A week prior to the Blygold treatment taking place, the chiller was deep cleaned and underwent an oil/refrigerant change, and a Climacheck data logger was used to monitor performance both before and after treatment.
After Treatment
The initial results following the Blygold treatment looked very promising, and the system continued to be remotely monitored, as the full results would only be seen over time under full operating conditions. It did not take long!
Just two days later we received a call from the Engineering Department, who were concerned that we had broken two of their condensers, as they were no longer running.
On investigation, it was found that they were still in fully operational condition – they were simply no longer needed. Prior to treatment there had been three big Denco condensers working 24x7x365 to maintain cooling; after the Blygold treatment, two of the three had reverted to standby mode, as only one was now needed.
Talking with the client, it soon became clear that the Engineering Department had never seen the units in standby mode before, which explained the understandable confusion.
‘As a result of the increased efficiency the engineers also had to visit each server room to INCREASE the temperatures by 5°, as the server rooms were now becoming too cold!’ Bob Molinew, VM
The net effect of the Blygold Coating on the York YCAJ76 Chiller was compiled in a report by an independent consultant – Dave Wright, MacWhirter Ltd, which highlighted the following key points:
‘The units now run considerably better having had the Blygold treatment. I am just surprised that the units are not Blygold treated from new!’ Greg Markham, Carillion
Based on these positive results, the client contracted Blygold to carry out the same process on all of its other nineteen client sites. As a result, the client has tripled the lifetime of the coils, reduced its energy bills by 15% and reduced wear and tear on the rest of the system, resulting in lower maintenance costs, increased uptime and fewer call-outs.
About Blygold UK Ltd
Blygold UK Ltd applies anti-corrosion coatings to air source heat exchangers such as chillers, AHUs and air-conditioning units. Blygold coatings can more than triple the life of coil blocks and reduce energy consumption by as much as 25%, particularly on units in corrosive environments such as airports, ports, industrial areas, coastal areas and city centres. The coatings can be applied to new units prior to installation or to existing units already installed on site.
The simplest way to reduce the energy consumption in buildings is to ensure that all Heating, Ventilation & Air Conditioning (HVAC) equipment is fitted with the highest efficiency EC fans. Those involved in the data centre industry are quickly realising the energy reduction potential in their buildings through upgrading HVAC equipment to innovative Electronically Commutated (EC) Fans. The motor and control technology in GreenTech EC fans from ebm-papst has enabled UBS to benefit from proven efficient upgrades to their data centre cooling systems.
ebm-papst undertook an initial site survey to review the types of unit being used and the potential solutions needed, along with an estimate of the payback period for any new kit. The units in place before the project were chilled-water units with an optional switch to lower performance, and used AC fan technology. To improve efficiency, ebm-papst recommended upgrading the equipment with EC fan technology. Based on the survey results, a trial was agreed on a single 10UC and 14UC CRAC unit to establish actual performance and energy savings. Data was logged before the upgrade and again once the trial units had been converted from AC to EC.
The post-upgrade trial data revealed that ebm-papst’s EC fan motors absorbed less power than their AC predecessors. Based on this information, UBS decided to proceed with the conversion of all units, installing 191 fans within 76 CRAC units. Three different unit models were installed: 39 x 14UC units, 21 x 10UC units and 16 x CCD900CW units.
Vertiv™ then worked with CBRE (who project-managed the upgrade), both to UBS’s satisfaction and without causing disruption to the live data environment. The main element of the upgrade project was the replacement of all fans with ebm-papst’s EC technology direct drive centrifugal fans, including the installation of EC fans within a floor void that required modification.
Since completion of the EC Technology upgrade project, the following savings have been made:
On average, UBS has seen a 48% energy saving across all units and a payback period of under two years. Other project benefits include a CO2 reduction of 5,229 tonnes. In addition to these savings, new control strategy software was put in place, which controls the EC fans based on supply air temperature; this delivered a further 14% reduction in energy usage. UBS’s data centres are now also benefitting from reduced noise levels, increased cooling capacity and extended fan and unit life.
Project Challenges
UBS operates a 130,000 sq ft data centre in west London, which is fundamental to the operation of the firm’s global banking systems. Within this site there were a number of Down Flow Units (DFUs) operating around the clock, making them crucial to sustaining the required operating conditions for the computer equipment in the data centre. The challenge was to improve the energy efficiency of the data centre, freeing up additional electrical capacity to use on IT resource. In addition, the task was to improve the airflow and the controllability of the cooling units in the data hall.
Project restrictions were extensive given the live data environment and the upgrade teams were only allowed access to three halls, with only one unit switched off at any one time. However the upgrade was delivered on time and to budget, without disruption. Work took place while the data centres were live; the project managers had to factor in working space and access around constraints from existing equipment and infrastructure.
ebm-papst fitted the existing DFUs in the data centre with high-efficiency direct-drive EC fans in the Computer Room Air Conditioning (CRAC) units. UBS’s objective for the project was to reduce drawn-down power by up to 30%, resulting in a 180kW reduction in load that could be reallocated to IT equipment.
The solution delivered a load reduction of 250kW. This resulted in an annual power saving of 48%, which allowed UBS to increase IT power consumption in addition to reducing CO2 emissions and energy costs. Nearly five years since the project took place, UBS has seen the following key metrics:
The energy savings from the EC fan replacement project were exactly as predicted and there was no need to perform any additional analysis due to monthly energy reports being dramatically lower. The EC fans have continued to deliver energy savings, through increased reliability, resulting in a reduced maintenance burden for CBRE and UBS.
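As a rough illustration of the arithmetic behind a result like this, the sketch below turns the 250kW load reduction quoted above into an annual saving and a simple payback figure. The electricity tariff and project cost are hypothetical placeholders, not figures from the UBS project.

```python
# Rough payback arithmetic for a fan upgrade, using the 250 kW load reduction
# quoted in the case study. Tariff and project cost are assumed placeholders.
load_reduction_kw = 250.0
hours_per_year = 24 * 365
tariff_gbp_per_kwh = 0.10          # assumed electricity tariff
project_cost_gbp = 400_000.0       # assumed upgrade cost

annual_saving_gbp = load_reduction_kw * hours_per_year * tariff_gbp_per_kwh
payback_years = project_cost_gbp / annual_saving_gbp
print(f"Annual saving: £{annual_saving_gbp:,.0f}")   # ~£219,000 with these assumptions
print(f"Simple payback: {payback_years:.1f} years")  # ~1.8 years with these assumptions
```

With these assumed inputs the simple payback lands in the same ballpark as the reported figure of under two years, but the real answer depends entirely on the actual tariff and project cost.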
Heating, Ventilation and Air Conditioning (HVAC) systems can be responsible for over half the energy consumed by data centres. In cases where energy is limited, improving the energy efficiency of HVAC equipment will result in an improved allocation of energy resource to IT equipment. Whilst many new data centre facilities built in the UK already incorporate EC fans in their HVAC systems, most older buildings continue to use inefficient equipment. Rather than spending capital on buying brand new equipment, often the more cost-effective option is to upgrade the fans in existing equipment to new, high efficiency EC fans.
With ebm-papst GreenTech EC fans, the impeller, motor and electronics form a compact unit that is far superior to conventional AC solutions. The UBS project is an excellent example of how upgrading from AC to EC technology can deliver energy savings and carbon (CO2) reductions.
Sheffield Hallam University is upgrading its main data centre using state-of-the art infrastructure equipment and management software from Schneider Electric to maximise the availability, reliability and efficiency of its IT services.
Working with Advanced Power Technology (APT), an Elite Partner to Schneider Electric and specialist in data centre design, build and maintenance, Sheffield Hallam University has undertaken work to deploy a state-of-the art highly virtualised data centre as part of a £30m building development at Charles Street in central Sheffield.
APT’s installation is based on Schneider Electric InfraStruxure integrated data centre physical infrastructure solution for power, cooling and racking. The new facility is managed using StruxureWare for Data Centers™ DCIM (Data Centre Infrastructure Management) software to maximise the efficiency of data centre operations.
With a pedigree dating back to the early 19th Century, Sheffield Hallam is now the sixth largest university in the United Kingdom with more than 31,000 students, around 20% of whom are post graduates, and over 4,500 staff. One of the UK's largest providers of tuition for health and social care career paths, and teacher training, it offers around 700 courses across a wide range of disciplines including Business and administrative studies, Biological sciences, and Engineering & Technology.
The university has a range of research centres and institutes as well as specialised research groups. Research grants and contracts provide an important source of income to support work at Sheffield Hallam; in May 2013 the university was awarded £6.9m from the HEFCE Catalyst Fund to create the National Centre of Excellence for Food Engineering, to be fully operational by 2017.
Sheffield Hallam University is situated on two campuses comprising 12 major buildings in the centre of the city of Sheffield. Its IT department operates two data centres, running as an active-active pair in which each location provides primary IT services as well as offering failover support to the other.
“Services provided by the IT department are typical of those required by any university,” says Robin Jeeps, Project Manager for Sheffield Hallam. “We host the website, the intranets and common applications such as Exchange, Outlook and Office, in addition to the student management systems, virtual learning environments, library systems and CRM (customer relationship management) systems.”
In terms of hardware, the university has adopted a virtualisation policy, running between 800 and 900 Virtual Machines on about 70 blade servers distributed across both data centres. It also has a small high-performance Beowulf compute cluster to support research projects but for the most part the main concerns for the IT department are high availability, reliability and cost.
As one of the existing data centres was located in a building whose lease was due to expire, the IT department took the opportunity presented to move the IT facility into the Charles Street development and upgrade its capabilities to improve efficiency and availability.
Following a contract tender, APT was selected to provide and install the cooling and power infrastructure equipment and the DCIM software necessary to manage it efficiently. Thanks to virtualisation, the number of physical servers the university needed to maintain services had dropped from 60 devices in the older data centre to 15 in the new Charles Street facility.
“We can now run on one chassis what we would have run in three racks before,” says Robin Jeeps. “That makes a big difference.”
Located at the new Charles Street data centre, the IT equipment racks are installed within two APC by Schneider Electric InfraStruxure systems with Hot Aisle Containment Systems (HACS) to ensure an efficient and effective cooling supply. Two 300kW free-cooling units supply chilled water to the HACS, and within the equipment racks APC InRow cooling units maintain optimum operating temperatures.
The HACS segregates the cool air supply from the hot exhaust air, preventing both streams from mixing and enabling more precise control of the cooling according to the IT load’s requirement. At the same time, locating the InRow cooling units next to the servers and storage equipment also reduces the cooling energy requirement by eliminating the need to move large volumes of air in a suspended floor space.
Crucial to maintaining efficient operation is the adoption of Schneider Electric’s StruxureWare software. This marks the first time that Sheffield Hallam has had an integrated management system for monitoring all aspects of its data centres’ infrastructure, according to Robin Jeeps.
“We had a variety of software packages in place before,” he says. “But StruxureWare for Data Centers provides us with a much more integrated solution. As long as something has an IP address, we can see it in StruxureWare and monitor how it is working. Previously we had to go through physical switches and hard-wired cables to monitor a particular piece of equipment.”
Jeeps says that the homogenous integrated management environment proposed by APT was crucial to its winning the contract to supply the data centre infrastructure. “We kept the IT side of the contract separate from the overall development of the building,” he says. “When we studied APT’s tender we liked the clear design they presented and the consistent management of our infrastructure that it made possible.”
The new management capabilities presented by StruxureWare will give Sheffield Hallam the flexibility to monitor its infrastructure for maximum efficiency and to manage how it makes its services available to students and researchers. Jeeps says that this will allow the university to tender for research contracts that it had hitherto been unable to pursue.
“We don’t currently provide cost charging or resource charging of IT services to our departments and I doubt that we ever will,” he says. “I don’t think that’s the best way for a university to operate. But if we were undertaking a research project, for example, which works on fixed funding and has to itemise how much the computing support would cost, we now have the tools to do that. We never had anything like that before.”
Another potential benefit offered by StruxureWare is the benchmarking of the overall system efficiency, especially with regard to how well the cooling infrastructure operates as a percentage of the overall power budget of the data centre. PUE (Power Usage Effectiveness) ratings are increasingly being used to compare one data centre’s efficiency with its peers.
“It’s a bit of a ‘chicken and egg’ situation,” says Jeeps. “Until we saw the capabilities of the software we didn’t know some of the monitoring, reporting and capacity planning that was now possible. Previously, we could only have done some rough calculations using a tool like Excel but the capabilities we have now will spur us on further to think about all sorts of things we can do.”
“Working with Advanced Power Technology and Schneider Electric has been an efficient and productive partnership from start to finish,” said Robin Jeeps. “The services they provided have been professional, thorough and at times very patient in terms of solving some of the challenges we’ve had to correct throughout the deployment stages. They remained focused on delivering an intricate solution that would meet our expectations and point of view as a customer, at all times.”
John Thompson from APT explains: “When we build a data centre for one of our clients we look on the relationship as a partnership. It is very important for us to understand the long-term requirements so that we can design for future possibilities in order to remain available and flexible in our response throughout the life of the facility. This is one of the reasons we chose to deploy a complete Schneider Electric ‘engineered as a system’ data centre solution for the Charles Street room. To begin, we built a virtual data centre within the StruxureWare for Data Centers™ software suite, so that the stakeholders could have a ‘3D walk round’ and provide feedback on the solution they were getting prior to delivery. Whilst this resulted in quite a few design revisions, it helped to ensure that APT delivered exactly what was expected.”
CASE STUDY: akquinet AG uses speedikon FM AG’s DCIM solution for the service and support management of its customers in its ISO-certified Data Centres
akquinet AG is a non-listed company. It operates four ISO-certified Data Centres with a total area of 4,150 m² at four sites in Hamburg, Norderstedt and Itzehoe. Competent employees and state-of-the-art technology ensure individually customised, high-tech DC outsourcing solutions which guarantee fail-safe and cost-efficient operation. From housing to the complete system service, all requirements are met to the highest security standard, around the clock. Extensive services are also offered in document and output management.
Since 2013, akquinet has operated two new Data Centres and has set new standards in their operation. It is not surprising that this innovative company has already won the ‘German Data Centre Prize 2013’ as well as the first international prize for the best new ‘Data Centre of the Year’. In addition to the up-to-date standards for data and access security and technical equipment, the energy efficiency of the new system areas is also remarkable – they achieve top ratings in Germany. Under the heading of ‘Green IT’, both Data Centres are operated with ecological as well as economic considerations in mind.
‘As we have a broad customer base in our Data Centres, it was clear from the beginning that we needed an efficient DCIM solution. It should meet the various requirements of all departments involved and it should also be operated economically. Every area, the servers, the network, the cabling or the facility management has its different focus and, logically, its individual preferences’, explains Thomas Tauer, Board Member of akquinet AG.
‘The search was relatively easy’ emphasizes Thomas Tauer. ‘We have looked at what is available on the market. We then contacted five to six DCIM manufacturers we knew, and analysed their range of functionality via webinars. We also verified how the solutions differ from each other. In this process the reputation in the market of the respective software houses was as important to us as the size of the manufacturers, their experience in project implementation, their international distribution, their company structure and history’.
‘The development road map for the future as well as the respective licence model, offering a good price-performance-ratio, also played a significant role’.
‘We found the required system partner in speedikon FM AG. We were convinced by their DCIM solution due to its intuitive, self-explanatory GUI and its high flexibility and agility, which significantly reduce training requirements. The licence model and the annual maintenance were also key issues, allowing us to make a one-time investment. Besides, speedikon FM AG not only supports the IT process side, but also various FM processes in real estate operations,’ Thomas Tauer explains.
akquinet’s DC services range from the basic housing of Data Centre space, through the supply of the entire IT infrastructure, up to complete outsourcing models. The entire service range is reflected in the DCIM system. Total power consumption is displayed via the DCIM, although not in real time; the energy data is transferred from the BMS system to check energy consumption at rack level. The rack is the smallest unit a customer can rent – there is no further differentiation.
Having a central DC management process is essential, as it needs to accommodate client hardware of any kind: for instance, when a device enters the Data Centre, how and to which media does it need to connect? The inventory documentation is crucial, in both the passive and the active environment. Questions such as ‘what is the load on my resources?’ (rack space, weight, power, cooling) and ‘how much power will an individual rack consume?’ also require detailed answers.
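To make those questions concrete, here is a minimal, hypothetical sketch of the kind of rack-capacity check a DCIM tool performs before new client hardware is placed. The names, fields and limits are illustrative assumptions, not details of akquinet’s DAMS installation.

```python
# Hypothetical sketch of a rack-capacity check: is there enough space, weight
# allowance, power and cooling headroom for a new device? All values illustrative.
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    free_u: int                  # free rack units
    weight_headroom_kg: float
    power_headroom_kw: float
    cooling_headroom_kw: float

@dataclass
class Device:
    model: str
    size_u: int
    weight_kg: float
    power_kw: float              # assume heat output roughly equals power draw

def fits(rack: Rack, dev: Device) -> bool:
    """Return True if the device can be placed without exceeding any rack limit."""
    return (dev.size_u <= rack.free_u
            and dev.weight_kg <= rack.weight_headroom_kg
            and dev.power_kw <= rack.power_headroom_kw
            and dev.power_kw <= rack.cooling_headroom_kw)

rack = Rack("HH1-A-07", free_u=6, weight_headroom_kg=120,
            power_headroom_kw=2.5, cooling_headroom_kw=3.0)
server = Device("2U storage node", size_u=2, weight_kg=28, power_kw=0.8)
print(fits(rack, server))  # True -> the placement request can be scheduled
```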
The various requirements of akquinet’s customers are supported very effectively. For example, if a customer wants a new server placed in a particular rack on a particular day, akquinet handles the request according to the agreement, including the necessary patching. The customer can follow this process in the DCIM, as they also have access to the system. The entire dialogue between akquinet and the customer is supported by the DCIM, allowing the order to be completed in a logical and efficient way.
In addition to its internal DCIM use, akquinet offers more than 100 customers a broad Data Centre service portfolio. These services include relocations within the Data Centre itself as well as across different locations. The moves and the subsequent settlement including the relocation planning are carried out in DAMS. This includes not just the cabling, but also the display of the whole network topology. The DCIM is also used as a planning tool for large scale moves.
With the multi-client capability of DAMS, it is possible to ensure that customers only have access to, and control over, their own area and are unable to see or make changes to other clients’ areas.
The customer has a clear visual presentation of his entire Data Centre and can see the racks with the largest power consumption and highest heat output via the 2D and 3D-visualisation. Hotspots can be identified immediately. It’s a lot clearer and easier to understand, once you have a 3D-visualisation. This also provides the customer with the necessary security.
The DCIM application brings significant benefits to large move projects. “It can happen with thousands of cables that one is not patched correctly. It’s possible to immediately carry out an error analysis and follow the exact cable path.” emphasizes Thomas Tauer, and continues, “There have also been concrete monetary benefits because now we can offer services that we could never offer before. We manage the infrastructure of our large customers in full scope – without DCIM this would never have been possible, and we also get rewarded, too.”
The new services have triggered new business ideas. The DCIM-tool generates more automation in key processes. This is reassuring for customers with extremely time-critical projects, which must be carried out during the weekend under considerable time pressure. Customers can feel reassured and comfortable due to the elevated level of security and trust akquinet provides.
Significant time and effort can be saved when tasks are drastically reduced and no one has to be directly on-site to check anything. Thanks to the central data management, the user has direct access to all data, and saving time means immediately saving money. ‘The easy interfacing of the DCIM to other tools – such as a CMDB and the BMS system on the technical side – is essential’, says Johannes Liebrecht, the system administrator responsible for the entire existing data pool.
Thomas Tauer added: ‘It has proven to be a clear success factor in Data Centre management operations when one person, acting as system administrator, has overall responsibility for the entire data pool; this increases both precision and professionalism’.
For the dual-site location, akquinet received the prestigious TÜV IT Level IV certificate, which to date has been achieved by only one other Data Centre in Germany. In this context it is interesting to mention that this Data Centre is also managed with DAMS.
speedikon FM AG is an innovative software company specialising in the digitisation of technical and business processes in buildings, industrial installations and Data Centres. In addition to products, solutions and technologies, speedikon FM AG has, since 1997, offered services that bring intelligence to asset-related business processes. speedikon FM AG employees have extensive experience in dealing with large data volumes, complex databases and integration with existing software and hardware solutions.
speedikon FM AG understands complex tasks and maps these into innovative IT solutions, improving working lives through the use of new and practical technologies. speedikon FM AG also applies the latest methods and procedures to advanced development projects for customers, with prototypes to support them.
Furthermore, speedikon FM AG has an intense focus on developing generic, easy-to-use user interfaces and on increasing the efficiency of processes. In spite of all the innovation and progress in a rapidly changing business, speedikon FM AG guarantees that investments in data and processes are protected, also in the long term.
For more information: www.speedikonfm.com
The SVC Awards celebrate achievements in Storage, Cloud and Digitalisation, rewarding outstanding products, projects and services as well as honouring companies and teams. The SVC Awards recognise the achievements of end-users, channel partners and vendors alike, and in the case of the end-user category there will also be an award made to the supplier who nominated the winning organisation.
Voting is free of charge and must be made online at www.svcawards.com
Voting remains open until 3 November, so there is still just time to make your vote count and express your opinion on the companies that you believe deserve recognition in the SVC arena.
The winners will be announced at a gala ceremony on 23 November at the Hilton London Paddington Hotel. Contact the team and join the Storage, Cloud and Digitalisation community as it celebrates the best in the business.
All voting takes place online and voting rules apply. Make sure you place your votes by 3 November, when voting closes. Visit: www.svcawards.com
Below is the full shortlist for the 2017 SVC Awards:
Storage Project of the Year
Cohesity supporting Colliers International
DataCore Software supporting Grundon Waste Management
Mavin Global supporting The Weetabix Food Company
Cloud / Infrastructure Project of the Year
Axess Systems supporting Nottingham Community Housing Association
Correlata Solutions supporting insurance company client
Navisite supporting Safeline
Hyper-convergence Project of the Year
HyperGrid supporting Tearfund
Pivot3 supporting Bone Consult
UK Managed Services Provider of the Year
EACS
EBC Group
Mirus IT Solutions
netConsult
Six Degrees Group
Storm Internet
Vendor Channel Program of the Year
NetApp
Pivot3
Veeam Software
International Managed Services Provider of the Year
Alert Logic
Claranet
Datapipe
Backup and Recovery / Archive Product of the Year
Acronis – Backup 12.5
Altaro Software – VM Backup
Arcserve - UDP
Databarracks – DraaS, BaaS, BCaaS solutions
Drobo – 5N2
NetApp – BaaS solution
Quest – Rapid Recovery
StorageCraft – Disaster Recovery Solution
Tarmin – GridBank
Cloud-specific Backup and Recovery / Archive Product of the Year
Acronis – Backup 12.5
CloudRanger – SaaS platform
Datto – Total Data Protection platform
StorageCraft – Cloud Services
Veeam Software - Backup & Replication v9.5
Storage Management Product of the Year
Open-E – JovianDSS
SUSE – Enterprise Storage 4
Tarmin – GridBank Data Management platform
Virtual Instruments – VirtualWisdom
Software Defined / Object Storage Product of the Year
Cloudian – HyperStore
DDN Storage – Web Object Scaler (WOS)
SUSE – Enterprise Storage 4
Software Defined Infrastructure Product of the Year
Anuta Networks – NCX 6.0
Cohesive Networks – VNS3
Runecast Solutions – Analyzer
Silver Peak – Unity EdgeConnect
SUSE – OpenStack Cloud 7
Hyper-convergence Solution of the Year
Pivot3 - Acuity Hyperconverged Software Platform
Scale Computing - HC3
Syneto - HYPERSeries 3000
Hyper-converged Backup and Recovery Product of the Year
Cohesity – DataProtect
ExaGrid - HCSS for Backup
Syneto - HYPERSeries 3000
PaaS Solution of the Year
CAST Highlight - CloudReady Index
Navicat – Premium
SnapLogic - Enterprise Integration Cloud
SaaS Solution of the Year
Adaptive Insights – Adaptive Suite
Impartner – PRM
IPC Systems - Unigy 360
Ixia - CloudLens Public
SaltDNA - Secure Enterprise Communications
x.news information technology gmbh – x.news
IT Security as a Service Solution of the Year
Alert Logic – Cloud Defender
Barracuda Networks - Essentials for Office 365
SaltDNA - Secure Enterprise Communications
Votiro - Content Disarm and Reconstruction technology
Cloud Management Product of the Year
CenturyLink - Cloud Application Manager
Geminaire - Resiliency Management Platform
Highlight - See Clearly - Business Performance Acceleration
HyperGrid – HyperCloud
Rubrik – CDM platform
SUSE - OpenStack Cloud 7
Zerto - Virtual Replication
Storage Company of the Year
Acronis
Altaro Software
DDN Storage
NetApp
Virtual Instruments
Cloud Company of the Year
Databarracks
Navisite
Six Degrees Group
Storm Internet
Hyper-convergence Company of the Year
Cohesity
Pivot3
Syneto
Storage Innovation of the Year
Acronis - Backup 12.5
Altaro Software - VM Backup for MSPs
DDN Storage - Infinite Memory Engine
Excelero – NVMesh
Nexsan – Unity
Cloud Innovation of the Year
CloudRanger – Server Management platform
IPC Systems - Unigy 360
SaltDNA - Secure Enterprise Communications
StaffConnect - Mobile App Platform
Zerto - ZVR 5.5
Hyper-convergence Innovation of the Year
Pivot3 - Acuity HCI Platform
Schneider Electric - Micro Data Centre Solutions
Syneto - HYPERSeries 3000
Digitalisation Innovation of the Year
Asperitas – Immersed Computing
IGEL - UD Pocket
Loom Systems - AI-powered log analysis platform
MapR – XD
For more information and to vote visit: www.svcawards.com
The continued rapid rise of data creation will provide immeasurable opportunities for companies to gain advantage in the market, but they must take proactive actions in order to be a step ahead of others. Business leaders need to increase their focus on the computing trends driving data growth over the next several years and revisit policies to assess the value of data throughout its lifecycle from creation, collection, utilization to its long-term management.
By Wayne M. Adams, SNIA Board of Directors and Chair of the SNIA Green Storage Initiative.
International Data Corporation (IDC)1 has predicted that data creation will continue to grow year over year to a total of 163 zettabytes (ZB) by 2025. IDC also states that the industry will transition from a trend of consumers being the largest creators of the world’s data to one where enterprises become the largest creator again, producing 60% of the world’s data in 2025. Computing trends such as IoT, machine learning and other types of Artificial Intelligence (AI) based data analysis and decision-making are behind this data growth.
Within IT, although the era of greenwashing has long passed, the need to be energy efficient and to further optimize limited resources remains an ongoing top priority, as ever greater computational resources and pools of data are required to drive a business. IT users continue to look for effective approaches to selecting technologies and products. For data center storage, there is a collection of standard energy efficiency metrics that enables IT decision makers to objectively compare a range of possible solutions and to manage a solution once deployed.
The SNIA Emerald™ Program provides a standardized way of reporting vendor-performed test results that characterize several aspects of storage system energy usage and efficiency. For procurement metrics, SNIA’s Emerald™ program and specification defines energy usage metrics for Block IO, and the recently released File IO metrics, which provide an energy usage profile of how a storage system will perform in configurations optimized for transaction performance, capacity or streaming.
The USA EPA Energy Star® Data Center Storage Program references the use of SNIA’s Emerald™ Energy Efficiency Measurement Specification metrics for Block IO storage system configurations. The EPA has maintained a public repository of vendor-tested products since 2014, in which many vendors are listed with a range of products.
The SNIA Emerald Block IO metrics are:
With the release of SNIA Emerald V3 specification in September 2017, the File IO metrics are based on the following application workloads:
Additional considerations during the procurement phase, when selecting a solution for your IT requirements, include a system’s Reliability, Availability and Serviceability (RAS) features, its capacity optimization features, and the type of physical media being selected. All of these factor into a system’s energy efficiency profile, so keep them in mind when contrasting solutions.
More RAS features typically mean additional controller functionality, or additional systems running extra logic, which can add to the system’s energy usage. Capacity optimization technologies enable a system to store a given data set in a smaller physical footprint, which can reduce energy usage. Storage media types, from Hard Disk Drives (HDD) to Solid State Drives (SSD), have different energy usage profiles; within HDD, there are rotational speeds and data placement considerations.
The SNIA Emerald specification recommends that these attributes be part of a system test report so the reader understands why there can be variations in metrics when looking at two systems configured with the same base hardware. RAS and capacity optimization technologies may each have a modest effect by themselves, but when combined there can be positive, additive improvements in reduced energy usage. Capacity optimization refers to a set of techniques which collectively reduce the amount of storage necessary to meet storage objectives. Reduced use of storage (or increased utilization of raw storage) will result in less energy usage for a given task or objective. Each of these techniques is known as a Capacity Optimizing Method (COM).
COMs are largely, though not completely, independent. They provide benefit in any combination, though their combined effect does not precisely equal the sum of their individual impacts. Nonetheless, since data sets vary greatly, a hybrid approach using as many techniques as possible is more likely to minimize the capacity requirements of any given data set, and therefore is also likely to achieve the best results over the universe of data sets. In addition, the space savings achievable through the different COMs are sufficiently close to one another that they are roughly equivalent in storage capacity impact.
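As a rough illustration of why combined COM savings are not simply additive, the short sketch below shows how hypothetical reduction ratios compound: each method shrinks only the data that remains after the previous one. The ratios used are illustrative assumptions, not measured figures.

```python
# A minimal sketch, assuming illustrative reduction ratios for three COMs.
def remaining_fraction(reduction_ratios: list[float]) -> float:
    """Each ratio is 'logical : physical', e.g. 2.0 means the data halves in size."""
    fraction = 1.0
    for ratio in reduction_ratios:
        fraction /= ratio          # each method acts on what the previous one left behind
    return fraction

dedupe, compression, thin_provisioning = 2.0, 1.5, 1.3   # hypothetical ratios
frac = remaining_fraction([dedupe, compression, thin_provisioning])
print(f"Physical capacity needed: {frac:.0%} of logical "
      f"(a {1/frac:.1f}:1 overall reduction, not {dedupe + compression + thin_provisioning:.1f}:1)")
```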
A common assumption is that certain space-consuming practices are essential to the storage of data at a data center class service level.
In the SNIA Emerald™ Energy Measurement Specification, tests for the presence of the following COMs are defined as:
SNIA and the Green Grid organization collaborated on a whitepaper titled, “The Green Grid Data Center Storage Productivity Metrics: Application of Storage System Productivity Operational Metrics”.
Data center storage productivity (DCsP) represents a set of operational-phase metrics that observe storage system productivity while the data center runs normal or “real-world” workloads. These metrics are conceptually the same as those defined for the acquisition phase, including aspects of capacity and performance. All are needed to completely characterize storage systems.
Although similar to the procurement metrics, these DCsP operational metrics differ in their measurement and usage aspects. The majority of “real-world” workloads represent actual data center information produced by one or more applications. Most IT equipment, among other requirements, is required to run 24/7. This availability aspect is particularly important for storage systems, as it makes real-time gathering of the operational metrics essential for good analysis.
The metrics to be calculated, based upon a storage system’s operational information collected, polled or stored in a data center information management (CIM) tool, and on the storage configuration for the applications it is supporting, are as follows:
For more information on SNIA resources and programs for data storage energy efficiency, including the SNIA Emerald program, IT planning resources, and education materials, please visit https://www.snia.org/energy or email emerald@snia.org.
1 IDC reference: http://www.seagate.com/www-content/our-story/trends/files/Seagate-WP-DataAge2025-March-2017.pdf
With many organisations focusing on digital transformation initiatives, enterprise networks are being used in new ways that are stretching their existing capabilities – sometimes to breaking point.
By Tony Judd, Managing Director, Enterprise EMEA at Verizon.
The emergence of technologies such as cloud applications and AI-driven robotics highlights the need for next-generation networks, which are more flexible and able to keep pace with changes in capacity and usage. Software Defined Networking (SDN) is key to this. SDN helps organisations satisfy these needs by enabling networks to be managed more dynamically and capacity to be scaled on an ad-hoc basis, ensuring they always meet business demands in a cost-effective way.
To help organisations understand how SDN can fit in with their network strategies, we’ve put together 10 reasons why businesses should be embracing SDN as part of their digital transformation:
Simplify network complexity to increase agility: SDN allows enterprises to keep up with the changing nature of their businesses – enabling them to be more responsive to users, customers and market opportunities. SDN technology can simplify the networking complexity resulting from the increased movement of apps across mobile and cloud platforms to help keep the organisation more agile.
Scale up or down automatically: A robust SDN strategy enables a network to be elastic and flexible enough to handle high traffic demands during the busiest times, while prioritising the most important applications and reducing congestion through intelligent automation.
Contain Costs: Intelligent network solutions can be deployed to provide connectivity to the cloud on an as-needed basis to keep costs under control, which allows organisations to pay for the services they use; extending the cloud consumption model to also cover the network connectivity layer.
Virtualise the Network: Fundamentally, SDN is about separating how data is controlled (data intelligence) from the flow of the data. In an SDN environment, the intelligence behind how to treat data is separated from the actual physical transmission of the data. This abstraction allows enterprises to benefit from network function virtualisation: moving many of the network jobs that currently exist as dedicated physical devices into software instead.
Open testbed: Using a SDN solution will allow IT teams to test new processes without impacting the network, thus making it easier to implement new solutions as part of a digital transformation strategy. Harness the Power of Predictive Analytics: Businesses can stay one step ahead by not only accessing information on how applications are performing in the network, but also using that data to explore what parameters could be changed to further improve an application’s performance.
Build More Sophisticated Solutions: Security threats are increasing, pushing enterprises to implement multiple enhanced solutions to counter them. SDN allows for more sophisticated solutions, coupled with an analytics layer, which also addresses the problem of vendor lock-in.
Protecting the Perimeter: Bake security into the network layer to help protect against potential breaches of business-critical data, with tools such as the Software Defined Perimeter (SDP), which takes a "non-discoverability" approach to enable secure access to devices and applications across a public cloud.
Enable Centralised Management: Centralised management allows for more agile operations and execution, which, in turn, helps control costs. Full end-to-end service automation also enables provisioning of network resources – reducing errors and improving service levels – while using APIs to deliver seamless, near real-time management of network functions and traffic (a minimal sketch of this kind of API-driven provisioning follows this list).
A Deeper, Richer Experience: SDN gives enterprises access to a more sophisticated network feature set and service experience that help improve operational efficiency by implementing functionality at the speed of software.
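As referenced above, the sketch below shows the kind of API call through which an SDN controller can rescale network resources programmatically. The controller URL, endpoint and payload fields are hypothetical, not any specific vendor's API; a real deployment would use the controller's documented interface.

```python
# A minimal sketch, assuming a hypothetical SDN controller with a REST interface.
import requests

CONTROLLER = "https://sdn-controller.example.internal/api"
TOKEN = "replace-with-api-token"

def set_bandwidth(site: str, service: str, mbps: int) -> dict:
    """Ask the controller to rescale the bandwidth of one virtual service at one site."""
    resp = requests.put(
        f"{CONTROLLER}/sites/{site}/services/{service}",
        json={"bandwidthMbps": mbps, "priority": "business-critical"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()

# e.g. scale up a branch's cloud breakout ahead of month-end reporting
# print(set_bandwidth("london-branch", "cloud-breakout", 500))
```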
Software defined networking is the key that businesses will require to unlock the full potential of their digital transformation initiatives. Without it, new solutions will fall flat as they won’t be backed by enterprise networks that can cope with rising demand and required flexibility - not to mention the increased cost when traditional networks are used. Being able to dynamically manage capacity alongside usage, and tweak prioritisation for key applications, will allow businesses to get the best bang for their buck and help them thrive in a more competitive environment. As such, network transformation must go hand-in-hand with digital transformation initiatives to ensure everything runs at its best.
Increasing power density within data centres is an ongoing challenge for facilities managers. And the continual cycle of increasing power requirements has too often translated into more cables and whips under the floor. But Starline Track Busway is self-contained, customisable and flexible. So you can avoid a jungle of wires, and enjoy power expansion in minutes, not weeks. To learn more about our maintenance-free, reliable systems, visit StarlinePower.com.
When I look at the current state of cybersecurity, it seems obvious that something is seriously broken. According to a recent Juniper Research report, global cybersecurity spend will reach nearly $135 billion in 2022, up from an estimated $93 billion this year.
By Fraser Kyne, EMEA CTO, Bromium.
You would hope that all this added investment would help tip the balance in favour of the good guys, but instead we just see cybercrime continue to soar – Juniper predicts data breaches will cost global business a total of $8 trillion in fines, lost business and remediation costs over the next five years. It’s not surprising that 60% of CIOs surveyed feel they are losing the battle against cybercrime.
We see a similar tale playing out when looking at the war on drugs. In America alone, taxpayers are paying more than $6,000 per second towards law enforcement, prevention and treatment, and resources dedicated to fighting drug trafficking. Despite this, the international comparators report released by the Home Office concluded there is no evidence, from any country, that the level of drug law enforcement has a discernible effect on the prevalence of drug use.
This all begs the question: are we doomed to fail?
In both our cyber and drug wars, the ‘user’ plays a big role as a last line of defence. The drug industry would fail to thrive if people obeyed the law and followed the prohibition line. The same is true of cybercrime. Despite efforts to prohibit and control user behaviour, to educate on the dangers, and to enforce better behaviour through punishment, people will still find a way of bypassing security – if they can gain a short-term benefit, the seemingly distant risk of causing a data breach (or even arrest in the case of drugs) will factor pretty low on their priority list. They will take the risk, as no one ever thinks it will be them that gets caught out. In fact, 85% of CIOs say that people are the weakest link in security, ignoring or forgetting the education, policies and procedures enterprises have put in place to prevent risky behaviour.
In trying to solve the ‘human’ problem, companies have deployed arsenals of security tools designed to detect bad guys and monitor users. But all of this security can become a burden on the user. People feel like they are being watched and hampered, in what is essentially an unwinnable fight. If you are connected to the internet you will always be at risk. For some roles you need to be able to open attachments from people you do not know – take HR for instance. The system is unworkable. Yet the whole discourse around cybercrime is very victim-blaming, calling users stupid and accusing them of putting the company at risk. Much like our drug war criminalises users, so do our cyber policies.
Ultimately, it is not fair to burden users with the responsibility of keeping cybercriminals at bay. One of the reasons ransomware, phishing and malware are so successful is that it is a fact of life that we all make mistakes. Once we accept that as a fact of life, we can start to look at different ways to approach the problem: an end to prohibition.
A sad reality of drug prohibition is that in the real world, we cannot put people in protective bubbles safe from harm. If they want to do something, they will. Yet in the online world we can, by using micro-virtualisation. By taking a virtualisation-based approach to security you can create safe zones where people can download malware and click on ransomware without fear that it will infect the PC or the rest of the network. The malware is contained. As long as they aren’t purposefully hurting anyone – i.e. they are not an insider threat trying to steal data – then you can leave employees to click and download to their hearts’ content. By creating disposable, single-use mini-virtual machines for each and every document, web page or email attachment a user interacts with, any malware or ransomware that resides there cannot spread: it is contained.
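To make the pattern concrete, here is a minimal sketch of "open each untrusted item in its own disposable environment". It is not Bromium's technology: micro-VMs are hardware-isolated, whereas this sketch merely uses a throwaway container, and it assumes Docker is installed and a hypothetical hardened image called "disposable-viewer" exists.

```python
# A minimal sketch, assuming Docker and a hypothetical "disposable-viewer" image.
import pathlib
import subprocess
import sys

def open_in_disposable_sandbox(path: str) -> int:
    """Open one untrusted file in a single-use sandbox that is destroyed afterwards."""
    file_path = pathlib.Path(path).resolve()
    return subprocess.call([
        "docker", "run", "--rm",             # --rm: the sandbox is deleted when it exits
        "--network", "none",                  # no network: anything malicious stays contained
        "-v", f"{file_path}:/untrusted/{file_path.name}:ro",  # read-only mount of the one file
        "disposable-viewer",                  # hypothetical image containing a viewer app
        f"/untrusted/{file_path.name}",
    ])

if __name__ == "__main__":
    sys.exit(open_in_disposable_sandbox(sys.argv[1]))
```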
Yet containment alone will not solve the big picture problem. While micro-virtualisation may prevent any threats from spreading from endpoints where it is used, there is a need to protect the wider IT society as well. This is where intelligence gathering comes to the forefront.
One of the reasons drug prohibition fails is that law enforcement simply can’t detect and monitor every single person who might be taking part in the distribution of drugs. There are just too many possibilities and leads to follow. Attempts to do so, with ‘stop and search’, are time-consuming and often futile, as dealers can just get an understanding of how the police operate and change tactics to get around them. This is why you need better intelligence.
Because micro-virtualisation allows malware and ransomware to execute, security teams get a lot more data on its behaviour. This intelligence can then be shared and used to strengthen defences for all. It can also help forensics teams, and the police, to start to track where threats are coming from so that we can start to bring the criminals to account.
By decriminalising user behaviour and creating safe bubbles, we can gather intelligence that nobody else can get to – in essence, we create an army of informant machines that can provide police with the vital pieces of the puzzle that can lead them up the food chain.
Only by accepting that our experiment with cyber-prohibition is failing and a new approach is needed can we start to change the tide. There’s a long road ahead, but it starts with accepting that technology needs to adapt to human behaviour and not the other way round. Businesses need to stop the insanity of repeating the same thing and expecting different results; end user prohibition today!
How organisations can replace manual processes, free the IT-team from mundane tasks and improve efficiency by building enterprise cloud environments and self-service portals
By Brandon Salmon, Director of Technical Strategy at Tintri
In the past, provisioning IT resources was a manual, time-consuming and tedious process. A developer or business manager requesting resources had to wait days or weeks while the requests got passed around between server and network admins or whatever department was responsible for signing off the additional resource.
However, this approach is coming under increasing pressure. Users are asking why obtaining additional resources from their IT department to support their latest project is not as easy as using their private Amazon, Dropbox or Apple cloud accounts.
Building self-service portals to provision resources can help organisations replace manual processes, free their busy IT teams from mundane and complex tasks and improve overall organisational efficiency. However, adding a self-service portal to an organisation’s IT is only possible if the underlying infrastructure can support it. Before investing in building a self-service portal, it’s vital that an organisation evaluates its underlying infrastructure.
At a consumer level, self-service has already become commonplace. Public cloud providers have added convenient portals to their offerings, making it possible to buy services in seconds. Organisations are looking to replicate this model with self-service portals that can make the internal provisioning of IT resources as easy as one, two, click.
One of the benefits of a self-service portal is that users who are not IT experts can provision IT services without any knowledge of managing complex IT infrastructure. That’s a good thing because, in a self-service environment, it’s hard to know what types of applications users will deploy. For instance, a user can simply provision a VM, set Quality of Service (QoS) for that VM, and specify a snapshot and replication schedule, all in a very intuitive fashion – without being an expert.
DevOps environments can also benefit hugely from self-service. Provisioning and refreshing development environments are common tasks that should be automated to streamline and accelerate the process. Automation, with the right tools, allows DevOps teams to refresh data in development and test environments within minutes. This accelerates continuous integration and the deployment of new software features. The right features for developers can be made available through a self-service portal to create and update development and test environments with current code and data whenever needed.
The platform should be built with a web-services approach that offers important core features and high levels of granularity and abstraction. Clean REST APIs are also needed for all enterprise cloud capabilities to ensure all functions can be automated no matter what self-service platform is used at the front-end.
Another important element of self-service is integration with higher-level tools and platforms. In addition to REST APIs, the right platform should integrate with OpenStack, vRealize Operations and Orchestration, Python SDK and PowerShell. This provides the fundamental building blocks for integration, satisfies any automation requirements and makes enterprise cloud easier to automate and consume in a wide variety of situations.
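To illustrate what such REST-driven self-service might look like from the portal's side, here is a minimal sketch of a provisioning call. The endpoint, field names and token handling are hypothetical rather than any particular vendor's API; they simply show how a portal form can map onto a single request that sets up a VM, its QoS and its data protection policy.

```python
# A minimal sketch, assuming a hypothetical enterprise cloud REST API.
import requests

API = "https://cloud.example.internal/api/v1"
TOKEN = "replace-with-session-token"

def provision_vm(name: str, vcpus: int, memory_gb: int, iops_limit: int,
                 snapshot_schedule: str, replicate_to: str) -> dict:
    """Create a VM with per-VM QoS and a data protection policy in one request."""
    payload = {
        "name": name,
        "vcpus": vcpus,
        "memoryGB": memory_gb,
        "qos": {"maxIOPS": iops_limit},           # per-VM Quality of Service
        "snapshotSchedule": snapshot_schedule,     # e.g. hourly snapshots
        "replication": {"target": replicate_to},   # replicate to a second site
    }
    resp = requests.post(f"{API}/vms", json=payload,
                         headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

# A portal user picks these values from a form; no storage or network expertise needed.
# print(provision_vm("dev-web-01", 4, 16, 5000, "hourly", "dr-site"))
```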
Traditional infrastructures are unfit to serve as a foundation for a self-service portal, as they simply cannot meet the requirements for granularity and abstraction. On the other hand, it is hard to find classic features, including space-efficient snapshots, replication and other value-added capabilities, in public cloud environments as they usually do not operate at the right level of granularity. To get the best of both worlds, organisations should build their own cloud, based on an enterprise cloud platform, that runs in their own data centre, allowing for both the control of private cloud and the agility of public cloud.
The adoption of an enterprise cloud platform is a solid solution to the problem. The platform fully satisfies the requirements for features, abstraction and granularity – allowing it to be easily incorporated into a self-service portal. This level of granularity enables self-service users to provision each VM with the exact services required. Administrators who have to guarantee performance for all business applications at all times, would have the ability to see performance problems and eliminate them.
The infrastructure an organisation chooses has a direct impact on the functionality of its self-service portal. A modern enterprise cloud platform built from scratch for virtualisation and cloud-native workloads overcomes the limitations of conventional infrastructures by providing the right set of core features operating at the right granularity. This makes it simple for IT teams to expose a rich set of IT services through self-service. Higher-level integrations simplify the automation process to address needs across a range of operating environments. With the right platform in place, organisations can even automate advanced IT functions and make them available through self-service in ways that traditional platforms can’t.
IIoT, enterprise and the data centre.
By Thomas Kovanic, Panduit.
The need to minimise latency in critical data paths, with systems that require real-time responses, has swung the computing pendulum back towards local systems, in what we are today calling edge computing. The further the IT industry distances us from our data, the slower our response to any action request will be. Therefore, keeping real-time actionable data as close as possible to the data source or the data user will optimise the system’s ability to respond effectively and, in some cases, avert a catastrophe.
Organisations’ capability to generate, use and store data has driven us to the zettabyte (ZB) era, and estimates state that global IP traffic will reach 2.3ZB by 2020. The reaction to this flood of data has been to migrate expensive hardware and software to centralised off-site data centres, housing our total corporate IT platform, applications and our valuable data. However, every step away from the data source introduces a gateway, switch or other delay in the ability to act in real time. Data centre operators, even those running company-owned data centres, frequently reorganise data locations to optimise their capacity loading, with the result that our data could be located anywhere across the globe, introducing higher latency into the communication link.
The Internet’s, and therefore cloud computing’s, key advantage is resilience: the global network’s capability to reroute data around a network problem and deliver it to its destination. It is that rerouting that can exacerbate the problem of latency and network jitter. Additionally, data packets may be damaged on their way to their destination, which introduces recompiling latency and possible errors at the end-point, requiring data to be resent and worsening the situation.
As a response, Edge Computing is akin to older style corporate computing, where the data centre is in-house, but this does not mean that it is a throw-back to a computing dark age. In fact, Gartner’s “Hype Cycle for Emerging Technologies, 2017” states that ‘edge computing is on the verge of becoming an innovation trigger’, and includes edge computing on the list of key platform-enabling technologies to track. Today’s edge computing returns the compute resource to close to the data source to provide real-time processing.
Of course, the vast majority of data consumed is pretty much impervious to the delay and overall latency that is a side effect of the data centre’s location. We have all observed delays while watching live TV, on a business Skype call, or in any number of other small incidents of data delay over the network, which are irritating, but not costly. However, there is a growing number of real-time requirements, and none more so than in the Industrial Internet of Things (IIoT) environment, where a company’s ability to respond in a defined time scale is imperative.
It is in this IIoT environment where Gartner’s ‘innovation triggers’ are going to have a massive impact. Segregating time-sensitive networks that have real-time response requirements within an edge network, away from the off-premises corporate networks, will safeguard an organisation’s capability to collect, process and respond to IIoT data in a much more efficient manner.
Networked manufacturing systems, such as industrial automation, provide increasingly high-value data to monitor control systems across the factory floor. Automated manufacturing operates with fewer human interactions and increased reliance on sensors monitoring systems and processes for faults or errors. Sensors across the system constantly relay data to the control system, which monitors the data stream and analyses it for anomalies. When errors are detected, the system responds with control signals, which may stop the manufacturing process or correct the fault within an approved timescale, and then monitors the change to ensure a satisfactory outcome. Halting the process may be an extreme case and involve human intervention before the process can be restarted, but may be required to prevent a costly manufacturing fault in an expensive end-product. Innovations may not necessarily remove the need for human interaction, but will change the skill sets of the technicians on the floor; fewer machine operators and more network specialists may be required to implement the new generation of factory automation.
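A minimal sketch of that local control loop is shown below: readings stream to an edge controller that checks each value against limits and reacts within a bounded time, without a round trip to a remote data centre. The sensor names, limits and poll interval are illustrative assumptions.

```python
# A minimal sketch of an edge-side monitoring loop, with illustrative sensors and limits.
import time

LIMITS = {"spindle_temp_c": (10.0, 85.0), "vibration_mm_s": (0.0, 7.1)}

def read_sensors() -> dict:
    """Placeholder for reading fieldbus / OPC UA tags on the factory floor."""
    return {"spindle_temp_c": 62.4, "vibration_mm_s": 3.2}

def control_loop(poll_interval_s: float = 0.1) -> None:
    while True:
        readings = read_sensors()
        for tag, value in readings.items():
            low, high = LIMITS[tag]
            if not (low <= value <= high):
                # React locally and immediately; escalation or stop logic would go here.
                print(f"ANOMALY {tag}={value}: issuing corrective control signal")
        time.sleep(poll_interval_s)   # bounded, local response time
```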
Essential in the development of edge computing capabilities is the choice of architecture and technologies used to implement the solution.
Diagram 1. Three Tier Architecture versus Spine and Leaf
Image 2. Fibre Optic Cable Offers Advantages
Effective edge computing requires a design that identifies the real-time response applications, including mission-critical systems and IIoT, and deploys a solution that provides the architecture for this, whilst ensuring interaction with the wider, non-time-critical data located in corporate data centres or in the cloud. As the volume of data from industrial automation grows, the capabilities of the network and the processing required to optimise the factory floor become more important. Real-time systems at the edge can generate real operational benefit, while allowing the organisation to use the cloud as a repository for continuously updated data to be processed, analysed, stored and used for trend analysis and other business requirements. Edge computing is not an end in itself; it offers real-time capability in a global environment where data holds an increasing amount of value. However, it is the ability to react effectively to that data that makes it priceless.
On May 25th, 2018, the EU’s General Data Protection Regulation (GDPR) comes into force. It will shake up the unacceptably slack standards that have been applied to the privacy of citizens’ data to date and is very likely to provide a decisive push towards the widespread adoption of strong encryption among businesses worldwide.
By Shaun Orpen, Commercial Director at Scentrics.
Unlike the directives that form most of the EU’s legal output, and which can be interpreted somewhat differently by member states, GDPR is a regulation and, in eight months, becomes a binding, uniform law in each nation. That’s part of the point: historically, different nations in the Union had different requirements, complicating the practice of doing business with the EU. Uniform regulation on data privacy makes it simpler to do business across the Union, both for its members and the rest of the world.
The UK government announced it would follow the legislation, Brexit notwithstanding, in the Queen’s speech in June. And even if the Government changes its mind, UK companies hoping to trade with EU member states, still comprising the world's largest single market, will need to comply and accept the terms of the regulation, whatever local laws might be introduced as an alternative.
GDPR is something of a victory for consumers. It aims to give EU citizens protection over the privacy of their personal data. The level and depth of privacy breaches from private companies in possession of people’s personal information has been deemed far too high, and fierce punishments will be visited upon organisations that continue to fail to exercise proper vigilance. The penalties for non-compliance or infringement can reach 4% of the previous year’s annual turnover or €20 million, whichever is greater.
It's high time that some action was taken to try to make people’s information safer on the Internet: online fraud is now the most common crime in the UK, with one in ten of the nation’s citizens having fallen victim already.
It's not quite the consumer revolution many might have hoped for, though. The larger internet giants - the likes of Facebook and Google - have already immunised themselves against the colossal fines introduced through the regulation by amending the terms of service we all blithely click through, to guard themselves against prosecution.
For many other businesses, though, particularly mid-sized and smaller organisations, these potentially fatal penalties create considerable cause for alarm.
GDPR affects both what it describes as the “controllers” and the “processors” of personal data. “Controllers” determine how and why data is processed, and largely comprise the brands and businesses most people will be used to doing business with on the Internet. “Processors” handle data on the controllers’ behalf and include the likes of fulfilment and logistics companies, all sorts of agencies, advertising networks and more – typically mid-sized companies or smaller.
The administrative load and the technical expertise required are a heavy burden for these smaller organisations.
Alongside guarding against the continuing threat from hackers and malware, the Regulation requires processors to notify the authorities of any breach, however small it may be, within 72 hours.
More types of digital information, such as IP addresses, now fall under the definition of personal data.
Consent must be given by parents for the retention of children’s data, for the specific purposes for which it will be used, and controllers must be able to prove that such consent has been obtained. This measure alone must be giving CIOs sleepless nights on a regular basis.
GDPR will also codify the regulations around people’s “right to be forgotten” on the Internet, and give them a right to know if their data has been compromised, a right to transfer their data, and the ability to access their data and understand how it is processed.
So, there are a substantial number of new legal responsibilities, and massive penalties should organisations be found lacking in their commitment to them. While data security does not seem to have been a key priority for many internet-based companies heretofore, from now on it must be.
At this point, encryption technology should seem like a very sensible piece of insurance for any company with personal information about its users and computers connected to the Internet. If the last 30 years of Internet history has shown us anything, then it's that technology will be hacked. Even the best experts in the field agree that "security is a journey", constantly moving forwards to evade the grasp of bad actors who are always only inches behind the most vigilant and best-resourced companies and organisations on the Net.
For many small and medium-sized companies, the continual upgrades, testing and on-site expertise required to meet secure standards will be a difficult and expensive path to follow. So, they will, very understandably, outsource, at the very least, the finer details of security to third-parties and take out insurance policies to limit their exposure. Both of those things cost money, but nothing in the region of the penalties that falling foul of the GDPR police would attract.
Modern, strong encryption - applied as a transparent service to files, communications and other identifiable data - provides an extra layer of safety. One that will be both less expensive and more powerful than either third-party warranties or insurance policies.
Your server might be hacked, your regular security measures foiled - these things happen every day. But if the files that fall into hackers' hands are also securely encrypted, then those criminals are left with nothing that could compromise your customers or - under GDPR - your entire business. You might think of encryption-as-a-service as being analogous to a bank safe in pre-electronic times. There's a lock on the front door of the bank, various other security measures, but for peace of mind, the money and other sensitive documents sit inside a safe overnight.
Would you trust your money with a bank that didn't have a safe? That's precisely the argument for businesses to adopt wholesale encryption immediately.
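To show how little ceremony file-at-rest encryption needs, here is a minimal sketch using the widely available Python cryptography package (Fernet, which combines AES with an HMAC). The filenames are illustrative, and storing the key next to the data, as here, is only for demonstration; in practice key management via a key service or HSM is the hard part.

```python
# A minimal sketch, assuming the "cryptography" package and an illustrative filename.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production this would live in a key service / HSM
cipher = Fernet(key)

with open("customer_records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())   # authenticated symmetric encryption

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# A stolen .enc file is useless without the key; with it, recovery is one call:
# plaintext = cipher.decrypt(ciphertext)
```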
The waiting game is over when it comes to the internet of things (IoT). With the growing amount of data flowing between these internet-connected devices, harnessing this data is a no-brainer for organisations who want a competitive edge.
By Matt Smith, CTO, Software AG.
Much like cloud computing, businesses are now considered to be behind the times if they do not have a strategy in place to make the most of IoT within their industry. Recent market research showed that although nearly 90% of execs feel the Industrial Internet of Things (IIoT) is critical to their companies’ success, only 16% have a comprehensive IIoT roadmap. With industry 4.0 on our doorstep, businesses cannot afford to be left behind. So, what are the secrets to planning and executing a winning IoT strategy?
The IoT presents an unmissable opportunity to harness data. For example, the ability to conduct detailed analysis and develop strategic plans based on the data gathered can really help to enhance the performance of the overall business.
One of the industries in which we have seen IoT take off is manufacturing. With IoT, manufacturers optimise their machine productivity and reliability to meet their business goals. Manufacturers can also use real-time insights gathered from their sensors to develop more effective and actionable solutions. Furthermore, this enables them to be more proactive when it comes to machine maintenance, reducing costs and saving on down-time.
Another example of an industry in which IoT has proven successful is the transport industry. Forrester predicts that fleet management in transportation will be the hottest area for IoT growth. With the IoT, third-party logistics providers can implement their ideas to improve their vehicle reliability and optimise their fleet’s utilisation. This is all thanks to connecting vehicles to the IoT ecosystem, providing actionable real-time insights. With connected vehicles, connected freight and connected fleet, businesses receive improved visibility of potentially hazardous goods, whilst also benefitting from the ability to improve predicted arrival and departure times.
These industry-specific applications of IoT are proof of the ways this technology can reduce costs whilst optimising productivity. They are also a great example of how the successful planning and implementation of this technology can allow industries dealing with large volumes of data to manage it safely and on a global scale.
It may sound obvious, but, it’s crucial to start by laying out a series of business goals and carefully defining objectives. Then, and only then, can businesses determine what they need to achieve in order to implement the appropriate IoT strategy.
With clear objectives in mind, businesses need to be putting these IoT strategies into place now. With industry 4.0 and IoT both taking off, it’s important for businesses to realise that the IoT demands a high level of investment, excellent specialists and the command of new technologies. Once implemented, in most IoT projects the loop is only closed by continuously monitoring and analysing all relevant Key Performance Indicators (KPIs). These KPIs can be tracked via a dashboard, making it simple to monitor progress, as well as to analyse the best ways to reduce costs and improve efficiency.
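As an example of one KPI that such a dashboard might track, the sketch below computes Overall Equipment Effectiveness (OEE) from machine counters that connected sensors could report each shift. The counter values are illustrative, not real plant data.

```python
# A minimal sketch of an IoT-fed KPI calculation, with illustrative counter values.
def oee(planned_minutes: float, downtime_minutes: float,
        ideal_cycle_s: float, total_units: int, good_units: int) -> float:
    """OEE = availability x performance x quality, from shift counters."""
    run_minutes = planned_minutes - downtime_minutes
    availability = run_minutes / planned_minutes
    performance = (ideal_cycle_s * total_units) / (run_minutes * 60)
    quality = good_units / total_units
    return availability * performance * quality

# e.g. an 8-hour shift with 30 min downtime, 2 s ideal cycle, 11,500 units, 11,250 good
print(f"OEE: {oee(480, 30, 2.0, 11_500, 11_250):.1%}")
```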
Although the term IoT has been floating around for a while now, to stay ahead and remain competitive, businesses must continuously evolve their strategic plans so that their IoT projects keep up with the demands and needs of each area.
This involves furthering big data use cases, and taking into account how machine learning and AI might integrate with IoT plans to keep driving the business forward. With smart machines being better than humans at accurately and consistently communicating data, businesses need to look at how this technology will help them remain up to date and competitive.
New advancements in technology now allow Data Scientists to complete their work in modern “AI” languages, and with a touch of a button this can be converted into live and operational automation systems that feed off billions of events and many terabytes of data – it’s game changing for business.
At present, we are witnessing industries such as manufacturing leading the way when it comes to implementing innovative use cases for IoT and, as such, they are reaping the benefits of driving their businesses forward successfully. However, as the technology becomes more widespread and is applied across more areas and sectors, we are bound to see a number of exciting use cases as brands turn to IoT to turn the next big idea into a connected reality.
The importance of labelling cables in data centres - avoiding downtime, best practices, top labelling tips.
By Dymo.
The flow of data has never been greater than it is today, with the effective and efficient management of data centres critical to meeting the demands of the digital economy. Well-designed and well-managed cabling systems enable efficient and quick exchanges of information, just as today’s data centre customers demand. For those operating and managing data centres, this is vital for keeping customers happy and ensuring any downtime – whether expected or not – is quickly resolved to become billable uptime.
According to Cisco’s Global Cloud Index (2015), global data IP traffic is set to grow at a rate of 25% by 2019 with data centre workloads within ‘traditional’ data centres set to more than double. The digitisation of business and the world at large has contributed to an exponential growth in data traffic, leading to increased pressure on the data centre. This has also increased pressure on the data centre managers, who are relied upon to keep modern businesses operating.
Order, or the lack of it, makes or breaks effective cable management. Without proper organisation, data centre managers risk being inefficient and ineffective when a sudden problem arises. In complex infrastructures, which house hundreds of miles of wires and cabling, ineffective cable management can mean long, stressful hours spent hunting for errant wires and connections when issues arise. Therefore, it is vital to ensure that correct cable labelling is not neglected.
Data centre best practice rests on the planning, installation and ongoing management of the cabling network. During the planning phase, it is important to establish the data centre’s cabling requirements, with a number of different factors carefully considered including whether to install copper or fibre cabling, bandwidth requirements and the best structure for the cabling system.
The approach for cabling best practice needs to be integrated into all aspects of how the data centre is run, and involve a variety of relevant and qualified personnel from IT and facilities management as well as individuals responsible for cloud or network operations. During the installation phase, it is crucial to avoid error and ensure a well-managed process, as well as considering any necessary installation standards.
A data centre system is only as good as its installation, management and operation, and in this day and age, unplanned data centre downtime is viewed as unacceptable. Data centre managers can seek to eliminate or at the very least reduce the causes of downtime through the implementation of simple but effective strategies. One of the most common causes of unplanned downtime is unexpected repair, maintenance or upgrade work which can be avoided if the correct steps are followed during installation.
According to research from Uptime Institute, a US-based advisory organisation focused on improving business-critical infrastructure, human and mechanical error is responsible for 88 per cent of power outages in businesses, so cable management plays a major role in preventing unplanned failures[1]. Having organised cables will ensure that there is minimal damage to the wires as well as the machines. A tangled mess becomes a huge obstacle to resolving problems, and can also interfere with the airflow into equipment, causing further problems down the line.
Without intelligent labelling, the chances of complications occurring are significantly heightened, plus recovery becomes a challenging task costing both time and money. A simple solution such as ready-made label templates can maintain organisation in a simple yet effective manner, transforming a complex system into an easy to navigate arrangement.
Label mismanagement in the data centre can cause major disruptions and unnecessary delays, resulting in loss of revenue and reduced productivity. Google estimated that its outage in 2013 caused global web traffic to plummet by 40% and cost the company $545,000 – and that was just five minutes. Without effective cable labelling, identifying and initiating corrective measures to solve a problem like that could take hours, resulting in vast, avoidable losses.
Any business could see the same effects of technical malfunctions. Without correct labelling, a complex system becomes incredibly challenging for engineers to navigate and trace the source of the issue, which also becomes time-consuming and expensive. A well-documented, clearly labelled system is easier to update and repair, which equally results in lower maintenance cost.
As well as the commercial implications of a poorly maintained system, there are also safety implications. Breaching regulations can result in the injury of staff or a financial impact such as a substantial penalty fine. In this respect, there is no room for informality and carelessness.
In the workplace it is essential for managers to abide by regulations and ensure that machinery and personnel are protected from potential faults and safety breaches. To ensure that employees are not harmed by faulty wires it is necessary to keep cables organised, as not only does this protect the wires from being damaged but it also prevents people from injuring themselves.
It is important to make sure that labelling cables in the workplace remains a simple task. Access to an efficient and reliable labelling system can help to ensure the safety of the workplace and prevent any costly health and safety repercussions. Ease is important when you consider the regulations and standards around labelling requirements and colour coding.
In order to do this, all that is needed is a simple labelling system such as the handheld DYMO XTL. This device increases efficiency as it allows you to upload data from an Excel spreadsheet and print a number of labels in one go without having to type each one individually – a feature certainly appreciated by anyone managing a rack of patch panels in a data centre. Additionally, hundreds of pre-loaded label templates are available to further simplify and accelerate the task, saving precious hours and minimising mistakes.
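As a rough illustration of that bulk workflow, the sketch below turns a spreadsheet export of patch-panel ports into a consistent list of label texts ready to import into a label printer. The column names and file names are illustrative assumptions, and this is not the DYMO import format itself.

```python
# A minimal sketch, assuming an illustrative CSV export of patch-panel connections.
import csv

with open("patch_panel.csv", newline="") as src, open("labels.txt", "w") as out:
    for row in csv.DictReader(src):   # assumed columns: rack, panel, port, far_end
        # One consistent label per cable end, e.g. "R12-PP03-P24 -> SW07/Gi1/0/24"
        out.write(f"{row['rack']}-{row['panel']}-P{row['port']} -> {row['far_end']}\n")
```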
The ramifications of cable disorder should not be underestimated, yet they can easily be avoided through intelligent cable management and effective cable labelling. Failing to properly label cables is inevitably costly to data centre operations, as it can contribute to interruptions and prolong downtime. Establishing a solution that cuts down on time lost addressing operational hindrances enables a focus on the important tasks with minimal distractions.
[1] Uptime Institute Journal, available at: https://journal.uptimeinstitute.com/data-center-outages-incidents-industry-transparency/
The internet of things (IoT) is often discussed in terms of broad applications. We talk about smart cities and connected homes as if the IoT should be a simple case of plug-and-play. But for businesses, this approach is too haphazard.
By Wojciech Martyniak, M2M / IoT Product Manager, Comarch .
Every vertical, and every business, has a different set of requirements and restrictions. They have unique problems that IoT, and the data collected from the devices, can help solve.
The global IoT market is predicted to reach €250bn by 2020, but it will only reach that figure if providers focus on smart implementation. Providers must address true business need, which means businesses must know what they need from IoT before starting the implementation process.
The true driver of IoT growth lies in the provider’s ability to meet the needs of industries and organisations. Standardised solutions often don’t meet the needs of specific verticals; in fact, these requirements are often so specialised that providers need to deliver additional services as part of the package.
For example, a telco provider getting involved in IoT delivery may provide connectivity, hardware and software, but some customers are also asking for help with advanced diagnostics, data analytics and a strong service level agreement.
Technology may be a strong selling point, but what customers really care about are results. Will implementing IoT devices and systems solve the issues that they have? This is what providers need to focus on, and it’s why the development of IoT depends on vertical, rather than horizontal, implementation.
When we look at what companies are currently doing with IoT it is very specific, and very diverse. Every industry from healthcare to manufacturing and even to conservation can benefit. But only if the business need is identified and this focus is applied to an implementation.
For businesses in the discrete manufacturing sector, unscheduled downtime, inefficiencies in the way the line works and poor communication can all cause huge productivity problems.
One of the ways that connected devices can help is via predictive maintenance. By using a variety of cameras and sensors, and implementing data analytics solutions, plant managers can accurately predict when a machine will need repair. Using information provided by connected devices, they can plan for downtime and carry out the repairs while limiting the impact on the plant’s production schedule.
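The sketch below gives one simple flavour of such a prediction: fit a straight-line trend to recent vibration readings from a machine sensor and estimate how many days remain before an alarm threshold is crossed, so the repair can be folded into planned downtime. The readings, threshold and linear-trend assumption are all illustrative; real predictive maintenance models are considerably more sophisticated.

```python
# A minimal sketch, assuming illustrative vibration readings and a linear wear trend.
def days_until_threshold(daily_readings_mm_s: list[float], threshold: float) -> float:
    """Extrapolate a linear trend; returns inf if the trend is flat or falling."""
    n = len(daily_readings_mm_s)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(daily_readings_mm_s) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_readings_mm_s))
             / sum((x - x_mean) ** 2 for x in xs))
    latest = daily_readings_mm_s[-1]
    if latest >= threshold:
        return 0.0                      # already over the limit: act now
    if slope <= 0:
        return float("inf")             # no upward trend: nothing to schedule
    return (threshold - latest) / slope

vibration = [2.1, 2.3, 2.2, 2.6, 2.8, 3.1, 3.3]   # mm/s RMS over the last week
print(f"Estimated days until 4.5 mm/s limit: {days_until_threshold(vibration, 4.5):.1f}")
```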
Cisco implemented an IoT solution for Stanley Black & Decker where the challenges being faced by the business included a lack of understanding about the effects of shift changes, real-time management of equipment effectiveness, and issues with productivity and efficiency in the operation of the plant.
After Cisco implemented its real-time location system, Wi-Fi and plant-wide Ethernet solutions, the business saw a variety of improvements, including a 24% increase in overall equipment effectiveness, an increase in throughput of approximately 10% and a reduction in inventory and material holding costs of 10%.
It saw these benefits because the business had identified potential issues prior to engaging Cisco. Stanley Black & Decker was then able to work with Cisco to use the connected devices where they would be most effective.
The shipping industry uses the internet of things for issues from ship tracking to controlling the temperature in cold cargo. It has specific issues around the often prohibitive cost of satellite communication and the difficulty of maintaining a connection long enough to send and receive data consistently.
Satellite and telecommunications company, Globecomm, manages the IoT system for a fleet of 300 container ships. By working closely with other providers, the firm uses sensors to monitor the temperature of refrigerated cargo. It also uses a system of satellites (covering 95% of the world’s shipping lanes) and its own platform, to transmit the data anywhere in the world, ensuring that the ships get minimal downtime and an efficient ROI. The system has the added benefit of providing mobile and data coverage for the on-board crew.
In this example, the shipping company knew the issue that needed to be resolved and Globecomm was able to make effective use of its solutions by tailoring its service to the fleet’s requirements.
Healthcare providers need connected devices to improve the efficiency and safety of patient care. The IoT is being used to provide real-time data on patients and to collate that data, so that the whole medical team has access to an up-to-date analysis of how a patient is doing.
For example, doctors who treat patients with chronic conditions sometimes find that people forget to take their medication. The internet of things can help with that too. Proteus Digital Health has created an ingestible sensor that sits in the pill and sends a signal to an external sensor when the pill has been ingested.
Conservation charities also have diverse needs. For example, some may want to track the migration patterns of animals in harsh conditions or remote regions, while others need to detect poachers.
For example, Vodafone is working with wildlife charity, the Sea Mammal Research Unit, to track the health and movement of harbour seals. Devices attached to the animals use cellular networks and low powered wide area connectivity to send data to the scientists.
Providers need to ensure their network infrastructure can cope with these demands and provide a good SLA in circumstances where any delay could result in at best a loss of data, and at worst harm to endangered wildlife.
While the needs of businesses are as diverse as the businesses themselves, many are looking to the IoT to help them cut costs and run their enterprises more efficiently. One study reported that 90% of U.S.-based facility managers questioned expected to see improvements in productivity, profitability and sustainability after implementing IoT.
Telecoms firm Telefonica implemented an IoT system for Spanish investment firm Inversis, using a suite of sensors throughout its 14 offices to help the business reduce energy costs.
Telefonica provided real-time reporting on energy use and compiled a detailed analysis of findings as well as its own recommendations on cost reduction. As a result, Inversis’ regional offices saw a 30% saving in energy costs, while the headquarters saved 25%.
A broad IoT deployment would be less effective in cases like this, because what the business really needs is insight. Providers need to get to know their clients, and what they need from the technology being deployed, if the IoT is to be truly effective.
Most service providers will be looking at the IoT to enhance the service they offer and help them distinguish themselves from their competitors.
In Germany, Telefonica is working with car leasing and rental company Sixt to make the lives of its customers easier. Drivers who lease a car and use it for both work and personal reasons have to go through their journeys and mileage to work out which trips should be taxed.
Sixt uses an app provided by Telefonica which reads engine data directly, recording the mileage. The driver then uses the app to mark the journey as a business or personal trip. The result is an average €150 a month saving for drivers, and a more competitive offering for Sixt.
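As a rough sketch of the arithmetic such an app performs once trips have been tagged, the illustrative Python below splits automatically recorded mileage into business and personal totals. The data model and figures are assumptions for the example, not details of the Telefonica/Sixt solution.

from dataclasses import dataclass

@dataclass
class Trip:
    km: float
    business: bool  # set by the driver when tagging the journey in the app

def mileage_summary(trips):
    """Total the recorded mileage by trip type for tax reporting."""
    business_km = sum(t.km for t in trips if t.business)
    personal_km = sum(t.km for t in trips if not t.business)
    return {"business_km": business_km, "personal_km": personal_km}

trips = [Trip(km=42.5, business=True),
         Trip(km=12.0, business=False),
         Trip(km=88.5, business=True)]
print(mileage_summary(trips))  # {'business_km': 131.0, 'personal_km': 12.0}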
The internet of things generated €11bn in revenue for operators in 2016, but operators will only profit from this if they focus on business need rather than simple implementation.
Successful implementations of IoT rely not just on the provider’s level of service, but on the ability of the business to identify its own issues. What is the problem that needs addressing? What does success look like? These are questions that must be answered before an IoT solution is sought.
Operators can create and install sensors and gateway devices; they can create and maintain a robust communications network and even provide software and apps that collate and analyse the data collected. But it is by focusing on specific verticals (and, on a micro level, individual businesses) and what they need that IoT providers can ensure their services are in demand.
How implementing a lifecycle approach to network security policy management speeds up application deployment, while strengthening security and compliance.
By Joanne Godfrey, director of communications at AlgoSec.
Sometimes, it seems that IT security teams just can’t win. They are judged on how they enable digital transformation initiatives and innovation, and are tasked with introducing new technologies to improve productivity and enable faster responses to market changes. But they’re also expected to safeguard the organization’s critical applications and data in an increasingly complex threat landscape – which means they’re often seen as an obstacle to innovation and business agility.
This is particularly true when it comes to provisioning business application connectivity. When an enterprise rolls out a new application or migrates an application to the cloud, it can take weeks or even months to ensure that all the servers, devices and network segments can communicate with each other, and at the same time prevent access to hackers and unauthorized users.
This is in part because the infrastructure of even a medium-sized enterprise can include hundreds of servers and network devices such as firewalls and routers – and the addition of virtualized and hybrid cloud architectures only serves to compound this complexity.
Then there is the never-ending cycle of application updates and changes. For every single change, network and security teams need to understand how it affects the information flows between the various firewalls and servers the application relies on, and change connectivity rules and security policies to ensure that only legitimate traffic is allowed, without creating security gaps or compliance violations.
What’s more, communication between business and technical stakeholders is often lacking. This isn’t too surprising: each group speaks its own language, with application teams talking about business-level requirements, while network and security teams need to understand traffic flows, IP addresses and protocols. Important information is siloed, with each group using its own tools for tracking business requirements, network topologies and security and compliance policies.
The result is that many enterprises take an ad-hoc approach to managing application connectivity: they move quickly to address the needs of high-profile applications or to resolve imminent threats, but have little time left over to maintain network maps, document security policies, or analyze the impact of rule changes on applications.
This haphazard approach contributes to delays in the release of applications, can cause outages and lost productivity, increases the risk of security breaches and acts as a brake on business agility.
However, it doesn’t have to be this way. IT security does not have to accept more business risk to satisfy the demand for speed. By managing application connectivity and network security policies through a structured lifecycle methodology, security teams can capture all the major activities that should be followed when managing change requests that affect application connectivity and security policies. This in turn enables applications to be deployed quickly and securely. Let’s look at each of the five stages involved in implementing this lifecycle approach.
1. Discover and visualize
The first stage involves creating an accurate, real-time map of application connectivity and the network topology across the entire organization, including on-premise, cloud and software-defined environments. Without this information, IT staff are essentially working blind, and will inevitably make mistakes and encounter problems down the line. Security policy management solutions automate the connectivity discovery, mapping, and documentation processes for applications across the thousands of devices on networks - a task which is enormously time-consuming and labour-intensive if done manually. In addition, the mapping process can help business and technical groups develop a shared understanding of application connectivity requirements.
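To illustrate what such a connectivity map boils down to in data terms, here is a hypothetical Python sketch that groups observed flows by application. The flow records and application tags are invented; real security policy management tools discover and maintain this automatically across thousands of devices.

from collections import defaultdict

# Toy flow records; in practice these come from traffic analysis, device
# configurations and interviews with application owners.
flows = [
    {"app": "payments",  "src": "web-tier",  "dst": "app-tier", "port": 8443},
    {"app": "payments",  "src": "app-tier",  "dst": "db-tier",  "port": 5432},
    {"app": "reporting", "src": "bi-server", "dst": "db-tier",  "port": 5432},
]

connectivity_map = defaultdict(set)
for flow in flows:
    connectivity_map[flow["app"]].add((flow["src"], flow["dst"], flow["port"]))

for app, edges in sorted(connectivity_map.items()):
    print(app, sorted(edges))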
2. Plan and assess
Once the business has a clear picture of its connectivity and network infrastructure, it can start to plan changes more effectively: ensuring that proposed changes will provide the required connectivity, while minimizing the risks of introducing vulnerabilities, causing application outages, or creating compliance violations.
Typically, this stage involves translating application connectivity requests into networking terminology, analyzing the network topology to determine if the changes are really needed, conducting a proactive impact analysis of proposed rule changes (particularly valuable with unpredictable cloud-based applications), performing a risk and compliance assessment, and assessing inputs from vulnerability scanners and SIEM solutions. Automating these activities as part of a structured lifecycle process keeps data up-to-date, saves time, and ensures that key steps are not omitted – helping to avoid configuration errors.
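A toy example of one of these checks – assuming a simplified, first-match ruleset – is sketched below in Python: it asks whether a requested flow is already permitted, in which case no rule change (and no new risk) is needed. The rule format and addresses are invented for illustration.

import ipaddress

# Hypothetical existing ruleset, evaluated first-match with a default deny.
rules = [
    {"src": "10.1.0.0/16", "dst": "10.2.5.0/24",  "port": 443, "action": "allow"},
    {"src": "0.0.0.0/0",   "dst": "10.2.9.10/32", "port": 22,  "action": "deny"},
]

def already_allowed(src_ip, dst_ip, port):
    """Return True if the requested flow is covered by an existing allow rule."""
    for rule in rules:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"])
                and port == rule["port"]):
            return rule["action"] == "allow"
    return False  # default deny: a new rule, and a review, would be required

print(already_allowed("10.1.4.7", "10.2.5.20", 443))  # True - no change needed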
3. Migrate and deploy
Deploying connectivity and security rules can be a labor-intensive and error-prone process. Security policy management solutions automate the critical tasks required, including designing rule changes intelligently, automatically migrating rules using intuitive workflows, and pushing policies to firewalls and other security devices – with zero-touch if no problems or exceptions are detected. Crucially, the solution can also validate that the intended changes have been implemented correctly. This last step is often neglected, creating the false impression that application connectivity has been provided, or that vulnerabilities have been removed, when in fact there are time bombs ticking in the network.
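The validation step can be pictured, in much-simplified form, as a comparison of the intended changes against the rules actually present in the device’s running configuration. The rule tuples below are invented for illustration.

# Hypothetical post-deployment validation: every intended rule must appear
# in the configuration read back from the device.
intended = {("10.1.0.0/16", "10.2.5.0/24", 443, "allow")}
deployed = {("10.1.0.0/16", "10.2.5.0/24", 443, "allow"),
            ("0.0.0.0/0", "10.2.9.10/32", 22, "deny")}

missing = intended - deployed
print("validated" if not missing else f"missing rules: {missing}")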
4. Maintain
Most firewalls accumulate thousands of rules which become outdated or obsolete over the years. Bloated rulesets not only add complexity to daily tasks such as change management, troubleshooting and auditing, they can also impact the performance of firewall appliances, resulting in decreased hardware lifespan and increased TCO.
Cleaning up and optimizing security policies on an ongoing basis can prevent these problems. This includes identifying and eliminating or consolidating redundant and conflicting rules; tightening overly permissive rules; reordering rules; and recertifying expired ones. A clean, well-documented set of security rules helps to prevent business application outages, compliance violations, and security gaps, and reduces management time and effort.
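As a hedged illustration of one clean-up task, the Python sketch below flags rules that are fully shadowed by an earlier rule with the same action, making them candidates for consolidation or removal. Real rulesets involve far richer matching criteria (zones, services, users, time windows) than this toy model.

import ipaddress

rules = [
    {"name": "allow-web",     "src": "10.0.0.0/8",  "dst": "10.2.0.0/16", "port": 443, "action": "allow"},
    {"name": "allow-web-old", "src": "10.1.0.0/16", "dst": "10.2.3.0/24", "port": 443, "action": "allow"},
]

def covers(outer, inner):
    """True if 'outer' matches everything 'inner' matches, with the same action."""
    return (ipaddress.ip_network(inner["src"]).subnet_of(ipaddress.ip_network(outer["src"]))
            and ipaddress.ip_network(inner["dst"]).subnet_of(ipaddress.ip_network(outer["dst"]))
            and inner["port"] == outer["port"]
            and inner["action"] == outer["action"])

shadowed = [later["name"]
            for i, later in enumerate(rules)
            for earlier in rules[:i]
            if covers(earlier, later)]
print(shadowed)  # ['allow-web-old']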
5. Decommission
Every business application eventually reaches the end of its life, but when applications are decommissioned, their security policies are often left in place, either through oversight or from fear that removing policies could negatively affect active business applications. These obsolete or redundant security policies increase the enterprise’s attack surface and add bloat to the firewall ruleset.
The lifecycle approach reduces these risks. It provides a structured and automated process for identifying and safely removing redundant rules as soon as applications are decommissioned, while verifying that their removal will not impact active applications or create compliance violations.
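In data terms the safety check can be pictured roughly as follows: a rule only becomes a removal candidate if none of the applications it is tagged to is still active. The tags and rule IDs below are invented for the example.

# Hypothetical decommissioning check over application-tagged firewall rules.
active_apps = {"payments", "reporting"}

rules = [
    {"id": "fw-101", "apps": {"legacy-crm"}},              # only a retired app
    {"id": "fw-102", "apps": {"legacy-crm", "payments"}},  # still serves payments
]

removable = [rule["id"] for rule in rules if not (rule["apps"] & active_apps)]
print(removable)  # ['fw-101'] - safe to remove once compliance checks pass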
Benefits of the lifecycle approach
The lifecycle approach enables organizations to structure their application connectivity management activities logically, which reduces risks by ensuring that the right activities are performed in the right order, consistently.
For example, failing to conduct an impact analysis of proposed firewall rule changes can lead to service outages when the new rules inadvertently block connections between components of an application. A lifecycle approach helps to ensure this does not happen.
By utilizing repeatable and automated processes, organizations can respond faster to changing business requirements while reducing errors. These structured, documented processes also make audit preparation and compliance work much easier.
Finally, it facilitates improved communication and collaboration across IT groups and senior management. It helps bring together diverse application delivery, network, security, and compliance teams to ensure that infrastructure and security changes truly serve the evolving needs of the business – enhancing agility without introducing risks.
Many corporate CIOs are seizing on recent networking innovations as an opportunity to modernise their networking infrastructure. Their global networks must serve as superior application delivery platforms for geographically dispersed organisations.
By Paul A Ruelas, Director Product Management, Masergy.
That job is only getting more important in an age of business-critical, bandwidth-hungry applications, such as voice and video collaboration technologies, and as more solutions migrate to the cloud, affecting the traditional model of backhauling app traffic from branches to data centers or hub sites before connecting to the Internet.
SD-WAN is the way forward for addressing these enterprise network issues, as it enhances application performance with dynamic routing and assures reliability by aggregating data over multiple WAN connections. With seamless connectivity to cloud services, SD-WAN can also provide worldwide networks with lower-latency connections to key cloud applications.
The bigger question is what type of SD-WAN solution will be appropriate for a particular business. Some companies are just looking to get their feet wet rather than convert their entire network to this type of environment right away. They want to experiment with SD-WAN in some locations or experience key SD-WAN advantages in a small footprint solution that balances price and performance. Others are ready to dive in with a more robust solution for their entire enterprise that offers very fine-grained application control and ultra-high performance.
It’s all about the applications that drive digital transformation in every enterprise. They’re the force for creating more revenue and improving customer experiences, as well as expanding market presence. Enterprise WANs need to be more dynamic to support that, and SD-WAN is the gateway to an adaptive network that optimises cost and availability as well as performance.
With SD-WAN, companies can leverage centralised policy management and orchestration to use the most appropriate type of networking connection to meet particular application performance requirements as well as efficiently cope with changes to those requirements.
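A much-simplified sketch of what policy-based path selection means is shown below: each application class carries latency and loss targets, and the cheapest WAN link that meets them is chosen. The link metrics, costs and classes are invented for illustration, not drawn from any specific SD-WAN product.

# Toy policy-based routing decision across three WAN links.
links = [
    {"name": "mpls",      "latency_ms": 35, "loss_pct": 0.1, "cost": 3},
    {"name": "broadband", "latency_ms": 60, "loss_pct": 0.8, "cost": 1},
    {"name": "lte",       "latency_ms": 90, "loss_pct": 1.5, "cost": 2},
]

policies = {
    "voice":  {"max_latency_ms": 50,  "max_loss_pct": 0.5},
    "backup": {"max_latency_ms": 200, "max_loss_pct": 2.0},
}

def select_link(app_class):
    """Pick the cheapest link whose measured metrics satisfy the class policy."""
    policy = policies[app_class]
    eligible = [l for l in links
                if l["latency_ms"] <= policy["max_latency_ms"]
                and l["loss_pct"] <= policy["max_loss_pct"]]
    return min(eligible, key=lambda l: l["cost"])["name"]

print(select_link("voice"))   # mpls
print(select_link("backup"))  # broadband

In a real deployment these metrics are measured continuously and the decision is re-evaluated as conditions change, which is what makes the network adaptive rather than static.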
Evidence that the market is moving forward comes from IDC, which reports that SD-WAN revenues are expected to exceed $6 billion in 2020. A Kable Global ICT Customer Insight study last year of more than 2,600 enterprises across all regions also showed that close to 60% planned to adopt SD-WAN within the next two years.
SD-WAN is a relatively new way of designing and deploying wide area networks to meet business and application performance requirements. Underpinned by a software-defined network (SDN) platform -- which can turn enterprise networks into modular, scalable assets that can be assembled and rearranged as business needs require -- SD-WAN technology enables companies to implement highly elastic and transport-agnostic connections to branch offices and remote locations using MPLS, commercial broadband Internet, Dedicated Internet Access and LTE services.
SD-WAN makes operating your network more efficient than ever before with real-time analytics, and when equipped with application-based routing, it provides IT organisations greater control over quality of service. Ideally, any SD-WAN incarnation will include as core features zero-touch provisioning, policy-based routing, centralised orchestration, enhanced security, WAN optimisation, active-active circuits and connectivity to cloud partners.
When implementing any new technology, there can be unanticipated execution risks and expenses. But there’s also a risk in waiting until a new technology is completely mature – that is, that you’ll be late to reap its benefits, which in this case include making maximum use of the bandwidth you’ve already invested in.
My suggestion is to implement a fully managed SD-WAN solution to mitigate the risks and expenses associated with deploying any new technology on your own. Use the expertise of a proven service provider whose offering doesn’t lock you into a single hardware or technology approach, which can prohibit interoperability with your business’ broader WAN platform. The right service provider will do the heavy lifting for you to ensure rapid deployment, simplified change management, and real-time analytics and service control. You’ll also be able to eliminate many of the high CAPEX costs associated with proprietary network appliances.
There may, of course, be some drawbacks to some SD-WAN solutions. As I mentioned above, one risk is choosing a solution that locks you into working with a particular vendor and rules out mixing and matching solutions. Seamless interoperability with other vendors’ circuits and WANs and the ability to connect any location over any transport method are fundamental functions of a truly flexible and risk-free SD-WAN.
It’s also important for buyers to realise that in most cases, SD-WAN isn’t a complete MPLS replacement in favor of best-effort broadband. Upload speeds may be constrained when using SD-WAN purely over broadband Internet.
Setup and maintenance of centralised policy management and zero-touch provisioning can be complicated, too. Hence the argument for leveraging managed SD-WAN services.
Managed SD-WAN services let customers reap the benefits of SD-WAN while reducing the risk of an “all-or-nothing” solution. They give companies the ability to securely route traffic across the public Internet, dedicated broadband, and private links for the greatest flexibility and deliver solid performance that ensures reliability through highly developed global backbone networks.
Keep in mind the features I mentioned above as key parts of strong SD-WAN solutions (zero-touch provisioning, policy-based routing, etc.). Embedded network analysis and reporting that delivers single-pane-of-glass visibility into the network’s entire health status is also a plus. If buyers are in the market for the highest performance SD-WAN environments, add to these capabilities forward error correction and application-based routing.
And again, I can’t repeat enough the value of looking to fully managed SD-WAN solutions to avoid the struggles of DIY implementations. This way, deployment is quick; time-consuming administrative functions are removed from IT staffers’ hands; and risks are minimised. According to Frost & Sullivan, nearly one-third of companies would prefer to buy a managed SD-WAN solution from an NSP or an MSP.
I’d also like to share this good news: at Masergy we conducted extensive, two-year testing of several commercially available SD-WAN solutions and found that while SD-WAN is still evolving, it is a valuable, performance-enhancing technology when applied appropriately.
One-size-fits-all content management solutions are falling short of modern business requirements. Enterprise content management (ECM) systems may feel like a relatively recent development, but the reality is that they lack the capabilities required in increasingly complex infrastructures. A fresh approach is clearly required.
By Brendan English, VP, LOB, Content Solutions, ASG Technologies.
While some enterprises have been creative in their approach to meeting these requirements, their quick-fix solutions have only resulted in further complication. This initiative has led businesses to take a tactical rather than strategic approach, and those businesses now find themselves struggling to manage amalgamations of overlapping platforms that combine to create unwieldy infrastructures.
A recent study commissioned by ASG revealed that 93 percent of IT enterprise architecture and operations decision makers involved in content management use multiple repositories to store content, with 31 percent using five or more systems to manage it. The study, conducted by Forrester Consulting and titled “Today’s Enterprise Content Demands a Modern Approach”, finds that enterprises would greatly benefit from standardisation on a single ECM solution.
It’s easy to understand why the Forrester Consulting study has come to the conclusion that a new approach is needed. The statistics tell us that beyond the buzz surrounding ‘the data explosion’, volumes of unstructured data in the form of business content such as office documents, presentations, spreadsheets and rich media are growing beyond expectations.
The majority of organisations (60 percent) are storing 100 terabytes (TB) or more of unstructured data; nearly one quarter (23 percent) have one petabyte (PB) or more of data. Further, these volumes show no sign of decreasing; 82 percent of those polled reported an increase in unstructured data stored over the past 24 months with 50 percent saying volumes have increased by more than 10 percent over this time.
With this content comes the ‘easy access versus security’ conundrum. The information is worthless unless employees (and often customers or partners too) can reach it quickly when they need it. Today, this means making content available via mobile as well as via traditional channels. Yet exchanging information with external parties and doing so remotely, outside a company’s firewall, can heighten regulatory and security concerns.
Underpinning these concerns is another major worry for IT professionals, with nearly three out of ten saying they are challenged by legacy systems. A quarter say their ability to move to the cloud is hampered by their existing infrastructure. Yet the next two years will be a transition period for enterprise content management deployment methods, with monolithic ECM suites giving way to cloud-based platforms. In the Forrester Consulting study, a decisive 90 percent of organisations said they will be using cloud-based content management systems, either as a primary or hybrid approach. Will the inflexibility of legacy systems prevent these enterprises from taking advantage?
In reality most major enterprises are unable to make a ‘rip and replace’ move to the cloud. Too much has been invested in customising and maintaining legacy systems over the years. However, all enterprises are facing stiff competition from start-up companies that are entirely cloud-based and therefore have greater agility to respond to market demands and changes.
For example, the US Insurance company Liberty Mutual decided that the greater flexibility and access offered by cloud delivery options made the change worthwhile. Choosing a next generation content services platform which accesses and manages content from anywhere on any device and also has the capacity to work in a hybrid environment, it was able to migrate while supporting both the old and the new. The platform became the archive content repository for formatted documents on Amazon Web Services (AWS).
Because it supports open source, the platform allows Liberty Mutual to bridge from the mainframe and multiple legacy repositories to AWS. It is deployed on AWS as a Platform-as-a-Service to capture process-driven content, and can also be deployed on premise or in hybrid environments.
This hybrid approach enables enterprises to manage content for regulatory compliance while still offering the benefits of the cloud, seamlessly integrating on-premise and cloud repositories for maximum flexibility, easier IT management and lower costs, achieving fast ROI on both legacy and IT systems.
Content is the lifeblood of any organisation. When it is difficult to access, decision-making suffers. So whatever the patchwork of systems used, enterprises are seeking tools that enable them to access, view and use content from a single app, regardless of location. In addition, as cloud content services become pervasive, enterprises must still cope with the massive content stores that remain on premise. Any technology that helps bridge the cloud/on premise gap will help businesses gain value from all their IT including their legacy systems.