It’s easy to gain the impression that the road to digitalisation requires a wholesale ‘trashing’ of traditional technologies and approaches to business, and the exclusive use of new, smart thinking. While there’s no doubt that the journey to becoming a digital business is far more easily achieved with a blank piece of paper than with a whole stack of legacy kit and mindsets, whatever the starting position, the foundation of any 21st century organisation remains a reliance on traditional IT components – compute, networks and storage.
And these physical cables and boxes require physical, controlled locations – data centres. The technologies on which this infrastructure is based are changing, and so is the way in which they are delivered to the customer, but there’s no getting away from the importance of the ‘plumbing’ – indeed, it has never been more important, in an age where any bottleneck can be the difference between failure and success.
How the old and the new are integrated and optimised is the challenge for end users as we enter 2017, and this issue of DCS gives a great flavour of this – with articles on power distribution/management, storage and the Cloud, and software-defined networking.
Happy New Year to one and all!
New research from Bain & Company and Red Hat indicates that many traditional companies are at an early stage in their digital journey; leaders stand out based on their use of advanced technologies, such as cloud computing, advanced analytics and modern app development.
Bain & Company and Red Hat have released the results of joint research aimed at determining how deeply enterprises are committed to digital transformation and the benefits these enterprises are seeing. The research report, For Traditional Enterprises, the Path to Digital and the Role of Containers, surveyed nearly 450 U.S. executives, IT leaders and IT personnel across industries and found that businesses that recognize the potential for digital disruption are looking to new digital technologies – such as cloud computing and modern app development – to increase agility and deliver new services to customers while reducing costs. Yet, strategies and investments in digital transformation are still in their earliest stages.
For those survey respondents that have invested in digital, the technology and business results are compelling. Bain and Red Hat’s research demonstrates that those using new technologies to digitally transform their business experienced measurable gains in agility, in the delivery of new services to customers and in cost reduction.
Despite the hype, however, even the most advanced traditional enterprises surveyed still score well below start-ups and emerging enterprises that have embraced new technologies from inception (digital natives). According to the survey results, nearly 80 percent of traditional enterprises score below 65 on a 100-point scale that assesses how these organizations believe they are aligning digital technologies to achieve business outcomes. Ultimately, the report reveals that the degree of progress among respondents moving towards digital transformation varies widely, driven in part by business contexts, actual IT needs and overall attitudes towards technology. It also uncovers some common themes in the research.
For example, Bain and Red Hat found that while 63 percent of enterprises surveyed have built processes to respond to digital trends, only 19 percent see rapid innovation as a priority. Additionally, for approximately 65 percent of survey respondents, the primary motivation driving their digital efforts is moves made by their competition, highlighting a highly reactive approach to digital transformation.
“We see many traditional enterprise companies still trailing on measures of digital maturity, even among the most advanced firms,” said Jeff Taylor, a partner in Bain’s Technology Practice and co-author of the report. “As we took a deeper look at these companies surveyed, we saw that those advancing further and faster on the adoption curve treat digital as more than just a singular function or activity. They view it as a comprehensive, cross-functional transformation, implementing changes across their leadership, organization, product development approach and processes, IT strategy and investments, data governance and tools, etc. Building sufficient digital capabilities that will generate the desired results is not a straightforward journey. Success requires a sustained multi-year focus.”
As companies progress on their digital adoption journey, they typically invest in increasingly more sophisticated capabilities in support of their technology and business goals. The use of modern application and deployment platforms represents the next wave of digital maturity and is proving to be key in helping companies address their legacy applications and infrastructure.
Containers are one of the most high-profile of these development platforms and a technology that is helping to drive digital transformation within the enterprise. Containers are self-contained environments that allow users to package and isolate applications with their entire runtime dependencies – all of the files necessary to run on clustered, scale-out infrastructure. These capabilities make containers portable across many different environments, including public and private clouds.
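As a minimal, hypothetical illustration of that portability, the sketch below packages a one-file application together with its runtime into an image and runs it. It assumes the Docker CLI is installed locally; the image name and file contents are invented for the example.

```python
# Illustrative sketch (not from the report): packaging a tiny app and its
# runtime as a container image, then running it. Assumes the Docker CLI is
# installed; the image name "demo-app" is hypothetical.
import pathlib
import subprocess

workdir = pathlib.Path("demo-app")
workdir.mkdir(exist_ok=True)

# The application itself: a single file.
(workdir / "app.py").write_text('print("hello from inside the container")\n')

# The Dockerfile declares the full runtime the app needs -- base OS layer,
# interpreter and files -- so the image runs identically on a laptop,
# a private cloud or a public cloud.
(workdir / "Dockerfile").write_text(
    "FROM python:3-slim\n"
    "COPY app.py /app.py\n"
    'CMD ["python", "/app.py"]\n'
)

subprocess.run(["docker", "build", "-t", "demo-app", str(workdir)], check=True)
subprocess.run(["docker", "run", "--rm", "demo-app"], check=True)
```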
In the course of their research, Bain and Red Hat found that enterprises using containers are beginning to realize material architectural benefits. Based on the analysis of survey respondents, initial container adopters could realize a 15 to 30 percent reduction in development times, and additional infrastructure flexibility gains driven by the portability benefits of containers. The results of the survey also found cost savings of five to 15 percent from hardware productivity are possible.
“Containers are on track to play an important role in not only application development for highly automated, scale-out platforms, but also in helping to drive the modern enterprise forward on their journey towards digital transformation. While most survey respondents – about 80 percent – are currently using containers primarily for web apps, we expect to see this start to shift to include more mission-critical workloads,” said Tim Yeaton, senior vice president, Infrastructure Business Group, Red Hat. “In that time, we anticipate a growing set of more ‘traditional’ workloads will be prioritized for containerization, including custom applications, databases and business intelligence.”
The proportion of respondents selectively or broadly adopting container technology – tied to these benefits and complementary trends – is expected to grow across all application lifecycle phases, from about 20 percent currently to upwards of 40 percent over the next three years.
While the opportunities created by these emerging technologies are compelling, the speed and path of adoption for containers is somewhat less apparent, according to the Bain and Red Hat report. The biggest hurdles standing in the way of widespread container use according to respondents are common among early stage technologies – lack of familiarity, talent gaps, hesitation to move from existing technology and immature ecosystems – and can often be overcome in time. Vendors are making progress to address more container-specific challenges, such as management tools, applicability across workloads, security and persistent storage, indicating decreasing barriers to adoption.
Gartner forecasts that IT spending in Europe, the Middle East and Africa (EMEA) will total $1.25 trillion in 2017, a 1.9 percent increase from 2016. IT spending across all the constituent regions of EMEA will be almost flat in 2016, increasing 0.6 percent year on year.
Across all the countries of EMEA, spending on devices is expected to decline and to be the main contributor to an overall slowdown in IT spending in 2016. The segments that will contribute most to overall IT spending growth in 2017 are software and IT services.
"Spending on digitalization is on the rise in EMEA, and we’ll witness some leading organizations modernize their core IT systems and increase spend on software and services in particular, as part of their digital transformation," said John-David Lovelock, research vice president at Gartner.
Spending on data center systems, particularly servers, would normally grow when spending on software increases, but with the growing adoption of software as a service (SaaS) and other cloud offerings, data center spending will be more muted than usual.
Table 1. EMEA IT Spending Forecast (Millions of Constant U.S. Dollars)
| Segment | 2016 Spending | 2016 Growth (%) | 2017 Spending | 2017 Growth (%) |
|---|---|---|---|---|
| Data Center Systems | 58,163 | 1.6 | 58,953 | 1.4 |
| Software | 112,244 | 6.0 | 119,842 | 6.8 |
| Devices | 206,238 | -3.7 | 204,660 | -0.8 |
| IT Services | 327,676 | 3.8 | 341,133 | 4.1 |
| Communications Services | 526,782 | -0.9 | 530,397 | 0.7 |
| Overall IT | 1,231,103 | 0.6 | 1,254,986 | 1.9 |
Source: Gartner (November 2016)
Brexit to Have Most Impact in Western Europe
The most pronounced Brexit effect is the decline in the pound sterling, which has caused prices within the U.K. for many IT products to increase in 2016.
"When the prices of goods increase, consumers and businesses shift their buying patterns, and the simple reaction is to buy less well-featured products," said Mr. Lovelock. "But now that there are viable cloud offerings in the U.K., organizations are also able to shift their spending into different areas — to buy computing as a service, instead of servers. These shifts will play out further in 2017. "Banks in France and Germany have increased their spending on software and consulting in 2016 to attract, or at least be ready for, any banking activity shifting away from London."
IT Spending in Western Europe Will Pick Up in 2017
In Western Europe, IT spending in constant U.S. dollars is forecast to total $803.5 billion in 2017, a 1.6 percent increase from 2016 (see Table 2). IT spending levels in Western Europe are likely to be essentially flat in 2016, with growth of 0.2 percent year over year.
Table 2. Western Europe IT Spending Forecast (Millions of Constant U.S. Dollars)
| Segment | 2016 Spending | 2016 Growth (%) | 2017 Spending | 2017 Growth (%) |
|---|---|---|---|---|
| Data Center Systems | 42,274 | 1.8 | 42,679 | 1.0 |
| Software | 91,952 | 5.3 | 97,641 | 6.2 |
| Devices | 113,701 | -5.5 | 110,899 | -2.5 |
| IT Services | 292,265 | 3.8 | 304,281 | 4.1 |
| Communications Services | 250,521 | -3.1 | 248,021 | -1.0 |
| Overall IT | 790,712 | 0.2 | 803,521 | 1.6 |
Source: Gartner (November 2016)
Devices and Communications Services Spend to Fall in Western Europe
While IT services will remain the largest segment in terms of spending, and will continue to grow in 2017 (by 4.1 percent), the devices and communications services segments are likely to decline for at least the next three years.
"Mobile phone adoption is nearly at a saturation point — almost all users who want a new phone already have one," said Mr. Lovelock. "The mobile phone market has therefore shifted to a replacement cycle, and mobile phone prices have reached a plateau. This compounds the problems of communications service providers, who are having to compete more directly on price, by providing more services for the same amount and offering discounts on existing plans."
Gartner forecasts the PC market in Western Europe to total 47.8 million units in 2016 and to decrease by 3 percent in 2017. Gartner expects PC prices in the U.K. to increase by less than 10 percent in 2017 as vendors look to "de-feature" their PCs to keep prices down and take advantage of the single-digit decline in PC component costs in 2016.
Smartphone sales in Western Europe will total 147 million units in 2016, a 1 percent increase from last year. Gartner projects smartphone sales to increase 4.7 percent in 2017. Gartner analysts also expect that in 2017 more players (mainly from China) will aggressively target the "affordable" premium range, as well as improve basic smartphones, helping overall smartphone replacement volumes in 2017.
In the third quarter of 2016, worldwide server revenue declined 5.8 percent year over year, and shipments declined 2.6 percent from the third quarter of 2015, according to Gartner. Among the top five vendors, only Cisco increased revenue in the third quarter, while Huawei and Inspur Electronics saw growth in shipments. HPE, Dell and Lenovo all experienced declines in both server revenue and shipments.
"The server market was impacted during the third quarter of 2016 by generally conservative spending plans globally. This was compounded by the ability of end users to leverage additional virtual machines on existing x86 servers (without new hardware) to meet their server application needs," said Jeffrey Hewitt, research vice president at Gartner. "Server providers will need to reinvigorate and improve their value propositions to help end users justify server hardware replacements and growth, if they hope to drive the market back into a positive state."
All regions showed a decline in shipments except Eastern Europe, which posted growth of 0.9 percent. In terms of revenue, all regions except for Japan experienced a decline. Japan grew by 1.3 percent.
x86 servers declined 2.3 percent in shipments and 1.6 percent in revenue in the third quarter of 2016. All vendors in the top five except for Cisco experienced a decline in revenue. In x86 server shipments, only Huawei and Inspur Electronics experienced growth.
Despite a decline of 11.8 percent, HPE continued to lead in the worldwide server market, based on revenue, with 25.5 percent market share. Dell declined 7.9 percent, but maintained the second spot in the market with 17.5 percent market share. Lenovo secured the third spot with 7.8 percent of the market. IBM dropped to the fifth position and experienced the largest decline among the top five vendors.
Table 1
Worldwide: Server Vendor Revenue Estimates, 3Q16 (U.S. Dollars)
| Company | 3Q16 Revenue | 3Q16 Market Share (%) | 3Q15 Revenue | 3Q15 Market Share (%) | 3Q15-3Q16 Growth (%) |
|---|---|---|---|---|---|
| HPE | 3,247,087,045 | 25.5 | 3,682,417,477 | 27.3 | -11.8 |
| Dell | 2,227,185,685 | 17.5 | 2,419,231,403 | 17.9 | -7.9 |
| Lenovo | 994,447,261 | 7.8 | 1,065,664,119 | 7.9 | -6.7 |
| Cisco | 929,440,000 | 7.3 | 885,600,000 | 6.6 | 5.0 |
| IBM | 889,723,595 | 7.0 | 1,327,761,197 | 9.8 | -33.0 |
| Others | 4,426,866,909 | 34.8 | 4,120,053,348 | 30.5 | 7.4 |
| Total | 12,714,750,495 | 100.0 | 13,500,727,543 | 100.0 | -5.8 |
Note: Beginning in the second quarter of 2016, HPE's server sales in China are reflected in H3C.
Source: Gartner (November 2016)
HPE secured the No. 1 position in server shipments in the third quarter of 2016, with 18.3 percent of the market (see Table 2). Despite a decline of 9.8 percent, Dell secured the second spot with 16.8 percent market share. Huawei and Inspur were the only vendors in the top five to increase server shipments in the third quarter of 2016.
Table 2
Worldwide: Server Vendor Shipment Estimates, 3Q16 (Units)
| Company | 3Q16 Shipments | 3Q16 Market Share (%) | 3Q15 Shipments | 3Q15 Market Share (%) | 3Q15-3Q16 Growth (%) |
|---|---|---|---|---|---|
| HPE | 493,268 | 18.3 | 613,101 | 22.2 | -19.5 |
| Dell | 452,383 | 16.8 | 501,262 | 18.1 | -9.8 |
| Lenovo | 228,097 | 8.5 | 242,005 | 8.8 | -5.7 |
| Huawei | 163,355 | 6.1 | 134,163 | 4.9 | 21.8 |
| Inspur Electronics | 119,943 | 4.5 | 99,417 | 3.6 | 20.6 |
| Others | 1,234,567 | 45.9 | 1,172,725 | 42.4 | 5.3 |
| Total | 2,691,613 | 100.0 | 2,762,672 | 100.0 | -2.6 |
Note: Beginning in the second quarter of 2016, HPE's server sales in China are reflected in H3C.
Source: Gartner (November 2016)
In recent years, datacentre operators and owners in temperate and even not so temperate climates have faced a dilemma: Is it safe to abandon capital-intensive mechanical cooling in favour of free cooling, which is much cheaper and greener to run? By Andy Lawrence, Research Vice President, Datacentre Technologies (DCT) & Eco-Efficient IT, and Daniel Bizo, Senior Analyst, Datacentre Technologies, 451 Research.
The decision is complex, involving failure rates, the cost of failures, the cost of equipment and the likely behaviour of the weather for the lifetime of the datacentre. Every case is different, and there is often no right answer – although the use of add-on incremental or auxiliary cooling (fractional cooling) is a compromise.
However, we have conducted detailed analysis of failure rate data, using publicly available figures from ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers), which has access to manufacturer failure rate data. ASHRAE’s research and advice on operating environments has heavily influenced equipment, datacentre design and operating procedures for decades. Although the data has been widely available for at least five years, it has not been widely analysed or considered in depth.
Contrary to industry practice, the data reveals that most datacentre operators can safely and economically allow temperatures to float freely across the full, wide allowable temperature envelopes (per ASHRAE guidance), with no increase in failures at most locations around the world. Yet it is precisely in these warmer periods that mechanical cooling equipment is currently most heavily used – the very periods on which its commercial justification rests.
The bottom line is that, under economic pressure, more and more datacentres will control temperatures more loosely and use less mechanical cooling, because the economics of datacentre operation demand it – even if such changes appear to move at tectonic speeds.
There are many reasons other than IT failures for controlling temperatures at tightly set levels, including airflow issues, growing energy consumption, contractual obligations or health and safety. However, it is very likely that many datacentre designers and operators – especially those concerned about cost or planning new builds – need to reconsider why and how they cool their datacentres, if they have not already done so.
There is a long-established scientific consensus that higher temperatures shorten the lifespan of both electrical and mechanical equipment. This understanding has encouraged datacentre operators to keep temperatures in the data hall low, and within a narrow range. In the past, when IT equipment was expensive and fewer in number, this careful approach made sense. However, when ASHRAE first published its failure rate guidance, the so-called x-factor – based on evidence of failures in the field – it showed that the effect of temperature had probably been overstated.
As part of our ongoing research of datacentre cooling technologies and strategies, we have re-analysed the x-factor to better understand the trade-offs that operators are facing. The question centres on cost and strategy within accepted ranges: Is it really cheaper to keep systems cooler to reduce failure rates, or to let temperatures rise within the tolerated range? These trade-offs between expensive mechanically cooled systems and using outside air for cooling are altered by the likelihood of failures at given temperatures.
Our analysis has found that, rather counterintuitively, if datacentre operators allow temperatures to float freely across the full range of allowable (class A1) temperature envelopes, in many cases, failures will actually fall when efficient indirect adiabatic and evaporative economizers are used.
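To make the weighting concrete, here is a toy calculation of the kind implied by the x-factor approach. The x-factor values and time-at-temperature fractions below are illustrative placeholders, not ASHRAE’s published figures, which should be consulted for any real design work.

```python
# A minimal sketch (not 451's actual model) of time-at-temperature weighting:
# the relative failure rate (x-factor) is averaged over the fraction of the
# year spent in each inlet-temperature band. All numbers are illustrative;
# hours_fraction would come from local climate data for a free-cooled site.

# (inlet temperature band, illustrative relative failure rate x-factor,
#  fraction of the year spent in that band)
bands = [
    ("15-20C", 0.85, 0.40),
    ("20-25C", 1.00, 0.35),  # 1.0 = baseline at a constant 20-25C setpoint
    ("25-30C", 1.15, 0.20),
    ("30-35C", 1.35, 0.05),
]

weighted_x = sum(x * frac for _, x, frac in bands)
print(f"Net annual x-factor with floating temperature: {weighted_x:.3f}")
# A result near (or below) 1.0 means letting temperature float is expected to
# cause no more failures than a tight, mechanically cooled setpoint -- the
# counterintuitive finding described above.
```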
A key implication of the finding is that, in the overwhelming majority of North American and European datacentre markets, operators can meet design criteria when adopting allowable envelopes by using highly efficient indirect adiabatic evaporative cooling systems. Even in hot and humid locations (but not tropical), datacentre operators can always meet design criteria with fractionally sized mechanical units for helping out in peak summer conditions.
As a very broad generalisation, the research suggests that most datacentres can be designed using fewer or no expensive mechanical chillers and compressors, and that those suppliers offering free air economiser systems should benefit from a trend toward greater use of free cooling. Many operators should also be able to operate with lower PUEs (i.e., using less energy on cooling).
The report considers some of the cooling systems and vendors that are benefiting from the trend toward air economisation, and will likely benefit further if the ASHRAE x-factor is fully taken into account by designers. But the report also points out that air economisers are still fairly novel, in spite of adoption by high-profile operators, so prices are currently still high.
By Steve Hone, CEO of the DCA, Data Centre Trade Association
Well, I think most would agree that this year has been far from boring. Whether on the world stage or within our own sector, there was certainly enough to keep everyone fully occupied. As for the Trade Association which exists to support you, there was definitely no time to even draw breath in what has been a roller coaster of a year. Here is a quick recap in a nutshell…
Wow, the conference season was really jam-packed this year! Events supported and promoted by the Trade Association included DCN, DCSS, DCW, DCE, DCI and DCD. The DCA also co-hosted additional knowledge-sharing events in Milan, Amsterdam, Riga, Frankfurt, Paris and Dublin this year. Finally, and by no means least, the DCA hosted its very own conference in Manchester. The Data Centre Transformation Conference is organised for the DCA by DCS in association with Angel Business Communications. 2016 saw a completely new workshop format for the event, refreshingly different from the more ‘traditional’ conference format we are all used to. It was a great success, very well received by delegates and sponsors alike; the same format is planned for 2017, which I am sure will build on what is clearly a winning formula.
The DCA has continued to grow its collaboration with its many media partners over the past 12 months. The DCA now regularly publishes original member content in five of the top seven channels, ensuring your thought leadership articles reach over 120,000 subscribers in print and electronic formats. New media initiatives are planned for 2017 to reach even more end users, so there is lots to look forward to.
Progress continues apace on the EURECA EU Commission project. By way of a reminder, this is a 30-month project designed to empower and assist the public sector to identify and procure environmentally sound, energy-efficient data centre products and services. This includes benchmarking tools, business case support and consultative advice. All DCA members, wherever you are located in Europe, can benefit from involvement. Ensuring your organisation is listed on the public sector supplier directory, which comes online early in the New Year, would be a great start. Support is also on hand to help you, as suppliers, strengthen the business case on existing public sector projects you are working on, which could help move them quickly from prospect to win status. For more information, visit www.dceureca.eu or contact a member of the DCA team.
One of the primary roles of any Trade Association is to support its members and the sector as a whole, and to ensure that members are empowered to effect positive change as a collective. The specialist interest steering groups organised by the DCA throughout the year are designed to help effect change and educate the market on everything from energy-efficient best practice to workforce development, ensuring the continued health and sustainability of our sector.
There continued to be plenty of activity on the Standards and Recommended Best Practice front in 2016. The DCA recognises that it is not always practical for you to attend workshops in person, so the DCA team has made sure that members’ interests have been fully represented at all major workshops and group meetings throughout the year, including the continued development of the CEN/CENELEC EN 50600 suite of European data centre standards, the EU Code of Conduct, the proposed ISO KPIs for PUE, WUE etc., and the EMAS initiative in conjunction with the JRC (Joint Research Centre).
The DCA’s Membership Manager, Kelly Edmond, set up another charity fundraising campaign in 2016 – the London to Brighton cycle ride in aid of Bike 4 Cancer. Colleagues from across the data centre industry joined her on the ride and managed to raise nearly £3,000 for the charity! Congratulations and thank you to all who took part, supported and donated throughout – it meant so much.
The DCA Trade Association continues to grow in size and influence and I am delighted to announce that we have two new members of staff joining the DCA team to support you. Kieran Howse will be joining the team as Member Engagement Executive and Amanda McFarlane will be joining us in January as Marketing/PR Executive. The new roles demonstrate the DCA’s continued commitment to its members and the sector as a whole and I hope you will all join me in welcoming them both to the team.
As of the 1st of January, I am also pleased to announce that the DCA’s main office, which has been based in Newbury for the past six years, will be relocating down the road to a new office in Marlborough. The new location will provide far more scalability as the DCA continues to grow, and that growth means the New Year will also see a new members’ portal – something I have been working on over the Christmas break, in between too many mince pies and leftover turkey.
That just leaves me with one final task to perform on behalf of the DCA team: to extend a massive thank you for all your support over the past year. I hope everyone has managed to recharge their batteries over the festive holidays, and here’s wishing you all a happy and very successful 2017.
Steve Weiner, Senior Lead Product Manager, Global Colocation Services, CenturyLink
2016 has been a time of great change in the data centre industry, with many business, regulatory and technical developments set to have a positive and significant effect on how organisations will operate in 2017 and beyond.
In the past year, there have been a number of mergers and acquisitions, including CenturyLink’s recent sale of its data centre and colocation business to BC Partners and Medina Capital (CenturyLink will retain a minority stake). Organisations may worry that this consolidation will drive out competition between data centre suppliers, but there are still a large number of smaller owner-operators in the market, and economies of scale, variety and availability will undoubtedly provide a spectrum of services and price points to suit different needs.
The key factor for many end-user organisations is connectivity – indeed, there have been many partnerships this year between cloud vendors, allowing direct connectivity between cloud estates. This development cements the future of cloud and enables organisations to ensure rapid access to data, technological agility and fast scalability.
The regulatory environment has been in flux in 2016, with Privacy Shield replacing Safe Harbor. This decision was not an easy or straightforward one. In April, the Article 29 Data Protection Working Party raised a number of concerns around factors such as the right to object and automated decisions made by the Privacy Shield infrastructure. There are still many discussions regarding whether Privacy Shield will be adequate – and the working party will be reviewing the progress of Privacy Shield annually, so we expect to see further coverage of the matter in April 2017.
Furthermore, we expect to see a significant amount of focus on the human side of the data centre next year. There has already been a large amount of progress in terms of establishing a baseline of skills and standards for data centre operational staff in the Uptime Institute’s M&O certification. However, the industry has room for a greater adoption of standards. 2017 will continue to see a focus on skills, competencies and how human staff manage the mission critical environments.
Within the technical arena, we anticipate that 2017 will see a number of environmental developments in the data centre. For example, although a lot of progress has been made, we expect further energy efficiency developments, such as in how temperature is handled in the data centre. There have already been considerable gains in terms of ambient air cooling, temperature ‘zoning’ and other developments, but there is still considerable room for growth. Similarly, there have been a number of interesting developments in data centre security which will no doubt have a positive impact on how data centres are run. For example, physical and virtual technologies are emerging that can help data centre operators reduce the amount of caging in the raised-floor environment while still allowing customers to meet compliance and regulatory demands.
Looking further ahead, it would be good to see further development of, and leverage from, the potential that DCIM (data centre infrastructure management) offers across the data centre environment, IT and automation. DCIM still has a lot of untapped potential. There is also significant scope for data centres to play a key role in the evolution of smart cities.
After all, there have already been a large number of exciting changes in how information within urban environments is gathered, helping to improve aspects such as traffic flow efficiency, pollution control and urban area usage. Data centres – particularly ‘edge’ facilities – are key in processing this information, and it is certain that smart cities and the internet of things will be significant drivers of space usage in data centres of the future.
In short, 2017 promises to be a very eventful year!
By Ashish Moondra, Senior Product Manager for Power, Electronics and Software at Chatsworth Products (CPI).
The global PDU market is forecast to grow 5.6 percent year over year, reaching almost $1 billion by the end of 2016, according to a recent power distribution unit (PDU) study by research and analysis firm IHS.
Higher virtualization, equipment consolidation (higher rack power density) and more efficient computing mean that increasing demands are being placed on the PDU industry. Today’s cabinets typically draw 9-15 kW (with some even exceeding this), whereas only a few years ago an average cabinet supported 3-4 kW – a power load that would now be considered low-density.
The modern data centre requires intelligent products that not only meet the minimum market requirements but exceed expectations in reliability, capability and quality. New products with more intelligent features such as remote control and switching, enterprise reporting and monitoring capabilities are now becoming available.
So what will be the key drivers of the PDU market in 2017?
High-density environments require more input power from the PDU. Extending three-phase connections to the rack boosts the amount of power that can be delivered through each PDU, increasing rack density. It also simplifies load balancing across the three input phases coming into the data centre, leading to improved efficiencies.
Increasing voltage allows lower-amperage circuits, which use smaller conductors to deliver more power. For ‘green field’ opportunities, therefore, it is important to consider deploying three-phase PDUs that can take an input of 415V, as against the traditional 208V.
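A rough calculation shows the effect. The figures below are illustrative: the square-root-of-three relationship is standard three-phase arithmetic, and the 0.8 derating reflects the common 80 percent continuous-load rule, but actual deliverable power depends on the PDU’s rating.

```python
# Back-of-envelope sketch of why three-phase 415V input raises deliverable
# power per rack PDU. Values are illustrative; real limits depend on breaker
# derating (the 0.8 factor reflects the common 80% continuous-load rule) and
# on the PDU's actual rating.
import math

def three_phase_kw(line_to_line_volts: float, amps: float,
                   derate: float = 0.8, power_factor: float = 1.0) -> float:
    """Deliverable power for a three-phase circuit: sqrt(3) * V * I * PF."""
    return math.sqrt(3) * line_to_line_volts * amps * derate * power_factor / 1000.0

# Same 30 A circuit, two input voltages:
print(f"208V 3-phase, 30A: {three_phase_kw(208, 30):.1f} kW")  # ~8.6 kW
print(f"415V 3-phase, 30A: {three_phase_kw(415, 30):.1f} kW")  # ~17.3 kW
```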
As the need for availability continues to rise, data centre operators need to constantly monitor power across the entire power chain, including rack PDUs. Most intelligent PDUs provide monitoring of voltage, amperage, power and energy at the input and branch circuit levels, with threshold and notification capabilities. To gain visibility and take proactive steps to reduce IT equipment energy consumption, there is also a growing trend to invest in intelligent PDUs that provide monitoring capabilities down to the outlet level.
One of the biggest challenges associated with deployment of intelligent PDUs is the additional costs of networking all of the PDUs within the data centre. PDUs with Secure Array technology can help reduce these costs significantly by consolidating up to 32 PDUs under a single IP address with failover capabilities.
Increased temperatures in data centres improve cooling system efficiencies and lower cooling costs, but exhaust temperatures rise as a result. To prevent equipment failure due to overheating, it is necessary to deploy PDUs with high ambient temperature ratings.
From a security aspect, look for a solution that supports network protocols with integrated security and various user authentication methods, including HTTP/HTTPS, SNMP v1/v2/v3, RADIUS and LDAP integration and Secure Sockets Layer (SSL). At the physical level, accidental disconnections can be prevented with locking outlets that click standard cords into place so they do not unplug during moves, additions and changes.
By measuring power at the rack or device level, operators can identify under-utilized or over-utilized capacity. PDUs with special switching capabilities also allow remote outlet control (ON/OFF) capability for every outlet, so that unused outlets can be turned off when not in use. Furthermore, by integrating intelligent PDUs with a centralised power management software solution, for example, operators can track power use over time and report costs of activities.
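As a simple illustration of that kind of reporting, the sketch below integrates periodic outlet-level power readings into energy and cost. The readings and tariff are invented; a real deployment would pull measurements from the PDU over SNMP or via DCIM software.

```python
# Illustrative sketch of outlet-level energy accounting: integrate periodic
# kW readings into kWh and cost. Readings and tariff are made-up numbers.
from datetime import datetime, timedelta

TARIFF_PER_KWH = 0.12  # illustrative flat tariff, currency units per kWh

# (timestamp, measured kW at one outlet) at 15-minute intervals
start = datetime(2017, 1, 9, 0, 0)
readings = [(start + timedelta(minutes=15 * i), 0.35 + 0.05 * (i % 4))
            for i in range(96)]  # one day of samples

energy_kwh = 0.0
for (t0, kw0), (t1, _) in zip(readings, readings[1:]):
    hours = (t1 - t0).total_seconds() / 3600.0
    energy_kwh += kw0 * hours  # left-rectangle integration of power over time

print(f"Energy: {energy_kwh:.2f} kWh, cost: {energy_kwh * TARIFF_PER_KWH:.2f}")
```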
For more information on the key considerations for deploying intelligent PDUs in high-density environments, including how each feature plays an important role in the success of your power system, download the ‘Considerations for Intelligent Power Management within High-Density Deployment’ white paper from Chatsworth Products Inc (CPI).
Ashish Moondra is the Senior Product Manager for Power, Electronics and Software at Chatsworth Products (CPI). He has 20 years of experience developing, selling and managing rack power distribution, uninterruptible power supplies, energy storage and DCIM solutions. Ashish has previously worked with American Power Conversion, Emerson Network Power and Active Power.
Whether or not Albert Einstein really said “we cannot solve our problems with the same level of thinking that created them,” it is a germane thought for those designing and building our data centres today. One response to the increasing challenges of complexity in both data centre load and infrastructure is hyper-simplification.
By: Arun Shenoy, Vice President IT & DC Business, Schneider Electric UK
We live in a digital ‘always-on’ world in which data is accessible from a variety of increasingly portable devices anywhere at any time. Furthermore, this phenomenon is expanding rapidly with ever greater numbers of internet-connected devices and high-bandwidth information and entertainment services being delivered to increasing numbers of consumers around the world.
Some bold figures underpin the enormity of change likely in the near future: by 2050 some 2.5bn people will be living in cities throughout the world, according to the UN; by 2020 some 50 billion devices will be connected to the internet, according to Cisco; and increasing industrialisation and connectivity demands will require a 50% increase in energy consumption by 2050, according to the International Energy Agency (IEA).
One of the most essential infrastructural building blocks for the information-based society is the data centre. Or, more accurately, data centres, because the variety and location of these diverse facilities are also growing rapidly in response to the many customer requirements and services that need to be delivered. By 2020, it is estimated, there will be a worldwide need for some 45.6 million square metres of data centre space to feed the services our global digital society expects.
Data centre design is a complex task with many different and often contradictory variables necessary for consideration: bandwidth, capacity, performance and security vie with cooling, power resilience and systems-management software for priority. All are restrained inevitably by cost considerations. Yet for the service providers who depend on this infrastructure for their very existence a data centre is just a basic building block for their business; what they most require is simplicity—of selection, deployment and operation.
A useful analogy is the Google experience. Google’s search engine and productivity tools are simple to access and use, customisable to each user’s specific needs and can be described as cost-efficient, whatever their level of use. Similarly, users of data centres want to be presented with “Google-like” simple choices, tailored to their own highly individual needs but which are easy to access and use, based on accepted standards and highly predictable in terms of total cost of ownership.
The trouble is that it typically takes a lot of effort to deliver something so simple. A data centre project can be an extremely complex task encompassing a variety of stakeholders, many of whom may not fully appreciate or have an interest in each other’s requirements.
A new site may require a myriad of expert contractors, including architects, prime building contractors, specialist tradesmen and technicians, planners, lawyers, telecoms infrastructure providers, waste-management agencies and environmental consultants – all of whom will be only peripherally involved with the IT and networking contractors that will fit out the data centre with the infrastructure to provide its core function; equipment which, in the nature of things, will evolve and change rapidly thanks to developments in technology, and which may have implications for the design of the data centre that were not considered at the outset.
Additionally, the more complex a project, the more likely it is to experience the problem of over-engineering. The unnecessary inclusion of infrastructure, products and procedures that are superfluous to the purpose of the project as a whole is not to be confused with scalability or design for expansion. Over-engineering incurs unnecessary cost and may hinder future expansion by locking in commitments to a particular approach or design, which may not offer the most flexible upgrade options.
A new trend in the data centre industry is moving to alleviate this problem by tackling the challenge of DC hyper-simplification. Here, vendors such as Schneider Electric are exploiting lessons learned in the automotive and other manufacturing industries, where simplification and customisation – two inherently contradictory requirements – are delivered through a ‘platform’ approach.
The platform approach is based on standardisation and modularity, where essential components, designed from the outset for easy integration with a wide variety of options, allow many customisable end products to be produced quickly, and simply tailored to individual customers’ particular requirements.
Most recently, Schneider Electric have embraced a prefabricated, modular approach to drive hyper-simplification into decision-making and deployment of facilities that match the exact requirements of their customers. This is only partly a strategy of product selection; hyper-simplification is a process that spans the entire data centre construction cycle from specification to design, deployment and ongoing development.
To assist that process from the outset, Schneider Electric makes available an array of software tools aimed at those engaged in infrastructure design so that they can calculate the effects of the inevitable trade-offs necessary when choosing one building block over another. These include trade-off calculators, budgetary tools and interactive 3D models which enable designers to visualise the layout of a data room before construction.
In addition, the company has channelled much of its R&D into a readily available set of digital tools that educate the customer and provide freedom in designing solutions and finding user references.
The literature, by Schneider Electric’s Data Centre Science Center, can be accessed online and includes white papers, training material and reference designs which identify real-world examples of data centres using both standardised and customised prefabricated, modular infrastructure.
Starting from a standardised platform, using fewer building blocks and with a wide choice of well documented modules it is possible to build scalable data centre infrastructure that is both personalised, in terms of meeting the specific challenges of the business, and predictable in terms of cost and performance.
Changes necessary to cope with expansion or emerging requirements such as increased cooling and power redundancy are easier to implement following a modular approach using products that are designed to be interoperable and supported with all the necessary documentation and implementation tools.
Not only is over-engineering of the product infrastructure avoided, but the simpler the upgrade process, the lower the cost of implementation because the number of specialist subcontractors needed to install and maintain additional infrastructure can be greatly reduced.
The simplification process is applicable to data centres of all sizes. For a small office-based facility, a portable prefabricated ‘micro data centre’ can be produced in a wheeled cabinet that can be installed in any available space whilst still providing resilient power, cooling and physical security. For larger facilities, customised data centres containing all the necessary server racks, cooling equipment, containment systems and power supplies can be prefabricated to order and delivered to a site on the back of a trailer as a temporary or permanent building.
Finally, for large purpose-built data centres serving a variety of customers or business functions, the modular approach provides the best combination of performance, personalisation and predictability at low cost.
Hyper-simplification is a complex process! Fortunately, by putting in all the effort to make its products interoperable, predictable and scalable, Schneider Electric has already done much of the complex work, so that those in need of data centre deployments can focus on other challenges. Being on time, on budget and on spec – and staying so throughout the data centre’s lifecycle – should just be ‘business as usual’.
By Kristian Weatherley Kaye, Technical UPS Tender Manager
The latest battery capacity reinjection functionality from Socomec enables the UPS to check its own batteries’ performance directly, without the need for an external load bank – the ultimate protection for your critical assets.
Batteries are often cited as the most common cause of UPS system failure, so the design of the battery system, together with battery performance, is a key element of any UPS system. Extending UPS battery life and back-up time through a programme of regular maintenance and management is vital for guaranteeing the ongoing performance of UPS systems, providing power security for an organisation’s critical assets and optimising the battery investment.
Batteries are the workhorse of every UPS system and must operate at peak performance for a UPS to guarantee the critical power supply. High temperatures, frequent cycles, deep discharge, high-voltage recharge and a lack of regular maintenance will all reduce the lifecycle and performance of a battery.
Preventing outages and minimising costly downtime are challenges faced by every Facilities Manager, and the regular maintenance and replacement of batteries forms a critical element of every business continuity plan.
Although the UPS system plays a significant role in ensuring the availability, reliability and quality of the electrical supply, at the heart of any critical power protection system are the batteries; their effectiveness is essential for mitigating load downtime. Batteries are, however, the most vulnerable and failure-prone component of the UPS system.
One of the most frequent causes of unplanned outages in a UPS system is premature end of life of battery blocks. If undetected, a failing battery block can accelerate ageing within the rest of the battery string, thereby jeopardising the integrity of supply to the critical load.
The single most effective way to ensure the reliability of the UPS system is to conduct preventive maintenance including regular battery checks and replacements.
Typically, in order to perform a safe and effective battery check – reviewing the operating environment and the main battery parameters at string level – the UPS manufacturer will carry out a series of regular checks to keep the equipment operating at optimum levels and to avoid system downtime, along with the associated risks of damage to the critical loads.
Every Facilities Manager will be familiar with the operating and infrastructure constraints associated with planning and executing regular UPS system checks. The design of the switchboard is one key point; the availability of, and access to, connection points for load test banks, as well as the management of high heat dissipation during the tests, all require careful review, planning, risk assessments and the associated method statements. The safety of staff and building security both have to be carefully reviewed and managed during load bank tests. Furthermore, the costs associated with the test process can be a burden on an already stretched operating budget. Whilst the manpower for such tests may be accounted for indirectly via the wider preventive maintenance budget, other significant costs – such as load bank and cable hire – frequently have to be incurred.
With Socomec’s latest innovation in the Delphys Green Power and Delphys Xtend Green Power ranges of UPS, the process of conducting battery discharge testing is simplified.
Socomec’s innovative Battery Capacity Reinjection function enables the battery to be discharged to the upstream mains network through the UPS rectifier. This function is carried out online, with the load fully protected. The UPS rectifier acts as a current generator synchronised to the mains voltage; the reinjected power is active power (kW) only – there is no reinjection of reactive power (kVAR). The reinjected current is sinusoidal, and therefore does not affect the LV installation. If mains power is lost during the test, the reinjection is automatically stopped with no effect on the system load. For an N+1 or 2N installation, system autonomy is ensured by the other units.
During a routine Battery Capacity Reinjection test carried out by a Socomec technician, the reinjected power is consumed by the other loads or UPS systems on the site. The reinjection test requires no changes to cabling within the existing installation. The battery discharge power (kW) is constant and configurable. This innovative Socomec UPS function enables the routine maintenance and testing of the critical UPS system and batteries to be easily carried out without the need for additional load banks or cabling. As well as the financial benefit of no longer needing to hire load banks and cabling, the Socomec reinjection function simplifies the operational planning associated with such tests.
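To illustrate what such a discharge test actually measures, here is a toy calculation. The figures are invented and the logic is a generic capacity check, not Socomec’s firmware.

```python
# A minimal sketch (illustrative, not Socomec's implementation) of what a
# constant-power discharge test measures: energy actually delivered by the
# battery versus its rated capacity. All figures are made-up assumptions.

RATED_KWH = 40.0           # nameplate energy of the battery string
DISCHARGE_KW = 20.0        # constant, configurable reinjected power
END_OF_TEST_MINUTES = 108  # time until the end-of-discharge voltage is hit

measured_kwh = DISCHARGE_KW * END_OF_TEST_MINUTES / 60.0
capacity_pct = 100.0 * measured_kwh / RATED_KWH

print(f"Delivered {measured_kwh:.1f} kWh -> {capacity_pct:.0f}% of rated capacity")
# A common rule of thumb is to plan battery replacement once measured
# capacity falls to around 80% of nameplate, before a weak block can drag
# down the rest of the string.
```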
To find out how to simplify your battery health checks – and to guarantee optimised performance – call 01285 863300, email info.uk@socomec.com or visit www.socomec.co.uk
Global bank builds its own hyperscaled storage.
By the SNIA Cloud Storage Initiative and the Data Protection and Capacity Optimization Committee.
Traditionally, enterprises have purchased highly available storage systems with built-in redundancy and dedicated storage controllers running proprietary firmware to meet enterprise storage demands, but these solutions are complex and expensive, and growth of these types of systems has slowed. Some of this data is moving to the cloud, and that shift appears to be growing significantly, but few enterprises (Netflix being the main example) have made the transition to totally cloud-based IT. The principal reason is that they cannot get all the services they need from today’s public cloud.
SNIA research indicates that many of these enterprises, in addition to utilizing cloud storage, are also building their own storage systems using software defined storage (SDS) and best-in-class commodity components, assembled in racks.
These enterprises are adopting methods from hyperscalers such as Amazon, Facebook, Google and Microsoft Azure to build their own storage systems while achieving the required service levels for internal projects. These private cloud storage systems are used to host the data for business critical applications and comply with regulations from multiple government jurisdictions.
Large enterprises still building their own datacenters may be increasing their use of the public cloud, but they also have requirements which are difficult to meet with the services currently available. Banking institutions in particular are subject to many audits from multiple jurisdictions of regulators. They essentially create their own private storage cloud to ensure regulatory compliance with the many and varied requirements each government mandates.
Hyperscale storage is built from best-in-class commodity storage components such as solid state disks and hard drives. These drives are typically delivered populated into racks and purchased as a “pod,” with a dozen or more racks making up one storage “system.” Around half of the racks are populated with server “heads” that perform the software-defined storage (SDS) in a scale-out manner. The disk trays are JBODs – just a bunch of disks – and the SSD trays are JBOFs – just a bunch of flash. Another approach uses densely populated compute nodes with all the storage directly attached.
The SDS and other custom built software handle the resiliency by replicating or using erasure code across the datacenter and geographically between datacenters. Performance scales almost linearly at least within each pod. They also have a “fail in place” policy, letting failed components exist powered down in the datacenter until a certain percentage of trays or racks is unusable, and only then replacing these larger units.
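A toy calculation makes these trade-offs concrete. The erasure-code scheme, pod size and fail-in-place threshold below are assumptions for illustration, not any particular operator’s configuration.

```python
# Illustrative sketch of the capacity arithmetic behind erasure coding and a
# "fail in place" policy. The k=10, m=4 scheme and thresholds are assumptions
# for the example.

K_DATA, M_PARITY = 10, 4            # any 10 of 14 fragments can rebuild data
RAW_PB = 6.0                        # assumed raw capacity of one pod
FAIL_IN_PLACE_THRESHOLD = 0.05      # replace hardware only once 5% is dead

usable_pb = RAW_PB * K_DATA / (K_DATA + M_PARITY)
print(f"Usable capacity per {RAW_PB:.0f} PB pod: {usable_pb:.2f} PB "
      f"({100 * M_PARITY / (K_DATA + M_PARITY):.0f}% redundancy overhead)")

# Fail-in-place: individual dead drives are simply powered down and left in
# the rack; field service is dispatched only when the dead fraction of a
# tray or rack crosses the threshold.
def needs_replacement(dead_drives: int, total_drives: int) -> bool:
    return dead_drives / total_drives >= FAIL_IN_PLACE_THRESHOLD

print(needs_replacement(30, 1000))   # False: leave failed drives in place
print(needs_replacement(60, 1000))   # True: schedule the larger swap-out
```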
In discussions with an IT vice president of a large multinational bank, SNIA has learned something of the scale and operation of their internal infrastructure. It matches that of the large Internet hyperscalers both in size and by using the same procurement and operation techniques.
The bank has over 20 datacenters around the world. They cannot use the public cloud as they currently need to comply with over 200 government regulations from different countries. Their solution was to create an internal private cloud for the entire bank’s IT project usage. Their storage budget dwarfs the revenue of most medium sized storage vendors.
The bank’s deployed storage has tens of thousands of nodes, with around 200 petabytes of active data and half an exabyte of inactive data. Their overall data footprint is growing at 45% annually. They process trillions of transactions daily, and downtime is very expensive.
The bank is big enough that vendors will custom build for them, but they also have a policy of no single source for any of their hardware. They buy storage in 6 PB pods that are half CPU and half storage drives, pre-assembled and sold by the original design manufacturer. They installed their first such pod in 2015, and their next pod will be all flash. Most importantly, their cost savings are projected to be 50% over traditional storage.
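As a back-of-envelope illustration of what that growth rate implies, the base figures are those quoted above; the projection itself is the only addition.

```python
# What 45% annual growth does to a ~700 PB footprint (200 PB active plus
# half an exabyte inactive, per the figures above) over five years.
footprint_pb = 200 + 500   # active + inactive data, in petabytes
for year in range(1, 6):
    footprint_pb *= 1.45
    print(f"Year {year}: {footprint_pb / 1000:.2f} EB")
# The footprint roughly doubles every two years (1.45**2 ~ 2.1), which is
# one reason procurement in multi-petabyte pods makes sense.
```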
This institution uses SDS with best-in-class commodity hardware to create a private cloud for internal IT projects and customer-facing services. They currently deploy about 11% of their storage as SDS, with a goal to grow this to 50% by 2020.
They license their SDS from a major vendor (a site license) and are training up their staff in the new approach. The bank virtualizes the hardware to abstract away any differences between the multiple vendors. From the storage service, they serve up an S3-compatible interface for new projects and mainly provide block services for existing applications. They plan to look at Ceph for SDS and at Non-Volatile Memory Express (NVMe) solid state disk interfaces in the future.
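As an illustration of how an internal project might consume such an S3-compatible service through the standard AWS SDK, the sketch below points boto3 at a hypothetical private endpoint; the URL, bucket name and credentials are placeholders.

```python
# Sketch of consuming a private, S3-compatible storage service with boto3.
# The endpoint URL, bucket and credentials are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.internal.example-bank.com",  # private endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

# Store and retrieve an object exactly as with the public S3 API.
s3.put_object(Bucket="project-archive", Key="reports/q3.csv",
              Body=b"date,value\n2016-09-30,42\n")
obj = s3.get_object(Bucket="project-archive", Key="reports/q3.csv")
print(obj["Body"].read().decode())
```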
The trend to build hyperscale storage infrastructures involves utilizing best-in-class commodity hardware and providing all the differentiation in the software to manage all aspects of the storage. Thus, all the “intelligence” is in the software, such that the infrastructure behavior is programmable via that software. Look for platforms running intelligent software on general-purpose hardware in the future. This is a trend enterprises should be closely following.
Want to learn more about hyperscaled enterprise storage and this Fortune 100 global bank’s implementation?
Download the SNIA white paper, “Hyperscaled Enterprise Storage,” at: http://www.snia.org/hyperscaled
Traditionally, enterprises have purchased highly available storage systems with built-in redundancy and dedicated storage controllers with proprietary firmware to meet enterprise storage demands, but these solutions are complex and expensive and growth of these types of systems has slowed. Some of the data is obviously moving to the cloud and that seems to be growing significantly. But few (Netflix mainly) have made the transition to total cloud-based IT. They principally cannot get all the services they need from today’s public cloud.
SNIA research indicates that many of these enterprises, in addition to utilizing cloud storage, are also building their own storage systems using software defined storage (SDS) and best-in-class commodity components, assembled in racks.
These enterprises are adopting methods from hyperscalers such as Amazon, Facebook, Google and Microsoft Azure to build their own storage systems while achieving the required service levels for internal projects. These private cloud storage systems are used to host the data for business critical applications and comply with regulations from multiple government jurisdictions.
Large enterprises still building their own datacenters may be increasing their use of the public cloud, but they also have requirements which are difficult to meet with the services currently available. Banking institutions in particular are subject to many audits from multiple jurisdictions of regulators. They essentially create their own private storage cloud to ensure regulatory compliance with the many and varied requirements each government mandates.
Hyperscale storage is built from best-in-class commodity storage components such as solid state disks and hard drives. These drives are typically delivered populated into racks and purchased as a “pod” with a dozen or more racks making up one storage “system.” Around half of the racks are populated with server “heads” to perform the software-defined storage (SDS) in a scale-out manner. The disk trays are JBODs – just a bunch of disks, and the SSD trays are JBOFs – just a bunch of flash. Another technique is essentially densely populated compute nodes where all the storage is directly attached.
The SDS and other custom built software handle the resiliency by replicating or using erasure code across the datacenter and geographically between datacenters. Performance scales almost linearly at least within each pod. They also have a “fail in place” policy, letting failed components exist powered down in the datacenter until a certain percentage of trays or racks is unusable, and only then replacing these larger units.
In discussions with an IT vice president of a large multinational bank, SNIA has learned something of the scale and operation of their internal infrastructure. It matches that of the large Internet hyperscalers both in size and by using the same procurement and operation techniques.
The bank has over 20 datacenters around the world. They cannot use the public cloud as they currently need to comply with over 200 government regulations from different countries. Their solution was to create an internal private cloud for the entire bank’s IT project usage. Their storage budget dwarfs the revenue of most medium sized storage vendors.
The bank’s deployed storage has 10s of thousands of nodes with around 200 petabytes of active data and half an exabyte of inactive data. Their overall data footprint is growing at 45% annually. They process trillions of transactions daily, and downtime is very expensive.
The bank is big enough that vendors will custom build for them, but they also have a policy of no single source for any of their hardware. They buy storage in 6 PB pods that are half CPU and half storage drives, pre-assembled and sold by the original design manufacturer. They installed their first such pod in 2015. Their next pod will be all flash. Most importantly their cost savings is projected to be 50% over traditional storage.
This institution uses SDS with best-in-class commodity hardware to create a private cloud for internal IT projects and customer-facing services. About 11% of their storage is deployed as SDS today, with a goal of growing this to 50% by 2020.
They license their SDS from a major vendor (a site license) and are training their staff in the new approach. The bank virtualizes the hardware to abstract away any differences between the multiple vendors. From the storage service they serve up an S3-compatible interface for new projects, and mainly provide block services for existing applications. They plan to evaluate Ceph for SDS and Non-Volatile Memory Express (NVMe) for solid state disk interfaces in the future.
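As a hedged illustration of what serving up an S3-compatible interface means for a new project, a standard S3 client can simply be pointed at the private cloud’s internal endpoint. The endpoint URL, bucket name and credentials below are hypothetical placeholders:

```python
import boto3

# Hypothetical internal endpoint exposed by the private storage cloud;
# any S3-compatible SDS layer can be addressed the same way.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.internal.example.com",  # hypothetical
    aws_access_key_id="PROJECT_KEY",        # placeholder credentials
    aws_secret_access_key="PROJECT_SECRET",
)

# Standard S3 calls work unchanged against the private cloud.
s3.put_object(Bucket="new-project-data", Key="reports/q1.csv",
              Body=b"date,value\n2017-01-01,42\n")
obj = s3.get_object(Bucket="new-project-data", Key="reports/q1.csv")
print(obj["Body"].read().decode())
```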
The trend to build hyperscale storage infrastructures involves utilizing best-in-class commodity hardware and providing all the differentiation in the software to manage all aspects of the storage. Thus, all the “intelligence” is in the software, such that the infrastructure behavior is programmable via that software. Look for platforms running intelligent software on general-purpose hardware in the future. This is a trend enterprises should be closely following.
Want to learn more about hyperscaled enterprise storage and this Fortune 100 global bank’s implementation?
Read the SNIA white paper, “Hyperscaled Enterprise Storage”, at: http://www.snia.org/hyperscaled
About SNIA
The Storage Networking Industry Association (SNIA) is a non-profit organization made up of member companies spanning information technology.
A globally recognized and trusted authority, SNIA’s mission is to lead the storage industry in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement and security of information.
For more information, visit: www.snia.org
The data centre has come a long way since its origins in the early days of the computer industry, and inevitably occupies a key position in the thoughts of IT managers in businesses across all sectors.
By Kelly Murphy, Founder, HyperGrid.
One of the latest developments to take hold is the advent of HyperConverged Infrastructure, which is bringing vastly increased flexibility and efficiency to the modern data centre, while promising to significantly decrease the burden of capital expenditure.
In the early days of business IT, purchasing data centre space largely involved companies relying on turnkey solutions, wherein organisations would pay for a solution containing all the components required to fit their needs.
As time moved on and the world entered the dot-com bubble, the trend moved towards best-of-breed, where IT decision-makers began to focus on buying stand-alone pieces of hardware or software to fulfil particular tasks. Best-of-breed has been the prevalent approach since the early 2000s. However, technological developments and the need for greater efficiency and flexibility in data centre infrastructure have meant that HyperConverged has now come to the fore.
HyperConverged works by integrating storage, network and compute resources into a software-centric architecture, housed in a single chassis. This offers a range of benefits which set it apart from traditional infrastructures, including significant reductions in CapEx and OpEx; greatly reduced power, cooling and space consumption; and easier management due to the integrated nature of the solution. In addition, HyperConverged technology is designed to be scalable in nature, meaning businesses can increase or decrease their usage as required on a pay-as-you-consume basis.
A recent survey by 451 Research found that 40 per cent of organisations are now using HyperConverged infrastructure, with almost three-quarters of those using it in their core or central data centres – an indication of the strength of this shift. Effectively, data centre IT has come full circle, with HyperConverged infrastructure offering the all-round features that turnkey solutions used to provide, while also promising significant cost reduction and greatly increased efficiency.
Following in the footsteps of SaaS
The concept of Software-as-a-Service has become commonplace in business IT, and is routinely employed by organisations looking to take advantage of the flexibility and cost-effectiveness that it offers. As data centre technology advances and companies face continued demands to reduce costs and make their processes more efficient, Infrastructure-as-a-Service (IaaS) is also now making its mark.
IaaS has its roots in the move by many big businesses to cloud-based infrastructure, which changed the shape of the modern data centre and paved the way towards flexible, scalable IT infrastructure. As with SaaS, Infrastructure-as-a-Service reduces upfront expenditure by placing the hosting of hardware, servers, storage and other infrastructure components in the hands of a third party. This practice can also be expanded to offer Containers-as-a-Service and VDI-as-a-Service, meaning companies will have much more choice when it comes to selecting data centre services.
This need for adaptable solutions is now a key priority for many businesses, and the demand for infrastructure to be as scalable as possible is only going to grow in the coming months and years. As a result, HyperConverged providers will come under increased pressure to offer consumption-based pricing on their services, without the threat of vendor lock-in: the shift to cloud-based infrastructure came as part of a widespread desire for greater flexibility, so HyperConverged vendors should be prepared to offer this level of adaptability in their own solutions.
By adopting such an approach, HyperConverged providers can open up the benefits of their solutions to businesses of all sizes, not just the big companies that pioneered the move to cloud-based infrastructure. This is where the data centre market is heading: towards consumption-based HyperConverged infrastructure, with no upfront CapEx costs but all the advantages of a common HyperConverged solution.
Budgetary concerns and the urgent need to raise productivity across all aspects of businesses have led to IT practices and solutions being placed under greater scrutiny than ever before. As far as the data centre is concerned, HyperConverged promises to play a key role in its future development, as organisations look to strip out unnecessary capital expenditure costs and embrace scalable solutions that do not force them into long-term contracts with a specific vendor. Pay-as-you-consume options are set to lead the way in this regard, by making HyperConverged accessible to all.
In its sixth year as a UK event, the Managed Services & Hosting Summit Europe is now being staged in Amsterdam and will examine the issues facing Managed Service Providers, hosting companies, channel partners and suppliers as they seek to add value and evolve new business models and relationships. The Managed Services & Hosting Summit is firmly established as the leading managed services event for channel organisations and is expanding its coverage to new areas.
The year 2017 will see a number of new issues facing the industry – the ever-growing need for security is compounded by new pressures on compliance, particularly in Europe as organisations prepare for the EU’s General Data Protection Regulation (GDPR).
The Managed Services & Hosting Summit – Europe 2017 features conference session presentations by major industry speakers and a range of breakout sessions exploring in further detail some of the major issues impacting the development of managed services. The summit will also provide extensive networking time for delegates to meet with potential business partners. The unique mix of high-level presentations plus the ability to meet, discuss and debate the related business issues with sponsors and peers across the industry, makes this a must-attend event for any senior decision maker in the ICT channel.
With the theme CREATING VALUE with MANAGED SERVICES, the event provides a unique European perspective on the issues, including the role of IT and how the way companies are buying it is changing, creating threats and opportunities for existing MSPs and new market entrants alike. The Managed Services and Hosting Summit Europe 2017 will focus on how the market is changing and what it takes for MSPs to succeed and create value, both for their clients and themselves within an increasingly competitive market.
A wide range of current industry issues will be discussed and examined throughout the day.
The event also includes a CIO track during the afternoon in which speakers will outline what CIOs, CXOs and IT Directors should be looking for in selecting an MSP partner.
As well as technology issues, the event will hear from people with ideas on sales resources, training, recruitment and management – with an emphasis on the new management skills needed and how they differ from traditional customer systems management. In addition to the conference plenary and breakout sessions there will be ample time for delegates to network and engage with sponsors in the Demonstration & Networking Area during the morning, lunch and afternoon breaks.
The new Galaxy VX UPS family combines ultra-efficient ECOnversion technology with Lithium-Ion batteries, meaning customers no longer need to trade off between efficiency and reliability. By Gael Souchet, Global Launch Manager, IT Division, 3 Phase UPS, Schneider Electric.
UPS systems deployed to safeguard against power outages in large data centres, with power ratings between 0.7 and 4MW, must meet a number of familiar challenges. They must be reliable in terms of operation and management of risk; they must be easy to maintain; they must be scalable; they must be able to preserve the value of capital investments over the long term, taking into account the flexibility to scale resources up or down, and they must also have low operating costs.
Schneider Electric’s Galaxy VX family of UPS systems is intended for use in just such large data centres and addresses all of these concerns with innovative technology, including the company’s novel 4-level inverter, its ECOnversion power-saving mode and the use of space-saving Lithium-Ion batteries. Schneider Electric estimates that by utilising Galaxy VX in ECOnversion mode, energy savings of more than £407,000 can be made over the 10-year life of a 1.5MW UPS.
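As a back-of-envelope check on a figure of that order, the sketch below estimates the ten-year saving from raising UPS efficiency on a continuous 1.5MW load. The efficiency values and electricity tariff are our own assumptions for illustration, not Schneider Electric’s published calculation:

```python
# Rough estimate of the saving from a more efficient UPS mode.
# All inputs are illustrative assumptions.

load_kw = 1500.0       # 1.5 MW IT load
eff_double = 0.96      # assumed double-conversion efficiency
eff_econv = 0.99       # assumed ECOnversion efficiency
price_per_kwh = 0.10   # assumed tariff, GBP
hours = 24 * 365 * 10  # ten-year life, continuous operation

def losses_kw(load: float, eff: float) -> float:
    """Input power wasted by the UPS at a given efficiency."""
    return load / eff - load

saving = (losses_kw(load_kw, eff_double) - losses_kw(load_kw, eff_econv)) \
         * hours * price_per_kwh
print(f"Estimated ten-year saving: £{saving:,.0f}")
```

With these assumptions the script prints roughly £415,000 – the same order of magnitude as the figure quoted above.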
The new VX solutions are an extension of the existing VM family of UPS systems, offering models with higher power ratings of 0.75, 1.00, 1.25 and 1.5MW. These are intended to provide the power and scalability required for large data centres, allowing a ‘pay-as-you-grow’, modular approach as the power requirement increases in line with occupancy. Several VX systems can be connected in parallel to provide the necessary power increases.
This in itself helps control capital expenditure as it minimises the cost incurred by front loading a large facility with more infrastructure than is needed to support the comparatively small IT loads typically found in the early stages of a colocation facility’s working life. With the VX family, the UPS resource may be allocated according to need.
Also important from the point of view of capital expenditure is the solution’s compatibility with existing power architectures. Thanks to their use of Lithium-Ion batteries, Galaxy VX UPS systems have a comparatively small physical footprint, as a result of which they are less likely to fall foul of space restrictions that may apply in some data centres.
Reliability is an essential requirement for any UPS system and the VX series is designed with durability and longevity in mind. Typically, issues that affect reliability in UPS systems in practice revolve around fans, capacitors and batteries. All of the fans in the new systems are fully redundant and can easily be swapped out by users without needing to switch the unit into bypass mode.
In addition to this, the 4-level inverter technology drastically reduces the potential failures of power converters as it reduces the voltage stress on the main and other key components.
Menu-driven controls and improved firmware enable better management of batteries, leading to improved lifetimes and longer Mean Time To Replacement (MTTR). Such controls help to overcome earlier problems experienced with Lithium-Ion batteries, which were often caused by unfamiliarity and improper use. The new management firmware encourages procedures more appropriate to Lithium-Ion batteries thereby reducing the risk of failure. Indeed, if used properly Lithium-Ion batteries can last between 2 and 2.5 times longer than equivalent lead-acid batteries.
Maintenance of the new systems is also simplified thanks to the scalable approach which allows power modules to be swapped out easily. During initial deployment, safety of the more sensitive components can be enhanced thanks to the use of an I/O module which can be deployed in advance of the power modules. This allows cabling and other infrastructural components to be installed at an early stage, when the premises may be under construction or refurbishment, with the power modules added later when there is less danger of contamination.
The VX series offers major savings on operating costs thanks largely to Schneider’s 4-level inverter technology and proprietary ECOnversion mode. Although many UPS systems have evolved from 2-level to 3-level inverter configurations, Schneider’s patented 4-level inverter offers a number of advantages. It reduces voltage stress across each Insulated Gate Bipolar Transistor (IGBT) leading to greater reliability. It reduces switching losses, resulting in greater efficiency and less heat generation.
And it reduces the size of chokes in the system, thereby reducing size, saving weight and resulting in reduced choke failures owing to less heat being generated.
Many UPS systems have an “Economy Mode” in which some degree of electrical protection is sacrificed to save energy. The most common type of UPS in large data centres is the double-conversion online variety, in which mains power is rectified to DC before charging the backup battery, following which it passes through an inverter to be converted back to AC, hence double conversion.
When in double-conversion mode, power output from the UPS always passes through the inverter, providing a regular conditioned supply to the load, and there is no loss of power in the event of a mains blackout because the load is always connected to the inverter and battery backup.
However, operating in this mode means there is constant wear on the power components, with an attendant reduction in Mean Time Between Failures (MTBF) and a knock-on effect on reliability.
In the most basic Economy mode, or Eco-mode, a manual bypass switch overrides the double-conversion path and connects the load directly to the mains input. The UPS then acts essentially as a simple standby UPS, in which the battery and its DC-AC inverter are only deployed in the event of a serious disruption to the mains supply.
As a consequence, this simple Eco-mode can help to save energy, with typical savings of between 2 and 3 per cent a reasonable expectation. However, standard Eco-mode reduces power protection, as the IT load is exposed to raw utility mains power without the conditioning normally provided by a double-conversion online UPS.
The UPS must continuously monitor the mains power and quickly switch to the UPS inverter when a problem is detected, before the problem can affect the critical load.
A better, more advanced Eco-mode provides energy savings while offering stronger power protection for connected loads than standard Eco-mode. A prime example is Schneider Electric’s ECOnversion mode, in which the inverter runs in parallel with the bypass source, supplying the reactive part of the load and maintaining an input power factor close to unity. When operating in ECOnversion mode the load is never exposed directly to unconditioned utility power, as it is in standard Eco-mode.
Keeping the inverter on in ECOnversion mode has a small impact on efficiency; depending on the connected load, it can drop below the 99% rating of standard Eco-mode. In this mode the inverter is not continuously regenerating the output power delivered to the load, as it does in double conversion; instead the load draws its active power through the bypass while the parallel inverter conditions the supply.
The main advantage of ECOnversion mode is that the inverter can seamlessly take over to support the load if the bypass utility fails. The inverter is also able to correct the power factor of the load and actively filter the harmonic currents the load generates. Operating in parallel with the bypass, the inverter is continually powered and ready to take over in the event of mains failure. In such cases the bypass utility supplying the UPS is disconnected from the inverter without changing the inverter output voltage or the voltage supplied to the load. The switchover is instantaneous and barely perceptible at the UPS output.
Typically, the regulated power supplied by a UPS operating in double conversion mode is rated as a Class 1 supply, while that from a UPS in standard Eco-mode is rated as Class 3. A UPS in ECOnversion mode also provides a Class 1 output, so there is no longer a trade-off between efficiency and reliability.
The Galaxy VX family of UPS systems extends the earlier VM range into the data centre with more powerful models intended for high power rating applications. With its built-in reliability and cost-saving features it provides advanced protection against power outages while keeping capital and operational costs under control.
DCS talks to Travis Irons, Director of Engineering at Server Technology, about the company’s decision to add Per Outlet Power Sensing to its HDOT Rack PDU range, expanding the portfolio designed to address density, capacity and remote power management challenges in the modern data centre.
Q: Please can you provide a brief overview of your award-winning High Density Outlet Technology (HDOT) Rack PDU in Switched and Smart POPS?
A: Server Technology has added POPS (Per Outlet Power Sensing) to its industry-leading and award-winning HDOT Alternating Phase Rack PDUs. This expands upon the most innovative power solution on the market, with solutions for density, capacity planning and remote power management in the modern data center.
HDOT Alternating Phase with POPS is the latest addition to the Server Technology rack PDU solution family. The POPS modules utilize the same chassis as the previously released Smart and Switched modules. These products are 36-outlet Build-Your-Own PDUs with HDOT Alt-Phase Switched Smart POPS outlets built on the PRO2 platform. These are rated to operate at 60°C, have color-coding available, and have two optional temperature and humidity sensors per PDU.
Q: In more detail, what does the HDOT provide?
A: To combat the limited physical space that PDUs compete for in the data center rack, Server Technology developed High Density Outlet Technology (HDOT), the smallest form factor PDU, which significantly increases real estate in the back of the rack by fitting as many as 42 C13s in a high-kW, 42U network-managed PDU – that’s over 20 percent denser than a comparable PDU using standard outlets. This was accomplished by removing the shell that surrounds commercially available C13 and C19 outlets and creating a series of multi-outlet modules, in a variety of configurations, that fit into a common monolithic enclosure. The HDOT design implements high native cord retention of over 12 pounds of pull strength, reducing or eliminating the need for custom and costly ancillary locking cord devices. HDOT outlets are manufactured from robust high-temperature materials carrying a UL94 V-0 flame rating, making them ideally suited to the harshest data center environments.
Q: What are the benefits of Per Outlet Power Sensing (POPS)?
A: Per Outlet Power Sensing provides +/-1% billable-grade accuracy for energy consumption at each outlet for typical data center equipment loads. POPS also reports current, voltage, active power, apparent power, power factor, energy and crest factor at each outlet, providing the ultimate in efficiency and capacity analysis. Alerts for high current, high/low voltage and low power factor extend that visibility further. POPS technology is commonly used for capacity planning, granular bill-back, and locating ‘zombie’ servers – servers that consume power but aren’t doing any computational work.
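For readers unfamiliar with the less common of these metrics, the short sketch below computes active power, apparent power, power factor and crest factor from a synthetic sampled waveform using their standard definitions. It is purely illustrative and bears no relation to Server Technology’s firmware:

```python
import math

# One synthetic mains cycle, sampled N times. The current lags the
# voltage slightly and carries a small third harmonic (distortion).
N = 1000
volts = [325.0 * math.sin(2 * math.pi * i / N) for i in range(N)]
amps = [10.0 * math.sin(2 * math.pi * i / N - 0.3)
        + 1.5 * math.sin(6 * math.pi * i / N) for i in range(N)]

rms_v = math.sqrt(sum(v * v for v in volts) / N)
rms_i = math.sqrt(sum(a * a for a in amps) / N)

active_power = sum(v * a for v, a in zip(volts, amps)) / N  # watts
apparent_power = rms_v * rms_i                              # volt-amperes
power_factor = active_power / apparent_power
crest_factor = max(abs(a) for a in amps) / rms_i            # peak / RMS

print(f"P={active_power:.0f} W  S={apparent_power:.0f} VA  "
      f"PF={power_factor:.2f}  CF={crest_factor:.2f}")
```

A crest factor well above that of a pure sine wave (about 1.41) indicates the peaky, distorted current draw typical of switch-mode power supplies.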
Q: Switched POPS v Smart POPS – what does each offer?
A: Switched POPS combines outlet-level monitoring with outlet-level switching, and is the most fully featured technology. Smart POPS provides outlet monitoring only, and is useful in environments where remote switching is not needed, or not allowed due to company policy.
Q: And how does the Server Technology PDU design help address the issue of easy load/phase balancing?
A: To simplify load balancing and cable management, Server Technology incorporates Alternating Phase technology into the three-phase products of the HDOT family. With Alternating Phase technology, the phase of each single-phase outlet alternates on an outlet-by-outlet basis rather than being distributed in discrete, separate banks. This simplifies phase and load balancing, in that the right phase required to balance the system is always a short cable run from the equipment being connected. Shorter cable runs result in better airflow and lower total resistance in the data center, which means less heat and greater overall efficiency. Prior to the advent of HDOT, Alternating Phase products were impractical to build because of the bulkiness of discrete, commercially available outlets.
Q: And Server Technology’s PRO2 platform underpins this PDU technology?
A: PRO2 is a flexible, feature-rich hardware and firmware platform, with higher on-board compute power, all modern security protocols, redundant features and advanced customization built into the product. The PRO2 architecture is ideal in any situation where reliability and uptime are important, particularly in high-temperature and high-security applications. With PRO2, customers can maintain uptime with access to current data and future trends. Read the PRO2 FAQ for more detailed information.
Q: How does this Server Technology solution help end users with capacity planning?
A: With the PRO2 platform that incorporates input power monitoring, branch level monitoring, and POPS technology, paired with Sentry Power Manager (SPM), Server Technology’s award-winning power management solution, you get the most detailed view of power data within the cabinet and the data center. These tools provide the capability to securely monitor current, voltage, power (kW), apparent power, crest factor, accumulated energy (kWh), and power factor. The trending and monitoring features present a clear view of present and future capacity.
Q: How does it help address the issues around power density?
A: HDOT Alternating Phase provides better efficiency. Stay green; save green. This proprietary outlet design allows users to fill narrow or shallow racks with 36 to 42 devices using 36 to 42 outlets. Since the PDU is available through our Build Your Own PDU online configurator, the user can order a PDU with their desired outlet configuration, with the right outlets in the right place. With the addition of Alternating Phase technology, these outlets allow the user to plug in devices from top to bottom or bottom to top without disrupting phase and load balance. This allows for shorter cords, which lowers cooling costs and simplifies cable inventory.
Q: In other words, how important is it for end users to have enough power density in the smallest form factor possible?
A: Data center real estate is often the most expensive floor space in an enterprise, so maximizing compute density is critical. Minimizing the size of the rack allows more racks to fit in the same floor space, which results in a higher compute density. The smaller the rack, the smaller the PDU needs to be to not interfere with airflow and equipment maintenance.
Q: And then there’s ‘good, old-fashioned’ uptime?
A: Server Technology’s products are designed to operate at full power and 60°C with a high MTBF. Since it is rare in a data center to run one PDU at full power, or at maximum temperature, for any length of time, this adds greatly to the MTBF in practice. PRO2 enables communications with a Master unit even when the Master has lost input power, by back-feeding power to the network interface from a Link unit. The network interface is hot-swappable in the field without changing the state of the outlets. The PRO2 firmware allows even more scope for configurability and customization, while maintaining a clean and simple-to-use interface.
Q: Presumably, end users welcome the ability to move from multiple, legacy PDU models to one universal power strip?
A: Many customers are buying our new high-density line of products for just that reason. It’s common in a data center to have two or three rack configurations with different requirements for the number of C13s and C19s. With HDOT, they can buy one PDU with an outlet mix that covers all configurations. This simplifies purchasing, as well as the number and types of spare PDUs that need to be kept on hand.
Q: And how important is it for end users to know that their PDU solution is compatible/integrates with their chosen DCIM solution?
A: Having confidence that your PDU will integrate with a current – or not yet selected – DCIM is very important; DCIM managers want a ‘single pane of glass’. Server Technology integrates with a broad range of DCIM software, including all the major manufacturers. Our enterprise software, Sentry Power Manager (SPM), has a well-documented open API that allows SPM to share critical power and environmental information with DCIM products. These services include key information such as system, location, cabinet, outlet, PDU, phase, branch and sensor data, for users who want to take advantage of SPM’s features while keeping a single-pane-of-glass view.
Q: What work does Server Technology undertake to ensure such easy integration – ie work with specific DCIM vendors, or as many as possible?
A: Server Technology’s Solution Partner Program is designed to promote the integration of our advanced hardware and software technology with that of other industry leaders. Today the physical infrastructure layer of the data center comprises hundreds of hardware and software vendors, including systems and network management software frameworks, the emerging category of asset management/DCIM providers, and the active hardware offerings from server, storage, networking and security vendors. Physical infrastructure also extends to rack and cabinet providers, cooling and distribution vendors, and a wide range of environmental monitoring hardware vendors. There are so many variations of interfaces and mechanics that a proven ecosystem was needed to help our customers more easily solve the physical layer challenges surfacing in the modern data center.
The key to Server Technology’s program is that its membership comprises solely complementary vendors with known, working integrations with Server Technology’s hardware and/or software. Essentially, we are taking the guesswork out of combining these technologies and can point to specific documentation, contacts and examples where these pieces have already been integrated, allowing customers to do the same within their own companies. By working closely with our Solution Partners to ensure a known level of demonstrable integration already exists, everybody wins: projects are completed faster and resources are consumed more effectively. It is this ecosystem that motivates Server Technology to create the industry’s most tangible Solution Partner Program.
Q: We can’t not mention Server Technology’s ‘Build Your Own PDU’ approach – just how successful is this with end users?
A: End users and partners alike love the Build Your Own PDU configurator. The online configurator enables a customer to build their perfect Switched POPS, Smart POPS, Switched-only or Smart-only high-density rack PDU in four simple steps, guiding them graphically through selecting plug type, input cord orientation, outlet configuration, connectivity and colours. With thousands of configurations possible, the customer is sure to find exactly the right solution for their application. See the Build-Your-Own PDU web page at https://byopdu.servertech.com/configure.
Q: And such PDU design flexibility is hard to come by elsewhere in the market?
A: I don’t know of a competitor’s product line with as many offerings as we now have in the HDOT family.
Q: With such a successful PDU solution, it might seem churlish to ask about what’s next, but can you share any plans as to what we might see from Server Technology in the next year or so?
A: We will be adding POPS to our 48-outlet Alternating Phase HDOT PRO2s, and building out a horizontal product line with HDOT. Beyond that, there are a number of new ideas on the drawing board which you’ll hear about soon.
Q: Any other comments?
A: Server Technology was founded by the late Carrel Ewing, an Electrical Engineer. Our company is very Engineering centric, which is a philosophy being carried on by his son, our current President and CEO, Brandon Ewing. The company has quadrupled in size in the last ten years, but we still maintain a small company attitude, which puts high quality and customer service first. Only with Server Technology will customers stay powered, be supported and get ahead.
The Financial Institute (its official company name is confidential) is a leading provider of global brokerage and trading-related services for institutional investors and financial intermediaries.
They combine client-first service with innovative products, sophisticated strategies and proprietary technology to meet the challenges of increasingly dynamic and fast-paced markets, partnering with more than 3,000 clients accessing over 100 global market centers to help them achieve their investment objectives.
The Financial Institute recently sought to re-architect their datacenters to provide additional throughput, reduced cabling, and improved efficiency. To achieve this, Financial Institute selected Cisco UCS hardware to provide a converged infrastructure capable of meeting the needs of Financial Institute’s customers. When presented with the challenge of powering (6) Cisco UCS enclosures in a single 42U rack, Cisco recommended that Financial Institute work with Server Technology to find a solution.
The challenge: to power a mission-critical VMware private cloud running on UCS hardware.
The Financial Institute was first introduced to Server Technology when trying to fit (6) UCS chassis into a 42U cabinet on a 50A 3-phase circuit. Each UCS enclosure requires (4) C19 power outlets to provide redundant power to the hardware. Previously, APC had been the preferred PDU vendor, but the Financial Institute’s decision maker was not happy with the available offering, as nothing met his density requirements. He spoke with Cisco, and Cisco suggested Server Technology as an alternative. He also looked at Raritan, but once the Financial Institute saw the Server Technology PDU working alongside Sentry Power Manager (SPM) software, that helped “seal the deal.”
After speaking with his Server Technology representative and ST partner CSC, the decision maker and his team spent a few weeks considering alternatives, and finally selected a Switched PDU from Server Technology that offers (12) C19 and (12) C13 outlets. The particular PDU selected was a 50A 3-phase unit, allowing him to power all of the UCS gear in his rack and have leftover outlets, and power, to spare.
Having both high-strength cord retention and locking outlets made selecting Server Technology even easier, as most competitors must special-order these outlets. The color-coding option from STI was another bonus, as it helped the Financial Institute readily identify whether or not the hardware in the cabinets was properly cabled for redundancy. Other vendors also couldn’t match the quick delivery that Server Technology offered. “CSC was great. They came out to our datacenter and helped make sure the PDUs would fit into our cabinets. And by selecting from three standard PDUs that Server Technology keeps in stock, we generally see delivery within a week of ordering.”
Deploying Server Technology’s Sentry Power Manager (SPM) also helped the Financial Institute maximize its return on investment in STI PDUs, by allowing the Financial Institute to remotely collect the power information available from all of the PDUs across its entire datacenter network.
DCS talks to Jason Howells, EMEA sales director, Intronis MSP Solutions, about how this part of Barracuda, the company renowned for offering storage and security technology, seeks to help Channel companies evolve from resellers to managed service providers, and help existing MSPs who are struggling to build managed services and compete in the Cloud era.
The problem facing almost all traditional IT vendors right now is how to stay relevant to both the Channel and end users, at a time when there is major momentum behind the switch from buying and selling boxes to buying and selling (managed) IT services. One possible, and popular, answer appears to be: ‘panic’! The better solution is to understand how the market is shifting, understand how this requires a change of thinking in the Channel, and to develop a dedicated Channel offering that enables resellers to build a managed services business alongside their traditional sales pipeline.
Barracuda Networks has built a successful security, application delivery and data protection business since the company’s inception in 2002. Its product portfolio includes network security, web security, email security, backup, archiving and information management, server load balancing and access control. Barracuda Networks has 150,000+ customers in over 100 countries, employs 1,000+ people, and has some 5,000+ partners and offices around the world, including in the UK. These statistics are interesting because they demonstrate the challenge facing vendors such as Barracuda Networks that want to evolve their business ecosystem with the changes in the industry and add some kind of managed services/Cloud offering to their product portfolio. In other words, how do you cannibalise or migrate your business model from one where product sales are the norm to one where service sales are likely to dominate?
Barracuda Networks has taken a bold, visionary step by acquiring the Intronis online Cloud backup and recovery business. Founded just a year after Barracuda Networks, Intronis has developed a cloud platform purpose-built for Managed Service Providers, offering a backup service in the USA and Canada.
With over 2,500 MSPs currently subscribed, the plan is to offer a number of the Barracuda products and services via the Intronis MSP platform, and it’s with this in mind that Jason Howells, who has run the UK & Ireland Barracuda business for the past five years, has been chosen to head up the Intronis MSP Solutions business across EMEA. Jason explains: “We’ve known for quite a while that some of our partners and MSPs were developing managed services and Cloud infrastructure for their customers using Barracuda products, and we wanted to be able to offer more help. So, where previously MSPs and resellers would be doing their own thing, and buying some products from us, we’re now offering them a purpose-built platform. The Intronis platform provides a multi-layered approach to today’s threat landscape, combining security and data protection through a single cloud-based management platform that also includes integrations with the leading professional automation and remote management tools in the market. MSPs can leverage this platform to broaden the services they sell to their customers and streamline their management, saving time and money.” Jason continues: “My job is to head up Intronis’s international expansion, across the UK and Europe to start with, focusing on MSP recruitment as well as working with our loyal Channel base. It’s an exciting time, as the Intronis offering means that we’re having totally different conversations with the Channel.”
At a simple level, the acquisition of Intronis allows Barracuda to offer MSPs a complete data protection and security portfolio – physical, virtual, software-only, and now Cloud. So, whatever the use case, and wherever a particular partner is in the rapidly changing Channel market, there’s a Barracuda solution to fit.
Intronis MSP Solutions empowers MSPs to centrally deploy and manage a broad portfolio of data protection services from a fully rebrandable console. Powered by the Intronis ECHOplatform, these solutions enable MSPs to protect their customers’ business-critical files, folders, email, applications, and servers, locally and in the cloud. The Intronis ECHOplatform is provided at a fixed monthly fee for the entire product suite, without limiting MSPs’ customers’ local storage or number of devices protected. The Intronis ECHOplatform is operational from a UK data centre. Jason summarises: “The Intronis platform provides everything that an MSP needs to protect an SMB’s digital footprint - from one vendor, using one, integrated management platform, and with a price model that is different from other vendors.”
Several Barracuda products are now included in the Intronis MSP offering, including the relatively recent addition of Barracuda Essentials for Office 365. This offers a comprehensive and cost-effective way for MSPs to protect their customers using Microsoft’s cloud-based productivity suite against a broad range of email-based threats including phishing, zero-day vulnerabilities, malware, and spam. Advanced Threat Detection provides an important security layer that scans email attachments in most commonly used file formats and compares them against a cryptographic hash database.
Files of unknown status are detonated in a sandbox environment within the Barracuda Cloud to observe their behaviour, and the results are then pushed into the Barracuda Real-Time System to provide protection for all customers. Emails found to contain malicious content are quarantined, and administrators and users can be notified. If no malicious content is found, regular mail processing rules apply.
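A minimal sketch of the hash-lookup-then-sandbox flow described above might look like the following. The set contents and decision labels are invented for illustration and do not represent Barracuda’s implementation:

```python
import hashlib

# Sketch of the hash-lookup step (assumed logic, not Barracuda's code):
# attachments whose cryptographic hash is already known skip the sandbox.

KNOWN_MALICIOUS = {
    # SHA-256 of the empty file, used here purely as a demo entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
KNOWN_CLEAN: set = set()  # hashes of previously analysed, benign files

def classify(attachment: bytes) -> str:
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in KNOWN_MALICIOUS:
        return "quarantine"
    if digest in KNOWN_CLEAN:
        return "deliver"
    return "sandbox"  # unknown: detonate and observe behaviour

print(classify(b""))  # -> "quarantine" (matches the demo blacklist entry)
```

Real systems maintain far larger hash databases and feed sandbox verdicts back into them, which is exactly the loop the article describes.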
In addition, with Barracuda Essentials for Office 365, customers can archive their emails using policy-based retention and search capabilities. This allows MSPs to easily and effectively help customers meet compliance requirements and address eDiscovery requests. MSPs can also automate the backup and recovery of Exchange Online emails and attachments, as well as OneDrive for Business files, to protect against data loss, corruption, and accidental deletion.
Jason explains: “We have a fantastic relationship with Microsoft, working closely with them and helping to drive consumption of the Azure platform by helping our customers to be comfortable with migration to the Microsoft Cloud.
“Microsoft Office 365 does have some elements of security and archiving, but we’re enhancing this security and compliance through our Office 365 Essentials product. As more and more companies migrate to Office 365 and the Microsoft Cloud, MSPs and resellers are losing significant revenue. Intronis is giving these organisations enhanced security and compliance products that allow them to go back to their customers and start a new conversation, with the likely outcome that Microsoft + Intronis is the preferred solution.”
As the example above demonstrates, this new, fast-moving IT world needs modern MSPs who can keep up with the changes and the opportunities they create. “Ten years ago, Barracuda was just a security company,” says Jason. “Now we’ve broadened our portfolio significantly, focusing on security and data protection and ensuring our products are easy to buy and deploy – physical, virtual and Cloud – because that’s what our customers are asking for. In the same way, I’m sure that the Channel can evolve to meet its customers’ demands, and for Intronis in the UK and Europe the opportunity is huge; we’re just scratching the surface right now. After all, if resellers don’t understand that their customers are heading to the Cloud, then they are likely to fall by the wayside.”
In terms of developing the Intronis business outside of North America, Jason is excited by the opportunity, but acknowledges that the key to expansion will be ensuring that the first chosen market, the UK, is established as a successful blueprint before heading to mainland Europe and further afield. Jason explains: “We’ve been very careful in merging Intronis into Barracuda, and this execution is going well. For Intronis, the UK is seen as the largest market outside the US to go after, and this provides me with an exciting opportunity to build the team and build out our MSP community. Clearly, establishing an Intronis presence across Europe, APAC and other markets is a huge opportunity, but we want to ensure that we grow at a rate that we can support. There’s no point in signing up hundreds of MSPs and then not being able to help them build, run and maintain their own services for their customers. After all, the Intronis portal is white-labelled, so it’s the MSPs who brand their own customer offerings, and they need to be confident in the technology and support behind these.”
Four ways an Intronis partnership sparked new growth for one MSP
Of the many examples of Intronis’s success to date in North America, its work with OXEN Technology provides a particularly good illustration of just how an MSP can benefit from the Intronis cloud-based backup solution.
OXEN Technology, formerly known as Heartland Technology Solutions (HTS), got its start as a break-fix IT provider more than 31 years ago. Of all the changes HTS has undergone over the years, including acquiring four companies and transitioning from break-fix to managed services, the past 18 months have seen some of the biggest.
Last January, Bob Gentzler became the new president of the company. One of the first projects he became involved with was identifying areas that were impeding growth. A top culprit that emerged was HTS’ BDR offering. “Our former BDR solution was hosted in a co-location facility and managed by our technicians,” Gentzler says. “Our BDR vendor had a multi-tiered sales model, which was complicated to sell and difficult for customers to comprehend. Plus, we realised that we couldn’t scale the offering by hosting it ourselves.”
The MSP did an analysis of several leading BDR vendors offering cloud-based backup solutions and chose Intronis. “We liked the simplicity of the Intronis pricing model as well as the scalability of their offering,” says Gentzler. “These factors combined with their track record of investing in research and development gave them an edge over other options. Prior to Intronis, we on-boarded one BDR client every few months, and now we are bringing on two new BDR clients a month.”
After signing up to become an Intronis partner, HTS quickly learned that the value of the partnership went beyond BDR sales. One of the first areas Intronis assisted the MSP was with its business plan. “For a long time, our company wanted to create a business plan that would communicate our purpose, values, mission, financials, and SWOT [strengths, weaknesses, opportunities, and threats], but it never happened,” says Gentzler. “Intronis offers partners a one-page business plan, which they also use. I presented this to our board of directors, and we were able to adopt it for our company. Having this document in place enables us to succinctly communicate our shared vision, and it also creates energy and excitement throughout our entire organisation.”
HTS changed its pricing model last year, too, as a result of its partnership with Intronis. Previously, the company had a 20-item a la carte offering that put selecting IT solutions and services into the hands of the customer. “This was problematic as some customers simply don’t know what they want or need,” says Gentzler. “Intronis’ thought leadership materials played a key role in educating us about industry best practices, and we incorporated some of their ideas into our managed services bundled offerings. Today, we have four managed services bundled offerings that cover the needs of about 80 percent of our customers. This new pricing structure has greatly reduced our sales cycle and time spent on customization.”
Throughout the Midwest, there are more than 4,000 businesses named Heartland. Not surprisingly, trying to brand a company with this name is an exercise in futility. HTS made a bold move to change its name, and Gentzler says Intronis was involved in this company milestone, too. “Intronis sponsored and attended the company event where we officially announced our name change.”
The MSP wanted a name that wasn’t overly used but still represented its Midwestern values. “We’re not about selling the shiniest objects; we’re about making life simple for our clients,” says Gentzler. Three descriptive words that continuously rose to the top during the planning phase were: strong, trusted, and simple. From there, the MSP began looking for something that would embody each of its core traits, and it landed on the name OXEN. “Not only do OXEN exemplify these traits, they work closely with one another to get the job done,” says Gentzler. “Also, there are no other IT companies in the Midwest with a name like OXEN Technology. It is completely unique.”
After agreeing on the name OXEN Technology, the MSP did a full rebranding, complete with a new website. Within a month of the new name going into effect, Gentzler says it had already sparked a lot of conversations with clients and prospects, and given his company an opportunity to explain and reiterate its core values and differentiators.
OXEN Technology is also in the midst of unprecedented growth. “We are projecting up to 50 percent revenue growth this year in managed and cloud services, and as much as 30 percent of our growth will come from backup and related services,” says Gentzler. “Out of 20 vendors we work with, Intronis has been the most proactive partner in providing useful information to help us grow our business. I get inundated with emails and newsletters from everywhere, but the only one I regularly open is from Intronis. In the same way OXEN work well together as a team, Intronis works very closely with its channel partners, adding strength, trust, and simplicity.”
The relationship between companies such as OXEN Technology and Intronis might be hard to follow, but Jason is confident that, by remaining forward-thinking and ahead of the competition, Intronis in Europe has a bright future. “We’re perfectly, uniquely positioned as a vendor, with our data protection and security product portfolio, to provide a complete offering to MSPs. What’s more, as the Internet of Things (IoT) and the digital world develop over time, there’s only going to be increased demand for what Barracuda Networks and Intronis offer. Billions of devices are going to need some kind of firewall, and connectivity and WAN optimisation will become more important than ever.”
Depressingly (at least for your correspondent, who has been writing about storage and security for 15+ years!), Jason highlights that there’s still a great need for education amongst the end user community to understand just how important the services that Intronis will enable MSPs to offer really are. Frequently, price is a stumbling block. But, as Jason explains in this brief example, can end users afford to do nothing about data protection? “We had an appliance on trial a few weeks ago, but the end user thought that it was a little bit expensive. As (bad) luck would have it, the very next day the end user received a ransomware demand – and the cost of the demand was higher than the cost of the appliance…”
The Intronis platform offers MSPs a series of thought leadership documents that allow them, and their customers, to understand issues such as the one outlined above. Alongside this education programme, as previously alluded to, the Intronis pricing model is a further aid to MSPs who are struggling to stay relevant in the Cloud world. The Intronis Fixed Price Plans provide an MSP with one predictable monthly fee for an entire suite of business continuity and disaster recovery (BCDR) services per SMB customer, no matter how many devices or locations.
Perhaps most interesting of all in terms of innovation and value, is the instant replacement service that Intronis offers MSPs for any appliances that they might have. After four years, the existing hardware is replaced by the latest model, free of charge.
In terms of the software, MSPs will have rather less of a wait for product upgrades and innovations(!) In the past three months, Barracuda Networks has introduced three new products or product upgrades, all of which are available via the Intronis platform. Most recently, Barracuda introduced expanded data protection functionality in its Barracuda Cloud-to-Cloud Backup offering, with the addition of Microsoft SharePoint Online backup. The new feature gives businesses using SharePoint Online granular backup and recovery options, with the ability to restore individual files that have been deleted, corrupted or encrypted by ransomware – without having to complete a full recovery of the SharePoint environment. With the new SharePoint Online functionality, Barracuda offers affordable solutions to help customers running Office 365 detect, prevent and recover from malicious attacks, and enables resellers and managed service providers (MSPs) to offer customers more robust email security and data protection offerings.
In September, Barracuda introduced the new Barracuda Email Threat Scan for Office 365. Barracuda Email Threat Scan for Office 365 is a cloud-based service that gives customers an immediate view of their email security posture by identifying latent threats within their production corporate email environments. Available free of charge for a limited time, Barracuda Email Threat Scan detects Advanced Persistent Threats, and offers remediation guidance to remove threats and incorporate Advanced Threat Detection into an overall security strategy.
And in August, Barracuda launched a virtual version of Barracuda Backup – Intronis MSP Edition. The virtual appliance is sold as a subscription and protects both physical and virtual environments. It provides a flexible, scalable backup solution for MSPs that prefer to use their own storage or are looking to reduce their investment in dedicated hardware. Similar to the physical version of Barracuda Backup – Intronis MSP Edition, which was introduced earlier this year, Barracuda’s virtual appliance option offers a robust feature set including integrated backup, compression, and deduplication capabilities, along with cloud-based centralized management. Because it is a virtual appliance, MSPs can avoid physical hardware management, such as rack space, maintenance, and eventual replacement, and reduce power and cooling costs.
At a time when the Channel is struggling to adapt to life post-Cloud, the Barracuda Networks/Intronis approach of providing a ready-made, easy-to-use data protection and security platform stands out as a major achievement in what is a sea of vendor confusion. While the ultimate threat/promise of utility computing is just a handful of companies providing all our IT needs, for the time being there’s a whole host of companies trying to build and maintain all manner of Cloud services. The significant managed services building block that Intronis provides should be an attractive proposition to MSPs and their customers alike.
DCS talks to IBM software-defined networking evangelist, Darren Parkes, about the enormous potential of Software-Defined Networking, helping end users to understand the many potential benefits that the technology offers, and how to implement a clear and realistic SDN strategy.
In a world where uncertainty appears to be the only certainty, it’s good to be able to report that the current hype surrounding software-defined networking (SDN) is based on rather more than smoke and mirrors. In simple terms, SDN offers major benefits in agility, speed of service delivery, cost savings and closer alignment of the network with business needs.
Small wonder, then, that end users are keen to explore SDN and, in many cases, to commit to some kind of SDN programme within their organisations. However, with 200+ vendors offering SDN technology, it’s no surprise that these same end users are more than a little confused as to exactly what SDN can do for their businesses, and how to start the SDN journey.
Embark on an unknown journey, and a knowledgeable and experienced guide tends to come in handy. In the case of SDN, enter IBM. Perhaps not the most obvious name in the networking space, but when one considers that there’s no shortage of SDN technology out there, but a real shortage of objective advice on which SDN architecture is best for any individual organisation to meet specific use cases, IBM fits the bill. IBM provides an unbiased approach to network transformation across multiple technologies and circuit providers. The company has a wealth of networking knowledge and experience and, just as importantly, a potentially unrivalled technology track record when it comes to understanding all of the IT infrastructure that surrounds the SDN play.
And if there is one single point to emphasise when it comes to implementing SDN, it’s that a successful SDN project is one that delivers against a client’s business imperatives and takes full account of the organisation’s current IT environment.
IBM’s Darren Parkes explains: “There are many vendor companies talking about SDN from their own perspective, and very many customers who appreciate the potential of the technology, but these end users want to know how to make the right decisions when it comes to adopting SDN. For example, they want to know: ‘Do I architect right across the data centre, or can I create an SDN domain to support specific workloads?’ At IBM, we work closely with the client to identify the right technology for their needs, focusing on the improved operational and business outcome. So we’ll help a customer identify the problem(s) they want to solve and help them choose the right SDN solution(s) to provide the agility and speed they are looking for.”
In terms of architecture, there are two basic approaches to SDN: fabric-based (underlay) or overlay, with both options managed by software that controls and orchestrates the network to deliver services. As Darren explains: “With the fabric-based approach, the software-defined agility is part of the fabric; with overlay, there’s a software-abstracted layer.”
The decision as to which of the two SDN ‘flavours’ best suits any individual end user’s requirements is a complicated one, but that’s where and why IBM’s breadth of capability across multiple SDN technology vendors comes in. As Darren explains: “With each customer we go through a series of gates, understanding the existing IT infrastructure (i.e. where the customer is on the virtualisation/software-defined journey), what outcome the customer wants to achieve – cloud connectivity, speed of adoption, agility, cost savings and the like – and taking notice of any potential non-negotiables.”
Depending on the customer requirements and objectives, the recommended approach and solution can range from advice to build on the same networking vendor and add their SDN capability, to considering multi-vendor SDN. And it’s more than likely that, if the end user is going for a data centre-wide SDN implementation, there could be more than one SDN implementation that makes up the overall solution. Happily for the customer, it’s IBM’s responsibility to understand the various networking nuances and to recommend the underlay and/or overlay approach, and then to help select specific vendors.
Darren is keen to emphasise that, key to success for the customer, SDN is all about aligning the use cases that deliver business outcomes, and not just the technology. “At a high level, SDN enables clients to deliver services in an agile fashion, at pace,” he explains. “For example, traditionally it might take, say, a bank three months to spin up a new service using its legacy data centre infrastructure and existing human resources, although the service user – let us say the head of foreign exchange – only wants the application for a few months, due to the fast pace of their given business, and then requires another change.
“Using the agility of SDN, there’s a pool of network resources available – fully aligned with storage and compute. So, if the head of foreign exchange wants a new application, it can be provisioned in minutes – three months can become 30 minutes, or less. And if the business changes its mind, the service can quickly be decommissioned for that group.”
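To make the ‘three months to 30 minutes’ point concrete, the sketch below shows the kind of API-driven provisioning SDN makes possible – a minimal Python example assuming an OpenStack-based overlay and the openstacksdk library. The article doesn’t specify IBM’s tooling, and the cloud name, network names and address range here are all hypothetical.

```python
# Minimal sketch: provisioning an isolated overlay network for a new
# application via the OpenStack Networking (Neutron) API.
# Assumes openstacksdk is installed and a cloud named "example-cloud"
# is defined in clouds.yaml; all names and CIDRs here are hypothetical.
import openstack

conn = openstack.connect(cloud="example-cloud")

# Create a tenant network; on a typical deployment Neutron realises
# this as a VXLAN/GENEVE overlay segment on top of the physical fabric.
network = conn.network.create_network(name="fx-trading-net")

# Attach an IPv4 subnet so workloads can be addressed immediately.
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="fx-trading-subnet",
    ip_version=4,
    cidr="10.42.0.0/24",
)

print(f"Provisioned {network.name} ({subnet.cidr}) in seconds, not months")

# Tearing the service down again is just as quick:
# conn.network.delete_subnet(subnet.id)
# conn.network.delete_network(network.id)
```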
As part of this process, SDN brings with it the very real possibility of broadening integration between the various IT disciplines and the DevOps teams, creating a much more joined-up, optimised approach to provisioning new applications.
However, rather than taking a one-size-fits-all approach, defining the unique use case for each customer is really important, according to Darren. “We had a customer who thought that they didn’t want SDN – they were very risk averse,” says Darren. “We asked them what they understood about SDN, carried out an in-depth assessment of their data centre and told them that they didn’t need SDN across the whole data centre, but that there was an application – a use case – worth exploring. They faced challenges around testing their network and, working in the insurance industry, a great deal of time and red tape – even months of ‘lock down’ – was spent when introducing new services, due to regulatory requirements. We explained how they could create an SDN test and development environment in the core of the data centre, which could deliver a big benefit back to the business by improving the speed of application development and the interaction with DevOps. The customer was extremely surprised that the SDN footprint could be that discrete.”
So, managing customer expectations – whether too gloomy or too optimistic – is an important part of the SDN puzzle. As Darren explains: “Often, SDN has the wow factor – it can transform the way a business runs – but does it require a new set of skills? For many customers, it’s nowhere near their old way of working, so they need to be confident that they have the necessary software programming expertise. Customers worry about whether they are ready for SDN and whether they can operate an SDN environment. And can they get the SDN and DevOps pieces to work together?”
Darren continues: “Often, the biggest challenge is from a cultural or operational perspective. If a company has been delivering network services in response to an application’s needs, downstream of the application design, then moving to a model where there is early engagement in the DevOps process requires a shift in thinking and operating.”
“And then there’s the architectural challenge – the need for the network to be so very robust that it can be fully trusted to deliver upon software-based instructions for application and service delivery. Plus a move from a traditional three-tier architecture to leaf and spine. Does the customer plan to do this in a phased approach or make the move in one go?”
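As a back-of-the-envelope illustration of the leaf-and-spine planning Darren mentions, the short sketch below calculates a leaf switch’s oversubscription ratio from its downlink and uplink capacity. The port counts and speeds are hypothetical, not drawn from any IBM reference design.

```python
# Illustrative leaf-and-spine sizing check (all figures hypothetical).
# Oversubscription = total downlink (host-facing) bandwidth
#                    / total uplink (spine-facing) bandwidth.

def oversubscription(host_ports: int, host_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    return (host_ports * host_gbps) / (uplinks * uplink_gbps)

# Example: a leaf with 48 x 10G host ports and 6 x 100G spine uplinks.
ratio = oversubscription(host_ports=48, host_gbps=10,
                         uplinks=6, uplink_gbps=100)
print(f"Oversubscription: {ratio:.1f}:1")  # 0.8:1 -> non-blocking headroom
```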
That’s why IBM’s initial involvement in SDN is in carrying out an SDN readiness assessment – a series of recommendations, or steps to readiness, advising on the technology improvements needed to move from the status quo into the future, along with the cultural and operational drivers. If necessary, there will be a red flag alert, highlighting something that must be done to enable an effective SDN implementation.
While each individual customer case might be different, it’s unlikely that any SDN solution will be fully effective without certain key components, which Darren summarises as: “Software-defined compute, storage and networking, together with enterprise-wide orchestration including Cloud connectivity – the whole data centre ‘in software’, that’s when you get to the pinnacle of the promise of SDN.”
Of course, it is possible to run SDN without the software-defined compute and/or storage piece, but that’s not where it has the greatest impact. However, plenty of customers are looking at SD-WAN solutions right now. As Darren puts it: “With a global organisation’s spend on a transit network plus a data centre network potentially running into hundreds of millions of pounds, if you can integrate SD-WAN into the overall network infrastructure there are substantial cost savings to be made – from adapting to business needs much faster to reducing the time, and costs, of implementing Add/Move/Change requests.”
In terms of the customer evaluating their own suitability for SDN, perhaps before calling in expert help, key points to consider include: how far the organisation has already travelled on the virtualisation/software-defined journey; the business outcomes sought – cloud connectivity, speed of adoption, agility, cost savings and the like; whether the necessary software programming skills exist in-house; and readiness for the cultural and operational change involved.
Openness is another issue associated with SDN. As Darren puts it: “The world seems open to being more open. More businesses are seeing it as a smart way of doing something – taking elements from Vendor A, B and C, using OpenStack and putting a solution together – their chosen ingredients to make their specific recipe. So, in the context of addressing specific customer outcomes – combining the right ingredients for open storage, servers and networks with Cloud services and enterprise-wide orchestration – I see a lot of upsides with open systems.”
As with any hot topic, there’s a very real danger of adopting a herd mentality with SDN – assuming that everyone is rushing headlong down the software-defined networking route. “When we speak to clients, we do emphasise that they don’t have to have it, depending on what they are doing today and what they want to achieve in the future,” says Darren. “For example, as a company looking at SDN, you might decide that, with 50 networking professionals who work well with your other IT departments and the business overall, and who can help stand up new applications in, say, two months – an acceptable timeline – while the organisation is making money, SDN might not be needed.”
He continues: “However, even in this case, you need to understand when you might start to lose your competitive advantage. In the retail world, for example, there are companies who will take a design they have seen on the catwalk and have it on the shop floor within two to three weeks. They must have a pretty comprehensive, optimised technology environment that allows them to do this. If you are a competitor, and you take six weeks to carry out the same process, then your business could be jeopardised.”
The intention is not to scaremonger, simply to prompt a thought process, and subsequent discussion, around what it is a business might be doing well, and what can be improved.
With some kind of Big Data capability becoming less of an optional extra and more of a must-have, along with the predicted growth of the embryonic Internet of Things market, it’s unlikely that companies will continue to thrive in such an environment without an IT infrastructure that is flexible, agile and fast. “For example, in a production environment, there might be a robot about to fail due to low oil pressure,” says Darren. “Thanks to the use of both IoT and analytics, you might get to know about this situation locally and, with the right network and IoT elements in place, you are able to move fast enough to address and overcome the problem. Greater intelligence is all very well – it’s being able to take advantage of it…”
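Darren’s robot example boils down to closing the loop between telemetry and action. The fragment below is a minimal, hypothetical sketch of that loop – a simple threshold rule on streamed oil-pressure readings – and not a description of any particular IoT product.

```python
# Hypothetical sketch: acting on IoT telemetry before a robot fails.
# A real deployment would consume readings from a message broker
# (e.g. MQTT); here a plain iterable stands in for the stream.

OIL_PRESSURE_MIN_BAR = 2.5  # hypothetical safe operating threshold

def monitor(readings, dispatch_engineer):
    """Raise a maintenance action as soon as pressure drops too low."""
    for robot_id, pressure_bar in readings:
        if pressure_bar < OIL_PRESSURE_MIN_BAR:
            dispatch_engineer(robot_id, pressure_bar)

# Example usage with canned data:
sample = [("robot-07", 3.1), ("robot-07", 2.8), ("robot-07", 2.2)]
monitor(sample, lambda rid, p: print(f"ALERT {rid}: {p} bar - schedule service"))
```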
While confidentiality is always an issue when it comes to sharing real-world examples of so many IT success stories, Darren happily provides insight into the work that IBM is doing with several unnamed organisations across Europe: “We have an insurance industry client who wants to deliver services faster and innovate in this market sector,” says Darren. “We’ve worked with them on an SDN-readiness assessment, helping the company understand what they need to be doing, in what areas. We’re currently working through that to help define the use cases, from choosing the right architecture through to selecting the right vendor(s) solution and helping with the actual implementation.”
In the case of a retail bank, IBM is helping the company with an SDN overlay solution to provide Cloud connectivity, providing input on the high level design and use cases.
Staying in the banking industry, but moving into the investment sector, IBM has helped two organisations understand the benefits of, and hence bring forward, their SD-WAN implementations.
There’s still plenty of education required, according to Darren – not everyone yet sees the value that SDN can bring – but, in concluding, he draws a parallel between wireless technology 15 or so years ago and SDN right now. “Wireless is 50 per cent technical and 50 per cent about the physical use cases,” Darren says. “And SDN is 50 per cent technology and 50 per cent about how it can be used and how end users can benefit from it – the company and the process architecture.”
Bearing in mind the prevalence of wireless today, the number of organisations beginning to explore, and setting out on, their own SDN journey seems certain to keep rising for some time to come.
Across 2016, Data Centre Solutions conducted a comprehensive survey of fire suppression in the data centre industry. As well as looking at what systems are in use today, the survey, which was commissioned by the science-based technology company 3M, also investigated improvements required by users, what is driving users’ decisions to replace or upgrade systems, plus knowledge around changes affecting fire suppression system usage. By Catherine Grainger, Brand and Communications Manager for the Novec Brand family, 3M.
Encouragingly, there seems to be growing awareness of the regulatory factors directly impacting fire suppression usage. Above all, the results point towards a growing focus on the cost of purchasing and owning a fire suppression system, with downtime, maintenance, clean-up and total cost of ownership high on the list of concerns or areas to improve.
Before we take a closer look at all the results, here is some background on the survey respondents. Via an online questionnaire, data centre managers from a broad range of industries and the public sector answered a wide variety of detailed questions. Around two-thirds have a high level of influence or are mainly responsible for purchasing or specifying fire suppression systems, with a further 20 per cent having some responsibility.
While the UK had the biggest representation, respondents contributed from around the globe, including France, Italy, Spain, the UAE, Singapore, Russia, the USA, Bulgaria, Germany, Switzerland, the Czech Republic and Canada. The survey covered a broad spectrum across small, medium and large data centres.
The survey started by looking at current fire suppression methods, with CO2 systems accounting for almost half of all installed systems. Interestingly, when asked about the disadvantages of current systems, 50 per cent of those who responded cited issues with CO2-based systems, with manual operation and maintenance highest on the list of complaints.
Viewed by many as the future of fire suppression, clean-agent systems – including Novec 1230 Fire Protection Fluid from 3M, HFC-227, inert gases and halon – are already used by approximately one third of the survey respondents. Only 15 per cent are using water/aqueous systems, and a mere six per cent foam-based systems.
Business continuity, performance and safety topped the bill when survey respondents were asked which attributes matter most when selecting a fire suppression system. Speed of clean-up after system discharge, avoiding damage to critical assets, minimal downtime, speed of suppression and human safety were the clear top five. The importance of minimising outage and disruption is perhaps unsurprising, given the nature of the data centre business, where downtime can mean lost revenue and damage to customer satisfaction and market reputation.
Perhaps this also explains the growing interest in the current generation of clean agent fire protection systems, which are designed to avoid damage to valuable equipment, leave little or no residue and – depending on the system chosen – have strong health and safety profiles.
Next on the list of criteria are total cost of ownership (TCO) and cost of agent. It is important that these are viewed as two separate entities, since a wide variety of factors can influence the lifetime cost of a fire suppression system, the purchase of the actual agent being just one element: for instance, the cost of downtime, the number of cylinders required, footprint, the emission coverage provided by the system, any additional products or equipment needed to manage pressure, maintenance, volume of fluid, refilling and so on.
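To illustrate why the up-front price of a system or agent is a poor proxy for lifetime cost, the simple sketch below compares two hypothetical systems over the 25-year life-span cited later in this article. All of the figures are invented for illustration; none come from the survey.

```python
# Hypothetical TCO sketch: a cheap agent can still be the costlier system.
# All figures are invented for illustration; none come from the 3M survey.

def tco(purchase, annual_maintenance, refills, refill_cost,
        expected_discharges, cleanup_and_downtime_per_discharge,
        years=25):
    return (purchase
            + annual_maintenance * years
            + refills * refill_cost
            + expected_discharges * cleanup_and_downtime_per_discharge)

low_capex = tco(purchase=40_000, annual_maintenance=3_000,
                refills=3, refill_cost=8_000,
                expected_discharges=2,
                cleanup_and_downtime_per_discharge=120_000)
clean_agent = tco(purchase=70_000, annual_maintenance=2_000,
                  refills=3, refill_cost=10_000,
                  expected_discharges=2,
                  cleanup_and_downtime_per_discharge=5_000)
print(f"Low-capex system 25-year TCO: £{low_capex:,}")    # £379,000
print(f"Clean-agent system 25-year TCO: £{clean_agent:,}")  # £160,000
```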
Next on data centre managers’ list of fire suppression requirements are system design compliance with industry standards, environmental and legislative impact. The latter two demonstrate a growing awareness around the regulation and restrictions to which fire suppression systems are increasingly subject, such as the F-gas Regulations and the Montreal Protocol. Recent amendments to both regulations have highlighted a clear ‘phase down’ path for future use of HFCs in fire suppression systems.
For instance, in October 2016, delegates from 197 countries reached a landmark deal to phase-down use of the fastest growing greenhouse gases, hydrofluorocarbons (HFCs), commonly used in fire suppression equipment. Some regions – including the USA and Europe – must start cutting use of HFCs as early as 2019. Given that fire suppression systems have a typical life-span of 25 years, this has immediate impact on the purchase of new (and use of existing) products.
The good news is that more than 50 per cent of survey respondents are either ‘very’ or ‘somewhat’ aware of the impact of HFC-regulatory changes on the future use of fire suppression systems, and around 60 per cent cite regulation as having an impact on decision making. Conversely, 34 per cent have little awareness and 19 per cent have none – concerning, if they are using HFC-based systems, since users have only a limited window of time within which to research, evaluate, select and install alternatives.
Fire suppression suppliers will no doubt be pleased to hear that just over half of respondents plan to upgrade an existing system or install a new one within the next 1-2 years. A further 13 per cent plan to do so within 3-5 years, with 21 per cent looking at a timescale of 5-10 years. Only 13 per cent are looking as far as ten years plus.
Cost outweighs all other factors when it comes to instigating change, with 43 per cent citing this as the biggest driver. However, while respondents had previously mentioned the importance of TCO, it only accounts for four per cent of change drivers. This might indicate that many data centre managers are focused on the ‘now’, rather than looking at the longer-term. This is a missed opportunity, since total cost-of-ownership calculations show that focusing on the up-front cost of a system or the agent being used is a false economy.
Next highest on the list of drivers for change are issues surrounding regulation, legislation, compliance, environmental issues and human safety. The fact that the latter does not score higher is probably because many data centre environments are largely unmanned, so human safety may be less of a concern compared to more public or populated locations.
The need to change systems to improve the quality of fire suppression, efficiency and protection of assets comes next on the list, with other reasons including building expansion or moving to new premises. Without more detail, it would be speculative to comment on users’ concerns around the performance of existing fire suppression systems, but factors could include speed of extinguishment, or the potential downtime caused by equipment damage and clean-up time.
Clearly, there are still some users who have limited interest in, or knowledge of, the issues surrounding fire suppression selection and usage. Plus, while cost is high on the agenda, a better understanding of the factors involved – particularly total cost of ownership – is required, and perhaps this is a role for suppliers and manufacturers of fire suppression solutions. On a more positive note, awareness of performance and downtime, plus regulatory pressures, seems to be increasingly widespread. In an era of change for fire suppression usage, this is good news for everyone concerned.