Writing the day before our Managed Services & Hosting (MSH) event, it might seem a little bit premature to start laying down the law when it comes to the world of managed services and hosting.
However, in stating that the momentum behind this ICT sector is nigh on unstoppable, I suspect that I’m not risking too much egg on my face, although, when it comes to the DCS readership, I might be accused of ‘teaching granny to suck eggs’. In the same way that many traditional, ‘legacy’ vendors have had to choose between evolving slowly to stay relevant to their customer base or taking a much more radical approach and all but starting from scratch (or at least acquiring a company that has started from scratch fairly recently), end users are faced with a stark choice – evolve your existing ICT infrastructure very slowly, sweating the old assets while bringing in little bits of the new, or rip everything out and start again – and leave the ownership and running of the ICT function to a third party.
As has been pointed out many times before, for the vast majority of companies, the value of ICT is not in owning it, but in what it can do for the business. ICT ownership is undesirable, as the owner has to watch the assets lose substantial value, fail and grow obsolete very quickly. Compared with hiring in whatever ICT infrastructure and applications are needed as and when, ICT asset ownership sounds unattractive and cumbersome – the more so when it’s obvious that there are plenty of end users who have started out without the baggage of owning their own ICT, and are thriving in the dynamic, pay-as-you-go/grow managed services and hosting world.
So, while there is a more in-depth review of MSH elsewhere in this issue (Page 24), I think it’s safe for me to predict, on MSH-eve, that anyone who still doesn’t believe in the benefits of managed services, colo and hosting could well be going the way of the dodo, the C5 and various other extinct species and obsolete objects.
The technologies on Gartner Inc.'s Hype Cycle for Emerging Technologies, 2016 reveal three distinct technology trends that are poised to be of the highest priority for organizations facing rapidly accelerating digital business innovation.
Transparently immersive experiences, the perceptual smart machine age, and the platform revolution are the three overarching technology trends that profoundly create new experiences with unrivaled intelligence and offer platforms that allow organizations to connect with new business ecosystems.
The Hype Cycle for Emerging Technologies report is the longest-running annual Gartner Hype Cycle, providing a cross-industry perspective on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market developers and emerging-technology teams should consider in developing emerging-technology portfolios.
"The Hype Cycle for Emerging Technologies is unique among most Hype Cycles because it distills insights from more than 2,000 technologies into a succinct set of must-know emerging technologies and trends that will have the single greatest impact on an organization's strategic planning," said Mike J. Walker, research director at Gartner. "This Hype Cycle specifically focuses on the set of technologies that is showing promise in delivering a high degree of competitive advantage over the next five to 10 years." (see Figure 1)
"To thrive in the digital economy, enterprise architects must continue to work with their CIOs and business leaders to proactively discover emerging technologies that will enable transformational business models for competitive advantage, maximize value through reduction of operating costs, and overcome legal and regulatory hurdles," said Mr. Walker. "This Hype Cycle provides a high-level view of important emerging trends that organizations must track, as well as the specific technologies that must be monitored."
Transparently immersive experiences: Technology will continue to become more human-centric to the point where it will introduce transparency between people, businesses and things. This relationship will become much more entwined as the evolution of technology becomes more adaptive, contextual and fluid within the workplace, at home, and interacting with businesses and other people.
Critical technologies to be considered include 4D Printing, Brain-Computer Interface, Human Augmentation, Volumetric Displays, Affective Computing, Connected Home, Nanotube Electronics, Augmented Reality, Virtual Reality and Gesture Control Devices.
The perceptual smart machine age: Smart machine technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks that will allow organizations with smart machine technologies to harness data in order to adapt to new situations and solve problems that no one has encountered previously. Enterprises that are seeking leverage in this theme should consider the following technologies: Smart Dust, Machine Learning, Virtual Personal Assistants, Cognitive Expert Advisors, Smart Data Discovery, Smart Workspace, Conversational User Interfaces, Smart Robots, Commercial UAVs (Drones), Autonomous Vehicles, Natural-Language Question Answering, Personal Analytics, Enterprise Taxonomy and Ontology Management, Data Broker PaaS (dbrPaaS), and Context Brokering.
The platform revolution: Emerging technologies are revolutionizing the concepts of how platforms are defined and used. The shift from technical infrastructure to ecosystem-enabling platforms is laying the foundations for entirely new business models that are forming the bridge between humans and technology. Within these dynamic ecosystems, organizations must proactively understand and redefine their strategy to create platform-based business models, and to exploit internal and external algorithms in order to generate value. Key platform-enabling technologies to track include Neuromorphic Hardware, Quantum Computing, Blockchain, IoT Platform, Software-Defined Security and Software-Defined Anything (SDx).
"These trends illustrate that the more organizations are able to make technology an integral part of their employees', partners' and customers' experience, the more they will be able to connect their ecosystems to platforms in new and dynamic ways," said Mr. Walker. "Also, as smart machine technologies continue to evolve, they will become part of the human experience and the digital business ecosystem."
By 2020, 21 billion Internet of Things (IoT) devices will be in use worldwide. Of these, close to 6 percent will be in use for industrial IoT applications. However, IT organizations have issues identifying these devices and characterising them as part of current network access policy, said Gartner, Inc. Infrastructure and operations (I&O) leaders must therefore update their network access policy to seamlessly address the onslaught of IoT devices.
"Having embraced a bring-your-own-device strategy, organizations must now get employee devices on the enterprise network and start addressing the 21 billion IoT devices that we project will want access to the enterprise network," said Tim Zimmermann, research vice president at Gartner. "Whether a video surveillance camera for a parking lot, a motion detector in a conference room or the HVAC for the entire building, the ability to identify, secure and isolate all IoT devices — and in particular "headless" devices — is more difficult to manage and secure.
"Many IoT devices will use the established bandwidth of the enterprise network provided by the IT organisation (wireless 1.3 Gbps of 802.11ac Wave 1 or 1.7 Gbps of 802.11ac Wave 2). However, it is important that the IT organisation works directly with facilities management (FM) and business units (BUs) to identify all devices and projects connected to the enterprise infrastructure and attaching to the network.
Once all of the devices attached to the network are identified, the IT organization must create or modify the network access policy as part of an enterprise policy enforcement strategy. This should determine if and how these devices will be connected, as well as what role they will be assigned that will govern their access.
In order to monitor access and priority of IoT devices, I&O leaders need to consider additional enterprise network best practices. These include defining a connectivity policy, as many IoT devices will be connected via Wi-Fi; performing spectrum planning, since many IoT devices may use the 2.4GHz band but not 802.11 protocols (for example Bluetooth, ZigBee or Z-Wave), which may create interference; and considering packet sniffers to identify devices that may do something undesirable on the network.
As more IoT devices are added to the enterprise network, I&O leaders will need to create virtual segments. These will allow network architects to separate all IoT assets (such as LED lights or a video camera) from other network traffic, isolating each FM application or BU process from other enterprise applications and users.
As the concept of virtual segments continues to mature, the capabilities will allow network architects to prioritize the traffic of differing virtual segments as compared with the rest of the traffic on the network. For example, security video traffic and normal enterprise application traffic may have a higher priority than LED lighting.
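To make these recommendations a little more concrete, the sketch below shows one way an identified device class could be mapped to a role, a virtual segment and a traffic priority. It is an illustrative outline only – the device classes, VLAN numbers and priorities are assumptions for the example, not anything prescribed by Gartner.

```python
# Illustrative sketch: mapping identified IoT device classes to access roles,
# virtual segments (VLANs) and traffic priorities. Device classes, VLAN IDs
# and priorities are hypothetical examples, not a prescribed standard.

from dataclasses import dataclass

@dataclass
class AccessPolicy:
    role: str        # role governing what the device may reach
    vlan: int        # virtual segment isolating this class of device
    priority: int    # relative traffic priority (higher = more important)

# Example policy table built after identifying devices with FM and the BUs
POLICY_TABLE = {
    "security_camera": AccessPolicy(role="video-surveillance", vlan=110, priority=5),
    "motion_detector": AccessPolicy(role="building-sensors",   vlan=120, priority=3),
    "hvac_controller": AccessPolicy(role="facilities",         vlan=130, priority=4),
    "led_lighting":    AccessPolicy(role="facilities",         vlan=130, priority=1),
}

def assign_access(device_class: str) -> AccessPolicy:
    """Return the access policy for a known device class.

    Unknown (unidentified) devices are quarantined rather than admitted,
    reflecting the 'identify first, then connect' approach described above.
    """
    return POLICY_TABLE.get(device_class,
                            AccessPolicy(role="quarantine", vlan=999, priority=0))

if __name__ == "__main__":
    for cls in ("security_camera", "led_lighting", "unknown_gadget"):
        p = assign_access(cls)
        print(f"{cls:>16} -> role={p.role}, vlan={p.vlan}, priority={p.priority}")
```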
A newly published update to the Worldwide Semiannual IT Spending Guide: Vertical and Company Size from IDC finds that worldwide revenues for information technology products and services will grow from nearly $2.4 trillion in 2016 to more than $2.7 trillion in 2020. This represents a compound annual growth rate (CAGR) of 3.3% for the 2015-2020 forecast period.
Among the trends in the forecast is the positive momentum displayed in big industries like financial services and manufacturing, where companies continue to invest in 3rd Platform solutions (e.g. cloud, mobility, and Big Data) as part of their digital transformation efforts. The telecommunications industry is forecast to remain relatively sluggish, although spending levels are expected to gradually improve compared to the past several years. Combined, these four industries (banking, discrete manufacturing, process manufacturing, and telecommunications, which are also the industries with the largest IT expenditures) will generate nearly a third of worldwide IT revenues throughout the forecast.
Consumer purchases accounted for nearly a quarter of all IT revenues in 2015, thanks to the ongoing smartphone explosion. But consumer spending for PCs, tablets, and smartphones has been weakening, which will have a dampening effect on the IT market overall. Looking ahead, even the moderate growth forecast for the tablet market will be driven by commercial segments rather than consumer tablet sales.
"While the consumer and public sectors have dragged on overall IT spending so far in 2016, we see stronger momentum in other key industries including financial services and manufacturing," said Stephen Minton, vice president, Customer Insights and Analysis at IDC. "Enterprise investment in new project-based initiatives, including data analytics and collaborative applications, remains strong and mid-sized companies have been especially nimble when it comes to rapidly adopting 3rd Platform technologies and solutions. Assuming the economy remains stable in 2017, smaller businesses will also begin to climb aboard the 3rd Platform in greater numbers."
Healthcare will remain the fastest growing industry with a five-year CAGR of 5.7% despite concerns that spending growth may have peaked. Banking, media, and professional services will also experience solid growth with CAGRs of 4.9% and combined revenues of more than $475 billion in 2020. Elsewhere, gradual improvement is expected in the public sector, although government purchases of technology will continue to lag behind much of the private sector. Similarly, IT expenditures in the natural resources industry are forecast to recover as the price of oil rebounds from recent lows.
"In the U.S., the greatest near-term growth is expected among healthcare providers, professional services firms, banks and securities and investment services organizations," said Jessica Goepfert, program director, Customer Insights and Analysis at IDC. "These service-based organizations are turning to 3rd Platform technologies like mobility and big data to enable more productive and meaningful ways to engage with clients. In addition to these customer-centric priorities, businesses operating in regulated environments are also turning to technology to assist with compliance."
In terms of company size, more than 45% of all IT spending worldwide will come from very large businesses (more than 1,000 employees) while the small office category (the 70-plus million small businesses with 1-9 employees) will provide roughly one quarter of all IT spending throughout the forecast period. Medium (100-499 employees) and large (500-999 employees) businesses will see the fastest growth in IT spending, each with a CAGR of 4.4%.
"The small business market has been challenged by the economic slowdown in some regions but there is now some pent-up demand for IT assets in this segment, which will materialize as the economy begins to improve," Minton added. "Meanwhile, the strongest growth is still among mid-sized companies, which are more nimble than very large enterprises and less exposed to economic volatility than the smallest businesses."
The gradual movement of data centres from on-premise services toward hosted services such as colocation, managed hosting and cloud is an important shift in enterprise IT infrastructure, compelling data centre service providers to expand their portfolios to support hybrid environments. With enterprise IT infrastructure currently being a combination of traditional and cloud services, providers need to tailor their solutions to offer management and technical aid for a complex setup.
European Data Centre Services Market is a new analysis that forms part of the IT Services & Applications Growth Partnership Service program provided by Frost & Sullivan’s Digital Transformation Team, which also covers subjects such as enterprise infrastructure transformation, physical-to-virtual back-up software, managed security services, mobile device management, software-defined networking and cloud computing.
Different service providers are pursuing different strategies. For instance, carrier-neutral colocation providers will pursue interconnections as an important business. Meanwhile, traditional hosting providers are incorporating cloud capabilities of their own or partnering with cloud service providers to enable different enterprise IT environments through managed services.
“Already, retail colocation providers are sealing partnerships with cloud providers to meet enterprise demand for cloud interconnections,” said Digital Transformation Research Analyst Shuba Ramkumar. “Colocation providers are building cloud ecosystems to allow enterprises to operate in hybrid IT environments. There is increased focus on private networking models to deliver seamless access and increase security of data centre services.”
Although Western Europe is the growth hub for colocation and managed hosting in Europe, Eastern Europe is fast catching up with cloud and hosting technology trends. The overall European market, driven by enterprise need to outsource IT infrastructure management, is expected to grow from $5.65 billion in 2015 to $10.13 billion in 2020, at a compound annual growth rate of 12.3%.
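For readers who want to sanity-check figures like these, the compound annual growth rate is simply (end value / start value) raised to the power of one over the number of years, minus one. The short sketch below reproduces the European figure from the published 2015 and 2020 endpoints; the small difference from the quoted 12.3% comes from rounding of those endpoints.

```python
# Compound annual growth rate (CAGR) from start/end values and elapsed years.
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

# Frost & Sullivan endpoints quoted above: $5.65bn (2015) to $10.13bn (2020).
growth = cagr(5.65, 10.13, 2020 - 2015)
print(f"CAGR 2015-2020: {growth:.1%}")   # ~12.4%, in line with the quoted 12.3%
```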
The data centre service market will also continue benefitting from the cost efficiency of outsourcing IT infrastructure management and enterprises’ shortage of internal IT resources. The growth of content-heavy applications and focus on data analytics and application delivery will require organisations to outsource infrastructure management. Furthermore, the rising trend of the Internet of Things necessitates robust back-office computing, which can be delivered through data centre services.
“The managed hosting market is still favoured by enterprises for applications with predictable utilization and which need dedicated servers for optimal performance,” noted Ramkumar. “Simultaneously, server virtualization technology is gaining traction, with managed hosting customers investing in managed virtualized servers instead of dedicated physical servers in the traditional hosting model.”
By Jeffrey Fidacaro, Senior Analyst, Datacenter Technologies, 451 Group.
Hyperscale datacenter operators such as Facebook, Google and Microsoft (LinkedIn’s new parent) are implementing custom open hardware and software designs under the Open Compute Project’s (OCP) standards. However, operators outside of this small group of hyperscales are finding it challenging to procure components, and to test and certify OCP-based hardware in-house.
LinkedIn formed Open19, an alternative set of open standards to OCP, to leverage existing server motherboards and power- and data-connectivity technologies in open rack-scale designs. Proponents believe Open19 could be adopted by a much broader group of datacenter operators.
Open19 is based on the widely used standard 19-inch racks (as opposed to the 21-inch OCP standard), and includes standards for form factor, power distribution (centralized using blind-mate connectors) and network connectivity (blind-mate connected copper twinax cables). IT hardware elements such as servers and storage can fit into a 2RU (rack unit) size or smaller. This report delves into more detail on the proposed Open19 rack standards.
Innovative infrastructure designs in hardware and software have enabled the large hyperscale operators to enjoy growing competitive advantages. While many of those designs have been made available to others through OCP, obstacles such as component sourcing, proprietary technologies, and hardware testing and compatibility have limited adoption by enterprise, colocation and datacenter hosting operators. LinkedIn is championing the Open19 initiative to better address these issues and correct what it views as missteps by OCP. The initial focus on cost-effective rack-scale designs and agile integration leverages existing technologies and components. Open19 is backed by a solid list of initial partners, but still has quite a bit of ground to cover to catch up to OCP. However, unlike OCP, which overturns existing architectures, Open19 designs could be adopted alongside them, making broader appeal likely. The challenge remains to attract not only hyperscales, but also datacenter operators that value availability and uptime and make long-term investments.
Hyperscale datacenter operators – companies such as Facebook, Google and Microsoft – are implementing custom open hardware and software designs to optimize their datacenter infrastructure for their specific application needs. The benefit has been significant gains in competitive advantages. For the most part, these custom designs fall under the Open Compute Project’s ecosystem. (The development of the OCP effort has been widely tracked by 451 Research – for more details see previous reports listed below.)
While the large hyperscale players have embraced OCP, broader adoption of the OCP standards has, in some ways, been slower than anticipated. This can be attributed to a number of hurdles facing customers outside of an elite group of large hyperscale operators. Two of the largest issues are sourcing components for OCP-designed hardware, which is typically procured in smaller lots at a fraction of the scale the hyperscales order in, and internally staffing the engineering competence necessary to test and certify OCP equipment.
Open19 looks to sidestep these issues by using any 19-inch rack and retrofitting standard motherboards into Open19 enclosures, thereby eliminating the need for special OCP-based server form factors and racks. The Open19 rack-scale design strives to make component procurement easier – a strong case for LinkedIn to pursue an alternative standard to OCP – and to simplify rack integration.
In a recent 451 Research report, we highlighted a third OCP adoption issue: datacenter Tier certification. In a move that will likely prove important for the increased acceptance of OCP, the Uptime Institute, which is part of The 451 Group, recently assessed OCP’s technology and designs, and confirmed that it is entirely possible to build and operate a Tier-compliant datacenter using OCP technology. If OCP can be certified, then it seems very likely there would be no issues for Open19 as well. This is a particularly important development for OCP, as well as for Open19 adoption by enterprise, colocation and hosting datacenter operators.
The ambition behind Open19 is to develop a low-cost ecosystem of suppliers and systems integrators for rack-scale infrastructure. With a focus mostly on innovation at the rack level, the goal of the Open19 initiative is to provide lower cost per rack (by about 50%), lower cost per server, optimized power utilization, faster rack integration and (ultimately) broader adoption relative to OCP by leveraging existing technologies and components that are relatively easy to procure. The Open19 open standards address form factor, power distribution and connectors, network connectivity, and rack management. We address these factors in more detail below.
LinkedIn is in the final stages of design, and expects to have systems in its lab in early September. Preliminary specifications for the Open19 components (brick cages, brick servers, power shelf and networking) are anticipated in early October.
One of the key differences is that the Open19 standards can be integrated with any 19-inch rack (off the shelf), compared with OCP Open Rack designs that are typically 21 inches wide and available from a limited number of suppliers. Open19 also utilizes standard motherboards that can be fitted into enclosures that form a single ‘brick,’ ‘double brick’ and ‘double high brick’ server form factor. The largest size is equivalent to a standard 2RU. (Note that OCP’s Open Rack uses a slightly taller rack unit, called an OpenU (or OU), that is 48mm high versus a traditional RU of 44.5mm.) It is our understanding that the smaller 19-inch OCP motherboards could also be potentially fitted into an Open19 enclosure to ensure compatibility and interoperability with an Open19 rack.
Rather than individual server power supplies (one or two per server node), Open19 favors centralized power distribution (similar to OCP) using a 12V power supply shelf. This has been shown to be a big power-saving design point for OCP, and reduces heat generation. Instead of using the cableless bus bars for distribution found in OCP designs, Open19 uses blind-mate (snap-on) connectors to clip into a power cable. A blind-mate connector is expected to cost a fraction of a bus-bar connector, helping to lower the common costs in the rack.
Will Open19 consider power design alternatives, such as the 48V design that Google recently contributed to OCP? Open19 says it’s not likely – recall that its goal is to simplify rack integration, and 48V power distribution will require extra components to step it down to 12V and meet the standard input voltage for current motherboards.
OCP’s Open Rack contains two PDUs to supply both AC and DC power. Battery backup units (BBUs) inside each OCP power shelf in a rack can supply DC power directly to the servers in the event that AC power is lost. Open19 standards also provide for an optional BBU (most likely lithium-ion) in the rack. The combination of power supply units and BBUs in a rack under both the OCP and Open19 standards could eliminate the need for expensive centralized uninterruptable power supplies.
The most likely Ethernet speed upgrade path for server connections is looking like 10Gbps to 25Gbps to 100Gbps. According to Panduit, a leader in physical infrastructure including datacenter and rack connectivity, 25Gb provides a more efficient use of hardware and a logical upgrade path to 100Gb. The Open19 initiative agrees – its rack data connectivity offering is based on four channels of three-meter 25Gb twinax (copper) cables using blind-mate connectors to clip into mechanical holes in the back of the Open19 rack cage. This will enable up to 100Gb per server brick (4x25Gb), and grows linearly with double bricks up to 200Gb, and double high bricks up to 400Gb.
The use of twinax cables and blind-mate connectors is expected to make Open19’s offering much more cost-effective and to reduce rack integration times compared with OCP, which connects traditional DAC (direct-attached copper) cables using QSFP (quad small form-factor pluggable) connectors. Out-of-rack network connectivity under Open19 is expected to still require the use of optical cables, and it is assessing optical connections in the rack for future designs.
The Open19 network switch will be open hardware and software, but differs from standard top-of-rack switches available today. Based on LinkedIn’s past experience with building some of its own networking switches, it believes Open19 switches could cost approximately 25% less than traditional switches. The Open19 networking standards also eliminate the need for local integrated fabrics – found in blade server chassis, for instance – since the aggregation of the rack network and connection to the datacenter fabric will be performed by the Open19 network switch.
Each server in an Open19 rack is independent and needs to have its own cooling system. LinkedIn designed self-cooling racks for its Oregon datacenter, including servers that were open from front to back to aid in the cooling process. The concept of self-cooled elements, whereby the motherboard suppliers are responsible for local fans and temperature sensors, for instance, differs from some of the open space centralized cooling designs from OCP. LinkedIn’s approach is more aligned with the traditional industry approach, which puts the onus on server suppliers to engineer internal airflow and cooling. Suppliers and some customers may find this more conservative approach less risky.
With Open19, LinkedIn aspires to make rack integration two to three times faster. Systems integrators such as Flextronics, Sanmina, Celestica and Hyve Solutions are all Open19 partners. Testing and validating interoperability, mentioned earlier as a key issue for OCP adoption, is expected to be a separate layer, but executed more by the Open19 ecosystem (which is built around suppliers) than left as the responsibility of the end user. Some initial testing partners include Flextronics and StackVelocity (a division in Jabil Circuit), which also design, integrate and validate OCP equipment. The Open19 standards are ultimately expected to allow customers to ‘rack and roll.’
Open19 racks will be fully disaggregated, meaning each element in the rack is self-sustained and connected to a management network. Off-the-shelf management systems will work, but LinkedIn is helping to develop specific open management software that could be contributed to Open19 as an open source project. To be sure, Open19 promotes out-of-rack (collective) management – with no rack-level controller or supervisor. This may have some implications – and possibly bring opportunities – for DCIM suppliers.
The Open19 project has attracted a solid list of 21 partners including LinkedIn, although nearly two-thirds are also members of OCP, with five classified as either OCP platinum or gold member status.
This suggests the partners believe their individual experiences in open source designs will translate easily, and are likely hedging their bets as they chase incremental market opportunities. We expect the list of partners to continue to grow – LinkedIn is currently in discussions with many more suppliers to support the Open19 standards.
On a final note, LinkedIn’s new parent, Microsoft, interestingly joined OCP about two years ago, and is currently standardizing its datacenter hardware on OCP-based designs. So it’s unclear at the moment as to how the two disparate datacenter infrastructure strategies will play out – or potentially converge.
By Richard Austin, SNIA Security Technical Work Group member.
“Security” is an often-discussed word these days and most everyone agrees that having more of it is better as if one could walk into the nearest consultancy and say “I’d like 3 tons of security please.” The problem is that security is not really a thing at all but rather a sort of condition. This “secure” condition can be said to exist when all the risks an organization faces are managed down to its risk tolerance at some point in time.
There are a lot of odd words in that last sentence – “risk”, “management” and “risk tolerance” – and it turns out that most of the confusion about this thing called “security” is based on a lack of understanding of those terms.
We all have an idea of what risk is - we may have an auto accident on the way to work this morning, it may rain today, the stock market may move in an unexpected direction, our business competitor may introduce a significant new product. Each of those possible situations has some things in common: each would involve some loss, each has some likelihood of actually happening, and each can be made less likely or less damaging by things we put in place beforehand (mitigations).
We could put those three factors together into a sort of equation that says that risk is determined by the likelihood of the loss and the size of the loss reduced by whatever we put in place to mitigate the risk.
Risk = f (Loss, Likelihood) - Mitigations
As you apply more mitigations, risk will decrease so how do you know when you’re done? Organizations have a tolerance for risk that varies – some are very conservative and have a very low tolerance for risk. Some, for example a dynamic start-up, have much higher tolerances. You are finished mitigating risks when the remaining risk is within the tolerance of your organization (or you run out of budget, whichever comes first).
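As a purely illustrative sketch of that idea – the losses, likelihoods, costs, tolerance and budget below are invented numbers, not drawn from any standard – risk can be scored as likelihood multiplied by loss, with mitigations applied in order of risk reduced per pound spent until the residual risk falls within tolerance or the budget runs out, whichever comes first.

```python
# Illustrative risk-mitigation loop: Risk = f(Loss, Likelihood) - Mitigations.
# All figures (losses, likelihoods, costs, tolerance, budget) are invented
# examples used only to show the mechanics described in the text.

def residual_risk(loss: float, likelihood: float, reductions: float) -> float:
    """Score risk as expected loss, less whatever the applied mitigations remove."""
    return max(loss * likelihood - reductions, 0.0)

loss, likelihood = 500_000.0, 0.10          # e.g. a £500k incident, 10% likely per year
tolerance, budget = 10_000.0, 40_000.0      # acceptable residual risk and spend limit

# Candidate mitigations: (name, cost, risk reduction)
mitigations = [
    ("firewall",       15_000.0, 20_000.0),
    ("email gateway",  10_000.0, 15_000.0),
    ("staff training",  5_000.0,  8_000.0),
]

applied_reduction = spent = 0.0
# Apply the most cost-effective mitigations first (risk reduction per pound spent).
for name, cost, reduction in sorted(mitigations, key=lambda m: m[2] / m[1], reverse=True):
    if residual_risk(loss, likelihood, applied_reduction) <= tolerance or spent + cost > budget:
        break
    applied_reduction += reduction
    spent += cost
    print(f"applied {name}: spent £{spent:,.0f}, "
          f"residual risk £{residual_risk(loss, likelihood, applied_reduction):,.0f}")
```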
A cautionary note: the information security industry today spends a lot of time talking about compliance, which basically means verifiably doing what some (hopefully) knowledgeable and responsible body has decided is necessary. For example, if you process payment card information, you must comply with the PCI-DSS requirements, or if you store or process personal health information in the USA, you must comply with HIPAA, and the list goes on. However, “security” is a different thing than “compliance”. Compliance basically means that you have followed all the applicable items on some checklist while security implies that all the important risks in your environment have been mitigated to an acceptable level.
Risk management is basically concerned with assuring that an organization identifies the risks in its environment and mitigates them as much as possible. In an ideal world, there would be a risk assessment, which produced an ordered list of risks, highest to lowest, and we would manage those risks by applying controls (mitigations).
For example, we might purchase and install a firewall to limit the types of traffic incoming to our corporate webservers; we might purchase an e-mail gateway that could scan all the e-mails coming into our domain for malware and so on. These mitigations have costs both in acquiring them and operating them and our budget, supply of qualified people, etc., will always be limited so it’s important that we apply mitigations where they will produce the greatest reductions in risk for our organization.
Risk is very much dependent on the particular organization and its circumstances, so it’s not really possible to talk about specific risks in a general way. The things that give rise to risk can be discussed much more generally, so let’s take a closer look at the things which may give rise to risk in our particular environment.
Threats make use of (exploit) vulnerabilities (“weakness of an asset or control that can be exploited by a threat”, ISO/IEC 27032). Examples of vulnerabilities are the ever-present software defects that keep our patch teams busy or a careless employee who clicks on a link in an e-mail from a stranger with the subject “I Love You!”
Successful exploit of a vulnerability causes an event (“identified occurrence of a system or network state indicating a possible breach of information security policy or failure of controls, or a previously unknown situation that may be security relevant”, ISO/IEC 27000) which may have consequences that interest us (again, this isn’t certain as, for example, the adversaries’ desired consequence may be blocked by a control such as an anti-malware product blocking the malicious link in the “I Love You!” e-mail). Another useful concept when thinking about threat agents and vulnerabilities is threat vector (ISO refers to this as an attack vector “path or means by which an attacker can gain access to a computer or network server in order to deliver a malicious outcome”, ISO/IEC 27032). A threat vector is the line that connects a threat agent to the vulnerability. In the “I Love You” example, the e-mail is the threat vector.
If the consequence is significant enough, it may give rise to a security incident (“single or series of unwanted information security events that have a significant probability of compromising business operations or threatening information security”, ISO/IEC 27000). An incident indicates that the adversary likely has achieved their objective in penetrating our defenses.
Though the terms may seem foreign and somewhat cryptic, they embody an approach to estimating the likelihood and magnitude of a loss. Likelihood depends on threats and vulnerabilities (and how easily a threat may exploit a vulnerability) while consequences measure the loss.
Storage management is quite powerful and also fairly exposed, as it must be accessible to those charged with managing the storage infrastructure (whether present locally or remotely), vendor support personnel (usually remote) and auditors. Couple this with the common practice of out-of-band management over a TCP/IP network and it becomes an attractive target for adversaries. And, if there is one area where our adversaries excel, it’s in mounting attacks across TCP/IP networks. ISO/IEC 27040 provides solid guidance in securing storage management and that guidance is summarized in the whitepaper.
The SNIA security experts have just published a whitepaper, ‘Storage Security: An Overview as Applied to Storage Management’, which is located on the SNIA.org/Security webpage. To learn more about the SNIA and its storage security work on other topics, such as sanitization, encryption, key management, and the SNIA specification on Transport Layer Security, please visit:
By Steve Hone, CEO, Data Centre Alliance
There are almost as many definitions of a smart city as there are vendors selling smart city solutions; these range from IoT (Internet of Things) connected devices to active traffic management and all points in between.
To me, a smart city is exactly that, a smart city. So what does a smart city actually mean?
Essentially, a smart city is a central command and control system; it needs to be able to access all systems and, more importantly, interact with those systems in a sensible, safe manner. A truly smart city is one where its infrastructure – power, water, gas and communications networks – links to its buildings, transport systems, supply chain and logistics for the benefit of the users (you and me), and to reduce the resources used across the entire cityscape by its citizens.
Ideally, everything and everybody (yes us) would be identifiable and visible to a network of control and monitoring systems that assist us in navigating our daily lives.
Take an average person who works in the city. They are connected via their smart device (normally a phone). As they leave home and climb into their electric smart car, the best route to their place of work is calculated. En route, an accident blocks the optimum route; the system knows this and changes the route (a bit like a sat nav system). This route will take marginally longer, so the smart device automatically updates their colleagues and boss that they will be slightly delayed. On arrival at work they swipe their access card through the turnstiles, and the building system fires up their work station, turns on the PC and tells the coffee machine to start brewing their latte, just the way they like it.
While they are at work, having already selected what they fancy for dinner, the fridge at home has taken an inventory of its contents and, based on what’s missing and needed, has placed an order via an online supermarket for what they will need to make the evening meal, based on the best deals of the day. When leaving the office they notify the central system that they are heading home. The system tracks their progress, starts running a bath and preheats the oven based on their real-time ETA.
Although this may all sound like a Back to the Future movie, almost everything described above is actually available today. The problem is that it is made up of a network of separate systems acting completely independently of each other. There appears to be very little interconnection or communication between these systems, so currently I’d say that we have a few hurdles to overcome before the above vision becomes a reality of everyday life.
So what does this mean for the world of data centres? Well, one thing’s for sure: an effective smart city involves a massive amount of number crunching, and to handle that we need to have the right infrastructure in the right places. There has been much talk lately about a move to the edge.
Some large technology companies are now experimenting with local “edge” data centres, and there are plenty of companies prepared to assist in the “move to the edge”.
So, what is the edge? Well, in the data centre world the “edge” is, in effect, a network of smaller data centres that we would class as Tier 1/2, i.e. no resilience, no backup generators and probably no cooling.
These pods, which are very similar to mobile phone towers, operate in the local environment, and over time they will become as ubiquitous as those green boxes you see at road junctions controlling the traffic signals. They will be connected via fibre or mobile phone systems to the other edge data centres in their cells; users will be simultaneously connected to two or more cells as a minimum, and local data processing will take place there in real time rather than being routed back to the traditional, larger DC facilities we have now.
Why is this better? Well, the answer is simple: service delivery speed. For example, unless you want your smart car to crash into someone else, we need to make damn sure it’s able to react and respond faster than you can!
The use of colocation and cloud-based services is now commonplace. Ironically, though, it was not that long ago that every organisation had a little server room chugging away under the stairs (some still do). With edge computing – largely driven by the need to process local/smart city information in near or actual real time to ensure it delivers what we want – I think we may well be seeing the start of a return to small server hubs, buildings or rooms, although this time around they will use renewable energy, reuse any heat and be energy optimised; they will be hot swappable and they will be self-healing.
In summary, there is no doubt that we are moving ever closer to the IoT/smart city vision and I am sure we will all benefit from the advantages it will bring. However, to realise this vision we first need to make sure we have the right infrastructure in the right place and, more importantly, the network in place to support it. This, I fear, is currently the weak link in the chain due to the sheer amount of data a truly smart city will generate.
I would like to thank all the DCA Trade Association members who have contributed to this month’s DCA Journal, with special thanks to John Booth from Carbon3IT for helping me with this foreword.
If you would like to submit an article in the next edition then please contact Kelly Edmond at: kellye@datacentrealliance.org
For close to 150 years energy has been seen as a necessary cost burden with limited control over its use. But there is a growing demand from energy users to take more control of their energy. The result? An opportunity for competitive advantage and a new era for energy supply. By Russell Park, Customer Solutions Manager for British Gas Business
As early as 1868 the first hydro-electric power station was designed and built by Lord Armstrong at Cragside, England. It used water from lakes on his estate to power Siemens dynamos. The electricity supplied power to lights, heating, produced hot water, ran an elevator as well as labour-saving devices and farm buildings.
Then in 1882 the first public power station, the Edison Electric Light Station, was built in London. This supplied electricity to premises in the area that could be reached through the culverts of the viaduct without digging up the road, which was then a monopoly of the gas companies.
In the same year in New York, the Pearl Street Station was established also by Edison to provide electric lighting in the lower Manhattan Island area.
Earlier this year Jorge Pikunic - MD of Centrica’s recently created Distributed Energy and Power business - referred to these pioneering ventures when he talked about some much more recent innovations in energy, ‘the internet of things’ and smart cities at the World Summit on Innovation & Entrepreneurship at the United Nations in New York.
He spoke about current trends in energy, a critical component in the operation of Data Centres, and how energy services providers are responding to the trends that are fundamentally changing energy as we know it.
The first trend is the growth of renewable energy. Statistics published by the UK Government this summer showed that a quarter of the country’s electricity was generated from renewables last year – an increase of 29% on 2014. Nearly half of this (48%) came from wind power alone with renewable energy sources overall outstripping coal-fired generation for the first time.
Second, people’s attitudes towards energy are changing as a result. Suppliers now accept that their customers want choice and more control. It’s no longer enough to simply offer the cheapest tariff.
The third, and perhaps the biggest contributor to change, is the availability and sophistication of new technology.
Electricity storage, as just one example, is poised to become an established and affordable technology. The price of lithium-ion batteries fell by 53% between 2012 and 2015, with Tesla investing $5bn in the world’s largest factory, or ‘Gigafactory’, which it is claimed will produce over half a million batteries each year. Their expectation is that costs will fall further still, bringing prices per kWh of energy storage down to under $100 by 2020. Pikunic also highlighted the importance of changes in digital technology and data analytics, as the ‘internet of things’ looks set to have a fundamental impact on the way we do business. It’s estimated that the world will have 50 billion connected devices by 2020, and this will be reflected in our energy systems.
A survey of UK businesses late last year showed that around half had started to work on a strategy but were struggling to finish it. Big data and analytics can be the first step in the journey for many as a tool to give them visibility into their operations by analysing energy usage.
For example, using non-invasive wireless sensor technology coupled with cloud-based analytics, Centrica’s Panoramic Power solution brings consumers all the data they need to identify maintenance, operations and energy opportunities across their business footprint. The first action that customers typically choose to take is the installation of energy efficiency measures, with potential savings of up to 15%, maybe even more, off their bills.
But it needn’t stop there. Customers can also use this digital insight to create operational efficiencies and even predict equipment failures before they happen. The system works by learning usage patterns for any device such as lighting, HVAC systems and production lines, which it then uses to understand and monitor device sequencing, detect anomalies and automatically generate operational insights. Digital technology is also opening up the opportunity for businesses to harness the flexibility of their assets, using demand side response to unlock the hidden value of their assets through new revenue streams.
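The underlying pattern-learning idea can be sketched very simply. The example below is a generic illustration, not Centrica’s or Panoramic Power’s actual method: learn a device’s typical consumption from historical readings and flag new readings that fall well outside it.

```python
# Generic illustration of device-level anomaly detection on power readings:
# learn a baseline (mean and spread) from historical samples, then flag new
# readings that deviate strongly. Not the vendor's actual algorithm.

from statistics import mean, stdev

def detect_anomalies(history: list[float], new_readings: list[float],
                     threshold: float = 3.0) -> list[tuple[float, bool]]:
    """Flag readings more than `threshold` standard deviations from the baseline."""
    baseline, spread = mean(history), stdev(history)
    return [(r, abs(r - baseline) > threshold * spread) for r in new_readings]

# Hourly kW draw of an HVAC unit over a normal period (illustrative numbers)
history = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.3, 4.2, 4.0, 4.1]
new_readings = [4.2, 4.3, 7.9]   # the last reading might indicate a failing component

for reading, is_anomaly in detect_anomalies(history, new_readings):
    status = "ANOMALY" if is_anomaly else "ok"
    print(f"{reading:.1f} kW -> {status}")
```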
Energy users can become small energy generators in their own right by hooking their assets up to Energy Control Centres, with trading experts working to get value from them by turning assets down or off for short periods of time with no impact on their operations. The aim is to save money by reducing ‘use of system’ charges while selling the flexibility of their assets to the relevant markets and dispatching them in the same way that they would dispatch a power station.
The potential benefits of this approach are wide-ranging with smart energy management helping customers to save, or even make money, while taking the strain off conventional centralised generation plants and an increasingly shaky electricity grid.
Quite simply, the three trends of renewables, changing consumer attitudes, and accessibility of technology mean that the days of the passive consumer are gone.
It also means that the location of energy – where energy is generated and managed – is changing. Energy will be generated closer to the point of consumption, and suppliers are seeing the emergence of local energy systems and micro-grids.
When asked what the internet of things means for energy, Pikunic finished with this: “It’s the democratisation of energy. From a handful of players today to tens of thousands of large energy users and millions of households and small companies - all playing their part in the future of energy.”
Not too dissimilar to Lord Armstrong’s idea nearly 150 years ago.
Russell Park is Customer Solutions Manager for British Gas Business. To get in touch or find out more, go to:
www.centrica.com/takecontrol
By Nicholas Jeffery, Director of Data Centre Solutions Group, CBRE
The “smart city” might sound like an idea from the future, but that future is now; in fact, many key capabilities already exist today. Remote building environmental controls, traffic mapping apps, automated parking systems – a wide range of smart technologies are up and running in municipalities around the world.
Technological hurdles remain, of course, but one of the main challenges of smart city development is not so much the creation of new technology as the better implementation and integration of those technologies currently in use. It’s a matter of connecting the individual silos that already exist; the example of my morning commute into work illustrates the challenge.
For my drive into London, I might use a smartphone app like Waze, a community-based traffic and navigation system that plots the most efficient route based on real-time traffic data generated by its users. Then, upon entering central London, I’ll pay the city’s congestion charge and make my way to my office, where I’ll search for a parking space. With better integration, however, my smartphone app might move from picking the best route to helping me find parking once I near my office.
The application could tell me: “Nicholas, the best car park that has space left is not the one you usually go to; please allow me to direct you to another one”. When I get there, a space is already reserved for me; as I drive in, the camera recognises my car and tags me as arriving at 9:15 in the morning, and when I leave it tags me on the way out and bills me.
Additionally, the congestion charge system might not only assess the charge but also gather information on where and when I entered the zone and at what point and when I left it, allowing city traffic flow managers to better understand the migration of traffic through the capital.
Central to all this is the ability to handle, coordinate, and analyse the vast quantities of data a well-integrated smart city would generate.
People are expecting devices like autonomous vehicles and their refrigerators and home security systems and mobile devices to all be connected. And that means connecting and processing the vast amounts of data these devices produce in meaningful ways that lead to meaningful improvements in quality of life.
My team at CBRE sees the data centre as the beating heart of a smart city, because without a robust and resilient data centre (regardless of whether it is in the cloud or not), you can’t collect, analyse, store and archive all of the data you are gathering from a smart city or its IoT devices. I see it like the Vitruvian Man, where the brain is the IoT device making all the intelligent decisions and the veins are the wireless and wireline access lines pumping data around the city to the extremities, like the fingers and toes. As we know, the network has to be strong, otherwise the city/body will experience poor circulation/packet loss to the remote buildings such as utilities and remote offices.
My colleagues and I have begun presenting this notion of “data centres as being a big part of a smart city” to leaders of municipalities around the world and have received an enormous response.
Additionally, CBRE has gathered into one group resources addressing the full range of smart city needs, ranging from smart building technologies to labour force analytics. The company is also building strategic alliances with firms that have needed know-how in other areas key to smart city development, like mass transit, for instance.
And while it might at first seem curious that a real estate firm would take such a central role in smart city development—as opposed to, for instance, a Silicon Valley tech stalwart—it makes sense once you consider the fact that much of the data essential to a smart city’s operations is generated in real estate.
From smart parking to transportation to green and automated office space to the locations of the municipal data centres themselves, real estate is central to the very notion of a smart city.
Understanding the role of new thermal management strategies and intelligent technologies. By Luca Rozzoni, European Business Development Manager, Chatsworth Products Inc. (CPI)
New demands around cloud computing, big data and infrastructure efficiency mean that the data centre is currently undergoing a massive period of change. This is being driven by more users and more data, combined with more reliance on the infrastructure that makes up the data centre.
With private cloud technologies and the evolution of the Internet of Things (IoT), working with the right data centre optimisation technologies to ensure an uninterrupted service of the highest quality has never been more important.
Data centre managers need to understand how to better control their resources, align with the business and create greater levels of efficiency that can keep up with modern business demands. Data centre operators have to be more proactive than ever, reshaping their strategies to allow for greater capacity and expansion across the various areas of the IT infrastructure. It’s vital to identify where resources are currently allocated and how they can best be optimised. New thermal management strategies and intelligent technologies have the potential to play a key role in achieving this. Additionally, data centre administrators have to face other concerns such as operating and upgrading costs, redundancy and uptime. And in parallel to this growing demand, there are energy efficiency targets that must be met to address current environmental laws.
To successfully sustain higher power densities in the data centre, it is crucial to define the power requirements and monitor power use. Resource needs will fluctuate, so it’s important not to be limited by architecture that specifies limited power capabilities. Data centres need to be looking for technologies that allow for Tier IV operation, with redundant systems that remove single points of failure. Like power, cooling is also critical to keep operations running efficiently. Cooling energy inefficiencies in the data centre can be caused by: poor separation of hot and cold air, causing loss of cooling effectiveness; air leaking through cabinets, allowing hot air to circulate back into equipment inlets instead of flowing back into the CRAC units; and airflow obstructions that constrict cooling airflow. New kinds of cooling and energy efficiency technologies can help organisations achieve the coveted LEED/BREEAM certification, which is one of the most reputable efficiency marks a facility can obtain today.
As rack heat densities approach and exceed 5kW, cooling optimisation technologies offer methods such as containment systems, cabinets with enhanced sealing features and energy-efficient computer room layouts.
New types of aisle containment systems address thermal management, improving data centre operational efficiency and reclaiming lost power. It’s critical to ensure that airflow is well controlled and that hot/cold aisle containment is in place. Aisle containment provides physical separation of cold and hot exhaust air by enclosing the hot or cold aisle or ducting hot air away from cabinets with “chimneys” that facilitate a cool air supply to equipment air intakes at the desired, uniform temperature.
Hot aisle containment or ducted cabinets provide similar results. In airflow management, the separation of hot and cold air within the server room is the first critical step to maximising cooling system efficiency. Once airflows are separated, there is a wide range of adjustments to cooling systems that reduce operating cost and increase efficiency. Successful airflow management also increases “free cooling” hours. Optimising the data centre not only helps an organisation regain control over valuable resources, it helps to plan for the future. The data centre will continue to evolve and expand, and new technologies will affect how you deploy resources, optimise workloads and even integrate cloud computing. A best practice approach will ensure the data centre runs more efficiently both now and into the future.
To achieve this, it is vital to consider several key areas. The first is to address airflow management, separating hot and cold air within your equipment rooms to maximise the cold air delivered through equipment. Secondly, you also need to remove constraints around critical airflow design, opening the door to higher power and heat densities. High-density data centres feature robust airflow management design and practices where the cabinets function as a complete isolation barrier between supply and return air. Tracking rack conditions and environmental variables is also vital. Keeping track of environmental variables will help create a more efficient rack design. Some servers generate more heat, while others may need more power. By seeing which system is taking up which resources, administrators can better position their environment for optimal use.
The monitoring of both power and cooling is also an essential part of an overall best practice approach. It’s imperative to always monitor the power consumption rates of your environment and look for ways to save on power based on requirements. Also, as space becomes a concern, consider the adoption of systems that can support space-conscious upgrade cycles and equipment capable of higher heat/power densities while still using the same space.
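As a simple illustration of that monitoring step – the rack names, readings and the 5kW figure below are example values echoing the density mentioned earlier, not measurements from any real facility – per-rack power draw can be aggregated from outlet-level readings and flagged where it approaches the planned density.

```python
# Illustrative rack power monitoring: aggregate outlet-level readings per rack
# and flag racks approaching a planned heat/power density. Rack names, readings
# and the 5 kW threshold are example values only.

RACK_DENSITY_LIMIT_KW = 5.0

# Outlet-level readings (kW) gathered from metered PDUs, grouped by rack
readings = {
    "rack-A01": [0.8, 1.1, 0.9, 1.2],
    "rack-A02": [1.6, 1.7, 1.4, 1.3],
    "rack-B01": [0.4, 0.5],
}

for rack, outlets in readings.items():
    total_kw = sum(outlets)
    utilisation = total_kw / RACK_DENSITY_LIMIT_KW
    flag = "REVIEW AIRFLOW/CAPACITY" if utilisation >= 0.9 else "ok"
    print(f"{rack}: {total_kw:.1f} kW ({utilisation:.0%} of limit) -> {flag}")
```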
Cooling monitoring can be outlined as part of a Service Level Agreement (SLA). Alternatively, an organisation can monitor cooling manually. New kinds of cooling systems can help support cloud systems, new levels of convergence and a quickly evolving business model. Monitoring uptime and status reports is also essential, regularly checking individual system uptime reports and keeping an eye on the status of various systems. Having an aggregate report will help administrators better understand how their environment is performing and enable managers to make efficiency modifications.
Finally, it is worth considering budgeting for new airflow and HVAC optimisation systems. For example, with a ducted exhaust system, every bit of cold air produced by the HVAC system has to pass through a server. The only path between supply air and return air is one of heat transfer through a server, so there is no waste. There is no bypass, and no need for the overprovisioning that is required in standard hot aisle/cold aisle data centres (typically 200-300%).
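To make the overprovisioning point concrete, here is a rough, purely illustrative calculation comparing the supply airflow required under a conventional 200-300% provisioning regime with a ducted exhaust approach supplying close to a 1:1 ratio. The CFM figure and multipliers are assumptions chosen only to show the scale of the difference.

# Rough illustration of the overprovisioning point above. All figures are
# assumptions for the example only.

it_airflow_cfm = 10_000          # air actually drawn through the servers

standard_multiplier = 2.5        # typical 200-300% provisioning, per the article
ducted_multiplier = 1.05         # ducted exhaust: close to 1:1, small margin

standard_supply = it_airflow_cfm * standard_multiplier
ducted_supply = it_airflow_cfm * ducted_multiplier

print(f"Standard hot/cold aisle supply: {standard_supply:,.0f} CFM")
print(f"Ducted exhaust supply:          {ducted_supply:,.0f} CFM")
print(f"Bypass air avoided:             {standard_supply - ducted_supply:,.0f} CFM")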
To have an optimally running data centre that can support technologies such as convergence and cloud computing, your organisation will have to take data centre infrastructure to a new level. New kinds of cooling technologies and power systems combined with a carefully planned best practice approach aim to create an even healthier data centre ecosystem capable of evolving with the Internet of Things and new, emerging trends.
Further information on the best practices outlined in this article can be found in ‘Data Centre Optimisation: A Guide to Creating Better Efficiency and Improving Rack Heat Density in Air Cooled Facilities’, a new white paper by Chatsworth Products Inc. For a copy, please visit: http://pages.chatsworth.com/data-center-optimization-whitepaper0816.html
By Dr Richard Govier, Technical Director, Socomec UK
The relentless increase in the demand for energy is a major driver of change in the current energy landscape. Combined with the gradual depletion of fossil fuels and the increasing emphasis on the reduction of greenhouse gas emissions, the large scale integration of solar and other renewable energy sources into electrical grids – especially low voltage networks – is becoming increasingly important across the entire energy industry.
These energy sources are, however, inherently unpredictable in their nature. Balancing renewable energy sources with variable demand is a key benefit of smart grid developments, which monitor and control generation and demand in real time using the latest smart metering and monitoring technology. The level of detail and accuracy of information available via a smart grid supports more targeted carbon reduction initiatives and investments, enabling the Government and other bodies to deploy resources more effectively.
As well as supporting the transition towards a low carbon world, smart grids create an opportunity to redress the balance within the energy ecosystem, enabling consumers to play a more active role than ever before. Furthermore, this transition will bring increased energy security through more granular levels of monitoring and control.
The most advanced digital technologies can monitor and link communities that are generating their own electricity – encouraging more responsible energy usage and enabling entire districts to become self-sufficient in terms of managing supply and demand.
The transformation of the energy sector represents a significant opportunity for every district to take control of its energy supply and demand, enhancing an area’s natural assets and optimising production and usage across cities and entire districts. Of particular importance are the savings that can be realised via the implementation of a smart grid – for both individuals and organisations. In order to maximise potential savings, it is vital to start with the accurate measurement and monitoring of consumption. By analysing accurate, real time data – and introducing optimisation scenarios – it is possible to make substantial gains across multiple points of consumption.
With smart monitoring being a key enabler of the smart grid, the latest intelligent energy solutions for buildings, networks and districts will not only minimise consumer bills but will enable increased consumer and community participation. The most advanced smart monitoring systems provide businesses with unsurpassed accuracy in terms of their unique usage – enabling suppliers to offer tariffs that more accurately reflect usage and reward businesses and consumers for using energy at off-peak, lower cost times – or even for generating energy at peak times.
According to Ofgem’s Smart Grid Vision, it is estimated that by 2050, smart grids will reduce the cost of the additional distribution reinforcement needed to accommodate the connection of low carbon technologies such as solar PV by between £2.5bn and £12bn – representing a 20-30% cost reduction. By using network assets more efficiently and reducing the need to invest in costly infrastructure, the cost passed through to the consumer is reduced. The Government report, citing the Ernst and Young report for SmartGrid GB, also predicts wider economic benefits, estimating that the development of smart grids could lead to approximately £13bn of Gross Value Added between now and 2050 if sufficient investment is made. In addition, the release of existing network capacity would enable faster, cheaper connections for generators and business customers, enabling network operators to make better use of existing assets.
The large scale integration of decentralised renewable energy sources into electrical grids at a localised level is a key step in bringing the distribution of renewable energy production closer to consumers.
With pioneering pilot sites across Europe, including Nice Grid, smart grids will revolutionise the way that we generate and use energy, and will ultimately change the way that we live and work – developing cleaner energy sources and showing us all how to consume more effectively.
Nice Grid is the first smart solar neighbourhood in Europe and acts as a technology showcase and pilot site for smart and environmentally sustainable grid operation and management; truly disruptive in terms of technology, it enables the islanding of an entire district. The global targets to reduce harmful emissions associated with the production of electrical power require a massive integration of renewable energy into our electricity networks. In order to support this significant integration – whilst also maintaining the quality of the networks – intelligent, decentralised systems are required to intermittently store the energy produced and to regulate distribution according to demand.
With over 45 years’ expertise in energy conversion plus experience in electrical distribution and photovoltaic technology, the Nice Grid project was a natural fit for integrated power specialist Socomec.
Colin Dean, Managing Director, Socomec UK, explains: “The aim of Nice Grid is to demonstrate that the local management of variable energy flow is achievable via efficient energy storage solutions. Having recognised this opportunity, the engineering teams at Socomec have drawn on the group’s half century of expertise in energy conversion to develop modular energy storage converters – SUNSYS – with capacities from 33kW up to megawatt scale (33kW to 100kW per unit, connected in parallel). These dual-function storage converters enable the photovoltaic energy available during the day to be stored in cyclic batteries, then converted and fed back into the network as usable AC current that is injected into the grid. We call this bi-directional conversion – the capability to manage energy supply to meet demand. Furthermore, these bi-directional storage converters can be programmed to operate according to a charging and discharging profile, set in advance by the energy utility provider.”
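As a purely illustrative aside, paralleling fixed-size converter modules up to a site requirement is a simple sizing exercise. In the sketch below, only the 100kW per-unit figure comes from the range quoted above; the 1.2MW site target and the N+1 spare module are invented for the example.

import math

# Illustrative only: how paralleling fixed-size converter modules scales to a
# site requirement. The per-unit rating is from the article; the target and
# redundancy margin are assumptions.

unit_rating_kw = 100
site_requirement_kw = 1200
redundancy_units = 1             # keep one spare module (N+1), an assumption

units_needed = math.ceil(site_requirement_kw / unit_rating_kw) + redundancy_units
print(f"{units_needed} x {unit_rating_kw}kW converters for a {site_requirement_kw}kW requirement (N+1)")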
A low voltage network is subject to constant fluctuations in solar energy production, resulting in voltage variations. Introducing control via the energy storage converters – using specific control algorithms – mitigates these fluctuations. The converters are managed at district level via power management systems in order to continuously optimise the electricity supply. By controlling and enhancing the production of solar energy – combined with a reduction in consumption via the continuous monitoring of buildings and industry – it is possible to create a predominantly renewable energy mix.
Nice Grid provides storage solutions within the low voltage electrical networks at district level – as well as ‘islanding’. Islanding means that an area can be electrically self-sufficient for up to four hours in the event of incidents or congestion within the main network.
Socomec’s SUNSYS PCS2 power conversion and storage system will feed the public low-voltage network by maintaining the two key parameters, voltage and frequency, without rotating machinery, so that the customer can be supplied with electricity that is produced and managed electronically. A real revolution, Nice Grid allows the technology at our disposal today to be used for the collective benefit of consumers.
Energy storage is an essential component of tomorrow’s smart electrical grids. For these innovative solutions to be deployed more widely, it is crucial that they are economically viable. Legislative change is essential to allow the implementation of storage solutions on the network, to define recovery mechanisms for all services provided and to permit the operation of the energy storage system by public and private operators. This evolution of our electrical networks will deliver a guaranteed energy supply to every domestic user and business - a priority for regional energy companies and Government bodies. Whilst there are significant short term benefits associated with smart grids, the most significant gains will be realised if the key enablers of smart grids – including smart monitoring technology – are invested in today.
By Sarah Whelan, Director, Find New Business
Smarter data management and development benefits everyone, especially the people you want to reach, the salespeople being asked to produce sales, the marketing people tasked with reaching the correct target organisations, and the business managers and owners looking for new clients to work with!
The best or most appropriate leads are often the hardest to find and even more difficult to reach. If they’re not hidden among hundreds of unqualified details, prospects are protected by well-trained PAs who can often prevent legitimate callers getting through. What if a new supplier could introduce a new product or service that could lower costs and bring about business enhancing solutions, but couldn’t get through?
In business, far too much time is still spent chasing lukewarm or cold prospects – i.e. people who really don’t want your product or service and probably never will (sound familiar?). The days of buying a poorly targeted database, attending events hoping the right person will turn up on your stand, or picking random names and numbers from directories on the off-chance they might need what you’re selling are long gone – hoorah!
Some leads are better than others. Just ask any sales person. But from the hundreds or thousands of potential leads given to them, how do they know when one lead will be a great prospect and when another will be a total waste of time? That’s where lead profiling and scoring comes into its own.
BIG DATA - Modern marketing databases can be huge. Depending on what tools you’re using, you probably have all sorts of information on leads, including events attended, enquiries made, pages visited, emails opened and videos watched, to name just a few. That’s more information than anyone can manually sort through. In fact, sales teams are probably wasting a lot of time chasing (or bothering) prospects that are never going to buy. Why? Because you don’t know exactly who you are targeting, what they need, or why they buy.
Lead profiling and scoring is a way of measuring and then ranking your leads based on who they are and what they do. It’s a bit like a game of snakes and ladders: there are certain moves which will accelerate someone forward quickly and others that can push them towards the back of the queue. Lead profiling and scoring creates an official definition of a high-value lead versus a low-value lead, based on historical performance and conversion rates of past leads with similar characteristics. If you are not being smart (think smart cities), you will be wasting your time and money, missing out on opportunities with a higher chance of becoming closed-won deals.
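To show what this might look like in practice, here is a minimal lead-scoring sketch. The attributes, weights and ‘hot lead’ threshold are entirely invented for illustration; in a real system they would be derived from historical conversion data, as described above.

# Minimal lead-scoring sketch. Rules, weights and thresholds are invented
# for illustration only.

SCORING_RULES = {
    "visited_pricing_page": 15,
    "attended_event": 10,
    "opened_email": 5,
    "watched_video": 5,
    "is_target_sector": 20,
    "is_decision_maker": 25,
}
UNSUBSCRIBED_PENALTY = -30
HOT_LEAD_THRESHOLD = 50

def score_lead(lead):
    """Sum the points for every attribute the lead has; penalise opt-outs."""
    score = sum(points for attr, points in SCORING_RULES.items() if lead.get(attr))
    if lead.get("unsubscribed"):
        score += UNSUBSCRIBED_PENALTY
    return score

leads = [
    {"name": "Lead A", "visited_pricing_page": True, "is_decision_maker": True,
     "is_target_sector": True},
    {"name": "Lead B", "opened_email": True, "unsubscribed": True},
]

# Sort the queue top-down by score, as a sales person would.
for lead in sorted(leads, key=score_lead, reverse=True):
    score = score_lead(lead)
    label = "hot" if score >= HOT_LEAD_THRESHOLD else "nurture"
    print(f"{lead['name']}: {score} ({label})")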
All any marketing department really wants is for sales to work hard on every single lead that is handed to them. They’ve spent a lot of time and money generating those leads and they don’t want that investment to be wasted or ignored. With a mutually agreed lead profile and score, any marketer can be confident that sales is immediately chasing the hottest leads first and putting in effort where it counts. These top-scoring leads should fit your company’s profile of an ideal customer based on historical performance. For sales people, lead scoring makes the job much more effective. Creating standard criteria for leads helps them differentiate between an incredibly valuable lead that needs to be worked hard and a so-called lead that they shouldn’t pursue at all. This means that when a sales person comes in to work and faces a queue of leads, they won’t give up in despair. Instead of feeling overwhelmed and overworked, they can simply sort the leads top down by score, making it simple for them to do their job and do it well.
Lead profiling and scoring doesn’t just make sales and marketing teams work better; it makes the two teams work better together. A clearly outlined lead scoring system stops some of the most common disputes between sales and marketing about the quality of leads.
Lead profiling and scoring is the key to sales and marketing alignment, because it helps define the best leads using data and objective analysis. It should be a major part of your service-level agreement, specifically outlining how leads are worked, what lead score is acceptable, and why. Most importantly, our valuable prospects get the headspace needed to choose the right product or service, whether that’s a brand new technology or simply a more reliable supplier – one that might actually bring them hero status in their organisation along with great business benefits!
Dear DCA,
I wanted to tell you about our success working with one of the DCA’s strategic lead generation partners ‘Find New Business’ in assisting us with our growth plans by producing qualified appointments for our new business team to sell to.
Working with Find New Business has been an eye opener! The FNB team have helped us to really focus on our ideal client – in other words, who would we most like to be doing business with that we are not at the moment. From day one they started to generate qualified appointments, led by their industry sector specialist telemarketing team. This team is extremely well trained, not only by their own internal managers but also by the DCA, with training on the industry sector. This means they understand the data centre sector, making them sector expert telemarketers of a kind not available anywhere else in the UK.
In particular, the success we have had has included appointments with brand new clients; we have renewed relationships with some of our lapsed clients that they targeted for us; and we already have a healthy sales pipeline for our racking and storage solutions and our latest innovation, the micro data centre.
Well done on the strategic thinking DCA on this one and keep them coming as this has made a significant difference to our growth already.
Jeremy Hartley, Dataracks
Once the fog lifted as the third DCS/DCA Golf Day got under way, there were no weather excuses for any poor golf played – unless one allows for mild heat exhaustion, as the temperature and humidity rose. It would be nice to say that all the golf played at Hellidon Lakes was equally ‘hot’, but some of the scores suggested that some of the golfers were lost in a playing ‘fog’ all of their own making.
Forty eight data centre professionals, but definitely amateur golfers, took part in the 18 hole stableford best individual and best team (3 out of 4 scores to count on each hole) competitions, with mixed results.
At the lower end of the scale, Richard Stacey, John Booth and Dennis Coe failed to make it into double figures points-wise (no doubt plenty of double figure scores registered!); with their respective teams propping up the leaderboard – take a bow Futuretech, Riello and a combined DCA/Carbon IT/Carel fourball. At the business end of the competition, Munters provided the winner and runner up of the individual scoring, with Peter Heppenstal (41 points) beating his colleague, Jason Imi, by a single point. And it will come as no surprise to discover that the winning team was…Munters, with a score of 127 points, an agonising one point ahead of Universal (whose team members are no doubt kicking themselves over the various four footers missed during their round!), with Dunwoody a distant third on 112 points.
Happy to report, the success off the course – eating, drinking and socialising – seemed to be shared around all 58 golfers, as pre-dinner drinks were followed by an excellent meal, and plenty of post-dinner drinks, where, once again, the fog appeared to descend on those left in the bar as Thursday turned into Friday.
A big thank you to Angel Business Communications’ Scott Adams for organising the day, and to all those who made it such an enjoyable and memorable occasion.
DCS talks to Alan Dean, CEO, Coreix Ltd, discussing the maturing cloud market and what lies behind the development of the company’s second generation virtual private cloud platform.
Q: What have been the latest developments in Coreix since we last spoke?
A: Since winning Cloud Company of the Year at the Storage, Virtualisation and Cloud Awards last year, the Coreix team has been working hard to put in place our second generation virtual private cloud platform, which looks to bridge the gap between public and private cloud platforms, enabling clients to have best in class high availability solutions with backup as standard.
Q: How do you see the cloud market at the moment?
A: We believe the cloud market is maturing and that there is a polarisation between public cloud, which offers low cost, disposable computing, and private cloud, which offers traditional high availability enterprise computing.
Q: What do you see as the pitfalls of using traditional public cloud infrastructure when migrating legacy infrastructure to the cloud?
A: Generally, public cloud solutions are low cost, with local storage hypervisors provided on a best effort basis. While these allow for the flexible creation and management of cloud instances, they require you to fundamentally change the way you work.
For example, high availability is achieved by creating multiple instances over multiple zones with a client-managed load balancing or clustering solution. Backups and support are generally optional, with instances self-managed by the client, as are the load balancing or clustering solutions. The failure of a single hypervisor in a public cloud platform is considered a certainty over time, and clients are expected to recover from hypervisor failure with backups or by failing over to another node, either manually or by using code. Companies migrating legacy infrastructure to a public cloud struggle to reconcile a traditional software architecture – based on scaling up compute resources within a limited number of instances, for example increasing CPU performance, RAM and hard drive spindles – with the requirement to rely upon a larger number of smaller instances spread over a number of zones to achieve high availability.
This often means a choice for the business: either adjust their software processes – for example, moving from two large SQL databases to a sharded database infrastructure spanning six or more instances, ideally in multiple zones – or try to use traditional methods, but without the traditional high availability disk arrays sitting behind the instances.
Many companies who find this solution does not fully meet their business needs look to a private cloud as an alternative. This gives the high availability technologies required, but at a higher entry cost, and also sacrifices much of the flexibility of a public cloud system. We believe virtual private clouds are the answer to this problem, as they can offer the best of public cloud platforms along with the best of private cloud platforms.
Q: What have you been looking to do to shake this up?
A: Over the last year, we have been looking to pioneer a third option, which combines the low cost and utility of public cloud with the high availability and performance of private cloud. Our virtual private cloud platform offers the ideal solution for small to medium sized businesses looking for reliable, scalable hosting solutions. As standard, we offer the following:
Our new virtual private cloud has been redesigned from the ground up and will offer full flash storage, guaranteed minimum RAM and hard drive IOPS as standard. The platform will also offer hypervisor failover, backup and load balancing as standard.
Q: How do you see Virtual Private Cloud addressing customer needs?
A: We feel that our Virtual Private Cloud gives an optimal balance between flexibility, performance and cost for SMEs. It can also be used effectively in combination with a private cloud infrastructure to add additional scalability on a non-disposable basis.
One of the core issues we see with customers looking to migrate legacy infrastructure into a cloud environment is a lack of knowledge about how best to prepare the infrastructure and understand what level of resources will be required from the cloud provider.
With regards to auditing and managing the migration process itself, we collaborate with our strategic partner Digital Craftsmen who enable customers to migrate and operate in the cloud with ease.
Q: In looking to shake-up the market, what support can you offer customers that they may not get elsewhere?
A: At heart, Coreix is a managed services provider; over the past ten years we have excelled at offering managed dedicated servers for mission critical projects. We’ve continued this level of service with our private cloud solutions and are now looking to bring it to the public cloud. We know that for most businesses every cloud instance matters, so we make sure they stay up at an infrastructure level so that you don’t have to code around disposable infrastructure.
Technical expertise and support you can rely on 24x7x365
Coreix has been providing managed services for over 10 years.
This experience allows us to offer best in class support for our cloud services as standard.
Backup and recovery as standard
We automatically backup client data on a daily basis as standard, and we can offer advanced backup retention and services. If the worst does happen, we’ll manage the restoration process.
High Availability
At Coreix, we offer automatic failover in case of host failure to ensure that your business does not suffer a moment of downtime should the worst happen.
Dedicated Account Manager
As an extension to your business, your dedicated account manager will listen to your needs and develop a solution which matches your business objectives.
Cost effective
Our Virtual Private Cloud enables you as a customer to be safe in the knowledge that there are no single points of failure and that there is a low set-up cost, helping you to focus on what matters most.
Q: How does Virtual Private Cloud differ from traditional public cloud?
A: Virtual Private Cloud differs from traditional public clouds in the following ways:
High Availability as standard including:
Q: What is the pricing based on?
A: Prices are based on bundles of resources allowing flexibility of RAM to CPU to hard drive ratio on a per instance basis.
This includes:
Q: How does virtual private cloud bridge the gap between public cloud and private cloud?
A: We feel that Virtual Private Cloud gives an optimal balance between flexibility, performance and cost for SMEs. It can also be used effectively in combination with a private cloud infrastructure to add additional scalability on a non-disposable basis.
Q: What are the main advantages of using your private and virtual private cloud platforms over others?
A: We believe the advantage of using Virtual Private Cloud is that many customers will be able to lift and shift their existing infrastructure without having to make changes to the way they do business. These platforms combined with our migration services offer an extremely cost effective way of migrating to the cloud.
Q: Could you tell us about how you can help organisations with their scalability requirements?
A: Virtual Private Clouds can help scalability for all types of clients. For SMEs, it allows clients to start their project small and grow flexibly; in time they can move to a hybrid model with some dedicated resources, or to a full private cloud, in a seamless manner. For enterprises, they can connect the Virtual Private Cloud in to their existing private cloud and use it to expand their solution on either a temporary or permanent basis while retaining an enterprise feature set.
Q: What will you be looking to put in place to ensure that cloud migrations run as smoothly as possible?
A: A project team
A collaborative effort between the client, Coreix and Digital Craftsmen, the project team will define the scope, assign responsibilities, and establish milestones and deliverables.
The key to the success of any migration is open and transparent planning.
Digital Craftsmen, who will investigate, develop, test and implement the cloud migration, have years of experience in completing this level and type of project. A comprehensive investigation – understanding the scope of the work involved, critical systems, integration points with suppliers and existing IT environments, and any legal and regulatory compliance – together with a comprehensive assessment of the risks, is a critical component of a successful cloud migration.
Security controls
These ensure that client data is stored, backed up and transmitted as securely as possible on a 24x7 basis, that it is protected at all times and that only approved and authorised individuals have access to it.
Q: We hear you will be partnering with Digital Craftsmen around this. Could you tell us a little bit about Digital Craftsmen?
A: Digital Craftsmen help organisations maximise performance and profitability by providing bespoke hosting and managed services. Their clients include insurers and financial institutions, digital agencies, safety auditors, e-learning firms and public sector organisations.
Digital Craftsmen help organisations in the FinTech sector amongst others manage the risks associated with moving to and operating in the cloud ensuring that they do not experience a moment of downtime. They look to provide insight and expertise through the following process:
Q: Could you tell us a little bit around this partnership?
A: Finding a flexible managed services supplier with whom Coreix can offer individual service level agreements was paramount in offering a solution to fit its customers’ needs.
Supporting customers with a cloud migration strategy, or with working in the cloud, requires a specialist service that can and should be tailored to a customer’s individual needs, rather than a one size fits all approach.
Over the past five years, Coreix has been working with Digital Craftsmen, who work with us to create bespoke solutions that meet both a client’s unique requirements as a business and their budget. As cloud requirements change – which they almost certainly will in a highly dynamic market – that solution can be scaled up or consolidated downwards for greater cost control.
Q: Your colleague, Paul Davies, mentioned the importance of transparency in your last interview?
A: Yes, as a company we offer transparent solutions which match our customers’ business requirements. This enables customers to be safe in the knowledge that they know the number of IOPS they will need for their hard drives, the CPU mark, the level of support and the pricing structures. We believe that to achieve this we need to offer guaranteed resources in a logical manner. Processing power needs to be measured using industry standards (i.e. CPU Mark), not intangibles such as a number of cores. Storage needs guaranteed performance (IOPS), not just a capacity. With transparency you can review your existing usage on legacy equipment using standardised tools and make an informed decision on the resources you need on your cloud platform.
Q: To sum up, what do you see as the main pitfalls when migrating legacy infrastructure to traditional public cloud?
A: Lack of scalability in legacy infrastructure
A significant proportion of legacy infrastructure is designed to scale vertically (more processing power or more storage space/performance) not horizontally (more standardised instances in a cluster).
Transposing of old or legacy designs or frameworks
Legacy infrastructure may not be supported by vendors and frameworks may be out of date.
Access controls
Many proprietary systems are designed and licensed for use on a single high availability machine, or perhaps, two machines with replication. These systems are not suited to the disposable computing public cloud model as they cannot scale to multiple instances or zones.
Lack of collaboration
In a self service environment your technical team is often left to manage all aspects of performance and reliability at a code level. They need to be an expert in cloud technology and in how to work within a best effort computing environment, while also being an expert in your business processes and systems. With Virtual Private Cloud we collaborate with you to guide you to the right infrastructure for your business.
Q: Are you still involved with the G-Cloud programme and how are you finding it?
A: We continue to be involved with G-Cloud and are on the eighth iteration of the programme. This enables us to offer Infrastructure as a Service solutions to clients in the public sector and government departments.
Q: We hear you will be exhibiting at a number of upcoming events, could you tell us about them?
A: We will be exhibiting at IP EXPO, which will be held at London’s ExCeL on the 5th & 6th October. Simon Wilcox, MD of Digital Craftsmen, and I will be giving a talk at 11.40am in the DCIM theatre. We are looking to inform and educate IT managers and directors on how to bridge the gap between public and private cloud infrastructure, and to give insights into how to migrate legacy infrastructure to the cloud with ease.
A paradigm shift is underway in the IT industry that is changing the way organizations of all sizes approach the enterprise data center. Traditional IT infrastructures are not equipped to handle challenges associated with the exponential growth of data and ever-increasing user demands for availability, security and simplicity. To keep pace, IT decision makers require solutions that reduce infrastructure costs while at the same time increase agility and operational efficiency, which has led to growth in shared compute and storage platforms instead of disparate systems. But IT requires smarter infrastructure solutions that virtualize and collapse all key data center components such as compute, storage, networking and associated software into a simple, scalable, single server-based model. By Ron Nash, CEO, Pivot3.
That’s where hyperconverged infrastructure (HCI) comes in. HCI combines compute, storage, and storage networking on standard x86 hardware, effectively reducing or eliminating the need for traditional discrete IT components. Multiple vendors in the space claim their HCI solutions reduce cost and complexity in the data center, with features such as data protection, the ability to pool system resources, provision and scale on the fly, and provide out-of-the-box functionality and single-pane-of-glass management for the entire environment. The reality is, however, that not all actually deliver on the promise of hyperconverged.
The hyperconverged market is heating up, gaining significant traction and adoption in both midmarket companies and enterprise. Infrastructure and operations (I&O) professionals continue to look to HCI as the means to address top challenges such as simpler operations, tighter virtual machine integration and more robust data protection. According to a recent study by IDC, more than 32 percent of enterprises are planning major storage and server replacement projects this year, and out of these, 86 percent want to increase their investment in HCI. Gartner predicts that HCI will represent more than 35 percent of total integrated system market revenue in just three years, building towards 67 percent of a $35 billion market in 2021. As IT attempts to navigate this emerging marketplace and evaluate available solutions, there are some key points to consider when looking to implement HCI.
One of the first hurdles customers face is whether they have to rip and replace their entire system, which brings with it new skills and management platforms, vendor lock-in and a potentially great financial risk associated with sunk costs in existing IT resources. The flexibility and scalability promised by HCI is one of the main reasons early adopters have chosen to deploy hyperconvergence, but not all vendors are able to provide that kind of solution, where “modular building blocks of IT” are added incrementally into your environment in a “pay as you grow” model. The reality is most companies will need to grow into a hyperconverged environment over time, and as such, one of their first assessments should be whether the solution integrates with existing infrastructure resources.
Hyperconverged solutions should be robust enough to work outside a self-contained system so customers are not locked into their infrastructure, new or legacy. HCI solutions that are only optimized to run standalone workloads on a single infrastructure actually prevent customers from realizing the full value of their investment in new technology. Adopting HCI should not only work well within one’s own architecture, it should be pluggable into existing systems that stretch their benefits outside their IT ecosystem so organizations can flexibly build out infrastructure based on evolving business priorities and customer needs.
The challenge facing many HCI deployments today is that certain applications need dedicated storage performance, or additional capacity but not compute, or a different combination of data protection, performance and scale. Most conventional HCI solutions don’t provide a way to extend their systems to leverage shared storage, and those that do add another layer of complexity and interoperability issues due to a separate management UI for that system with different features.
These systems cannot deliver an infrastructure that is optimized for particular environments to provide the right resources for the right workloads at the right time. At the bottom line, it’s about control – which some in the industry are calling ‘composability’, or the ability to seamlessly orchestrate performance assurance capabilities, management, deployment settings and policies for data protection. To avoid a rip-and-replace operation and sensibly grow into a hyperconverged environment, you need to choose a vendor that offers a platform with dynamic capabilities.
Reduced complexity associated with siloed IT operations, as well as increased agility, simplicity and reduced requirements for power, cooling, and rack space are common benefits touted across the integrated systems market, but other key benefits like workload utilization, resiliency and performance can be gating factors for customers beyond single use cases like virtual desktop infrastructure or disaster recovery deployments. The enterprise data centers that power global business services and applications and bring our computing devices to life require solutions that simplify operations across the board, from backup and recovery to restoring virtual machines, but it’s also about going above and beyond running standalone workloads on a hyperconverged infrastructure.
The fact is that limitations still exist with certain HCI platforms. A majority of HCI deployments only target a focused set of workloads and can’t prioritize performance and data protection for business-critical workloads. In this sense, hyperconvergence should be dynamic; the true value is found in its capabilities to support the full spectrum of data center applications that drive business and are suitably designed for an organization’s unique IT environment. So what makes a hyperconverged solution dynamic? Here are a few questions to consider in your HCI evaluation:
The right hyperconverged platform allows customers to forgo the need to constantly reconfigure IT for certain purposes and replaces it with an on-demand, dynamic system with built-in automation and orchestration capabilities. With storage and compute as independent building blocks, a hyperconverged solution can support multiple external compute platforms and storage arrays, so businesses can embrace a move towards a more software-defined strategy where partnering with leading providers of both software and hardware helps IT build best-of-breed solutions that provide maximum interoperability and functionality. Such a dynamic system quickens the pace of IT evolution, going beyond disruption in key markets to creating new opportunities for innovation.
As hyperconvergence continues to disrupt traditional approaches to data center architecture, solutions with features and capabilities that support a more dynamic approach to overcoming the challenges associated with management silos and islands of storage and hardware will help organizations move towards more digitally-oriented business models.
A truly dynamic infrastructure must keep pace with the growing amount of data the Internet of Things will generate and that IT is expected to manage. It must also scale sensibly over time to support an effective strategy for replacing aging, overly complex data center and legacy storage technologies. With such a solution in place, the migration towards the software-defined data center becomes less of a risk and more of a viable avenue for bolstering effective business processes.
The predictions as to the size and potential of the Internet of Things phenomenon are staggering. Analyst firm Markets and Markets predicts it will be worth $661.74 billion by 2021, while Gartner suggests over 20 billion devices will be active in the IoT ecosystem within four years. Yet for all of the potential, the impact has not yet reached the broad enterprise market, except for a few vertical sectors such as energy, utilities and logistics. For many organisations, there is a long way to go to upgrade systems with the connectivity required for IoT, along with software systems able to underpin innovative new use cases that deliver the cost savings and efficiency the concept promises. By Derek Watkins, VP of Sales EMEA & India, Opengear.
The IoT revolution is predicated on a number of key enablement criteria: fundamentally, data connectivity for fixed and mobile devices; device intelligence able to provide sensor readings and accept remote commands; and tools able to turn the masses of data from these IoT devices into new insights that power beneficial new services. Across many industries, IoT pioneers are starting this journey with smart consumer appliances able to alert manufacturers before a breakdown so as to enable preventative maintenance, smart traffic lights and vehicles that communicate with road management systems to ease congestion, and many use cases around logistics management.
One segment where IoT is having an impact is in the delivery of information technology. Although the name is new, many of the fundamental IoT concepts of smart devices, connectivity and intelligent analytics are already the domain of the IT industry. Large enterprises, telecommunications providers and hosting providers have for many years relied on automation along with remote remediation to deploy and troubleshoot vast centralised data centres as well as remote branch offices and cabinets. With the growing dominance of IT across all forms of business activity, along with constant pressure to do more with less, IT departments have for many years been IoT pioneers without even giving it a name. As more non-IT devices start to gain the connectivity and intelligence demanded by IoT, the IT department is fast becoming an early adopter of the technology in more innovative ways.
The adoption of IoT within the IT infrastructure industry is a natural progression. In the 1990s, as VoIP rose to prominence, many IT departments subsumed the role of separate telecoms teams. In recent years, the use of IP within CCTV has brought this area into the delivery responsibility of IT. As the data centre has grown in importance to business performance and continuity, IT departments are increasingly required to remotely control devices in facilities to which they have limited physical access. The remit of this new role is growing and sits well with the technical ability offered by IT departments, and also with existing tools such as building and property management systems that are already maintained by IT for facilities management teams.
At the heart of many of these existing management duties is the use of out-of-band (OOB) management, required to guarantee administrative access to critical elements such as network switches, routers, power distribution units and a growing number of security appliances such as firewall and encryption devices. Although most of these devices have an in-line IP connection for basic management, many of the critical admin functions, such as firmware upgrades, are only available via serial console ports. In addition, if IP connectivity is lost or a device is “bricked” – totally non-responsive to commands – a hard reboot via control over the power supply can often only be achieved through smartOOB via a separate path such as a backup ADSL, dial-up or 3G/4G link.
Yet mirroring how new categories of devices have gained IP connectivity, a new class of smartOOB solutions has extended the number and types of devices to which they can connect. The list is extensive and includes CCTV camera networks, vending machines, automated teller machines, gambling terminals and HVAC, amongst a growing list. In certain areas, especially with high CAPEX items such as industrial plant, smartOOB is a much more cost effective option for adding connectivity to an existing device than buying a new model. This is a great boon for IoT projects where devices need connectivity across a flexible path such as 3G and 4G mobile networks, as well as mobile applications within vehicles.
Some smartOOB solutions have also undergone a renaissance in terms of openness, on two fronts. The first is the opening up of the devices through APIs that allow organisations to create flexible scripts that can automate many troubleshooting functions based on responses from the device. The other key element is the use of common standards such as SNMP, which allow tighter integration with widely used network management platforms such as SolarWinds and Nagios that are increasingly offering remote management capabilities for IoT devices.
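As a sketch of the kind of script such APIs make possible – and not the actual Opengear API – the example below pings a managed device in-band and, if it does not respond, asks a hypothetical console-server REST endpoint to power-cycle the relevant PDU outlet. The URL, endpoint path and token are placeholders.

import subprocess
import requests  # third-party HTTP client: pip install requests

# Hypothetical example of OOB troubleshooting automation. The console-server
# endpoint, outlet path and token below are invented, not a real vendor API.

CONSOLE_SERVER = "https://oob-gateway.example.net"
API_TOKEN = "replace-with-real-token"

def device_is_up(ip, attempts=3):
    """Return True if the device answers at least one ICMP ping (Linux ping flags)."""
    for _ in range(attempts):
        result = subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                                capture_output=True)
        if result.returncode == 0:
            return True
    return False

def power_cycle(outlet_id):
    """Ask the (hypothetical) console-server API to cycle a PDU outlet."""
    resp = requests.post(
        f"{CONSOLE_SERVER}/api/outlets/{outlet_id}/cycle",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    if not device_is_up("192.0.2.10"):
        power_cycle(outlet_id=4)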
This emergence of IT departments embracing smartOOB as part of what are in effect IoT applications is evident across the data centre and service provider community. For example, Sohonet, a highly regarded network services provider to over 400 media and entertainment companies including BBC Worldwide, HBO and NBC Universal, uses smartOOB to manage remote devices at 60 PoPs across three continents. As Ben Roeder, CTO at Sohonet, explains: “Hardware will fail eventually, and having console access to manage remote configuration changes, power cycles or even for drop shipping new hardware that we can set up over remote console is a simple but critical requirement.” For Sohonet, its smartOOB from Opengear has been integrated into its own in-house network management system to help with troubleshooting and automation.
For TMV Control Systems, a manufacturer of critical systems for trains, smartOOB is fitted to its Traction and Engine Control Unit, which gives its railway customers remote monitoring of engine systems via 3G wireless connectivity, GPS capabilities and a VPN gateway. “Data that used to take hours to collect is now automatically retrieved every few minutes,” says Isaac Sutherland, a software developer for TMV Control Systems. “Health supervisors, equipment owners, and maintenance staff now have online access to data that is just minutes old. Problems can be identified and analysed as they occur so that remote support staff can provide timely solutions.”
In the example from TMV, it is the ability to use the cellular network which is vital. As cellular networks continue to progress through the generations, with lower bandwidth costs and better performance, this transition to using these networks for IT administrative purposes will grow. Yet with all the hype around the potential of IoT, those on the ground charged with making it happen are still constrained by budgets. This is where smartOOB has strength through its relatively low cost.
With entry level units including 3G/4G connectivity starting at a sub $500 price point, IT departments in particular are looking at how they can ‘dual use’ the technology across both existing network and system elements and new IoT use cases in areas like CCTV, access control and HVAC. IoT may well be a half a trillion dollar industry in a few years but for now, its use by network and IT admins to better manage critical systems is making a real impact.
Online cloud-based remote monitoring platforms are an essential tool in the efficient management of mission critical data centres, but care must be taken at the deployment stage to guard against security vulnerabilities. By Torben Nielsen, Senior Research Engineer, IT Business, Schneider Electric.
Centralised monitoring stations, frequently operated by specialist third parties, are a familiar, efficient and effective means of managing the operations of mission-critical IT facilities. In recent years remote, or online monitoring has evolved rapidly from a method in which centralised management teams were provided by email with intermittent status reports to more sophisticated real-time systems providing constant monitoring through the use of cloud services, data analytics and mobile apps.
Thanks to such software, many issues concerning capacity or failure of a single piece of infrastructure equipment can be anticipated and rectified quickly, downtime is reduced, mean time to recovery is shortened and energy efficiency for all systems including power and cooling is improved. However, a disadvantage is an increased vulnerability to cyber attack, which is a growing problem for all connected businesses in the world today.
Juniper Research estimates that the cost of data breaches will reach $2.1 trillion globally by 2019. Naturally, there is a large and growing arsenal of counter measures available to guard against unwarranted intrusion into one’s vital information systems. But in the case of the digital remote monitoring platforms on which many data centres rely for their effective operation, special attention needs to be paid at the development stage to ensure that they are as robust as possible in the event of any attack.
DevOps, or development operations, is one of the latest modern approaches used to prevent cyber attacks from causing theft, loss of data and system downtime. In this instance dedicated teams are deployed to bridge the gap where development and operations entities were traditionally split. This ensures a stronger level of communication and collaboration in the effort to protect an organisation, user or platform from cyber attack.
Although the primary responsibility for the development of a monitoring platform lies with the software vendor, data centre operators must nevertheless be able to evaluate the effectiveness of such systems not just on the basis of features and functions but also on their effectiveness in terms of security.
Knowing how secure a monitoring platform is requires an understanding of how it is developed, deployed and operated. A recognised standard, ISO 27034, provides guidance on specifying, designing, selecting and implementing information security controls through an organisation’s system development life cycle. Data centre operators evaluating remote monitoring platforms should ensure that they are developed using a Secure Development Lifecycle (SDL) methodology that is compliant with ISO 27034.
A typical SDL methodology is based around eight key practices, namely: training, establishment of requirements, design, development, verification, release, deployment and response to incidents.
A vendor should have in place a continuous training program for its employees covering all aspects of designing, developing, testing and deploying secure systems. It should also have in place procedures to ensure that its employees do not themselves become the vector for a cyber attack. Quite apart from identifying potentially malicious employees via proper recruitment and vetting procedures, a vendor should be able to satisfy a prospective customer that its employees are continuously trained on aspects of cyber security based on their role, whether they are a developer, operator or field-service technician. There should also be a hierarchy of access privileges, with the vendor’s employees only given access to the IT and network functions and resources needed for them to perform their job.
Cyber security features and customer security requirements should be clearly stated and documented at an early stage of product development.
At the design stage, these features and requirements should be encapsulated in documents describing an overall security architecture. Threat modelling, a structured approach to identifying, quantifying and addressing security risks, should then be enacted to ensure that security is built into the application from the very start. Threat models look at the system from the perspective of an attacker, rather than a defender, thereby enabling developers to counter the threats once they have been revealed by the modelling process.
When adding remote monitoring to a data centre, it is important to consider the security aspects of the connection method. Some (older) systems require each monitored device to connect directly to the internet. This adds a significant security risk, since each monitored device is exposed to cyber attacks. A much better and more secure approach is to use a dedicated gateway to connect. In this instance a continuous stream of informational data is gathered and sent through a secure gateway, which then transmits it outside the network or to the cloud. This data is monitored and analysed by people and data analytics engines. In addition to the data stream, there is a feedback loop which runs between the monitoring team and systems directly back to the data centre operator. The user has access to monitoring dashboards via the gateway or, when outside the network, via the platform’s cloud using a mobile app or computer, allowing real-time decisions to be made.
From the point of view of a remote monitoring platform, certain specific elements should be of special concern from a security perspective. The gateway that collects data from within a customer’s data centre should allow outbound connections only; it has no need to allow inbound connections and so they should be prohibited, thereby removing the gateway as a conduit for attack. This gateway should also be the only one to initiate connections to the outside. No one outside of the gateway should be able to connect to it first.
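A minimal sketch of that outbound-only pattern is shown below, assuming a hypothetical cloud telemetry endpoint: the gateway never listens for inbound connections, it simply initiates periodic HTTPS POSTs and reads any instructions returned in the response. The endpoint, interval and payload shape are invented for illustration.

import json
import time
import urllib.request

# Outbound-only gateway sketch. The cloud endpoint and payload below are
# hypothetical; a real platform would define its own API and authentication.

CLOUD_ENDPOINT = "https://monitoring.example.com/api/telemetry"
PUSH_INTERVAL_S = 60

def collect_readings():
    # In a real gateway this would poll devices on the private network.
    return {"ups_load_pct": 42.0, "crac_supply_c": 18.5, "timestamp": time.time()}

def push(readings):
    """POST readings over HTTPS; the gateway initiates every connection."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(readings).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:  # outbound only
        return json.load(resp)  # cloud may return pending instructions

if __name__ == "__main__":
    while True:
        push(collect_readings())
        time.sleep(PUSH_INTERVAL_S)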
The platform should only communicate over secure protocols, such as HTTPS, to protect the confidentiality of data in transit. As an additional safeguard, all authentication procedures should be multifactor in nature; a simple password should not be sufficient. Sensitive data should be encrypted both in transit and at rest and the platform’s source code should be compliant with security standards such as NIST SP800-53 Rev 4 and DISA STIG 3.9. All code changes should be peer-reviewed before being accepted. The development stage following this detailed design process implements the design into code in accordance with best practices and coding standards. Once completed, verification can then take place by testing the coded product against the threats anticipated in the threat models to ensure that the platform is resilient.
Recommended testing practices during the verification stage include static code analysis, penetration testing and continuous security scans. Static code analysis is a means to identify weakness in the source code itself prior to deployment. All code should be scanned prior to each build to eliminate coding weaknesses.
Penetration testing simulates the typical methods of attack that malicious intruders might adopt. Testing can be done from the perspective of an external, or Black Box, attacker or from that of an insider or White Box attacker. Test teams should be separate and independent from the development team and specially trained in penetration testing. Continuous security scans should be performed after a product has been deployed to test for new vulnerabilities. This should be done using scanning tools that look for publicly known security vulnerabilities.
The release stage requires that security documentation be developed describing how to install, maintain, manage and decommission the solutions that have been developed. Deployment requires the project development team to be available to help, train and advise service technicians on how best to install and configure security features.
Once deployed, an emergency response team should be put in place by the vendor to manage vulnerabilities and support customers in the event of an incident. Ideally, this should be a DevOps team comprised, at least in part, of people who developed the system.
The DevOps team should have three basic functions: to detect security weaknesses by continuously scanning for vulnerabilities or anomalies; to react to threats on a 24x7 basis; and to provide remedies for any frailties discovered. This includes patches for software vulnerabilities as well as new developments in the face of new threats. DevOps teams have to focus on two key metrics: Mean Time to Detect and Mean Time to Recover. These efforts should be focused on network security, ensuring that remote attacks arising over the network are neutralised, and on physical security, which concentrates on preventing unauthorised access to computers within the premises itself. To this end all developers and operators should be required to secure laptops with disk encryption, use a local firewall, have strong passwords and enable screen lockouts after a short timeout period.
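For illustration only, the two metrics can be computed from incident timestamps along the following lines; the incident records here are invented, and in practice they would come from the team’s incident-management system.

from datetime import datetime

# Minimal illustration of Mean Time to Detect and Mean Time to Recover.
# Incident timestamps are invented for the example.

incidents = [
    {"occurred": "2016-08-01 02:10", "detected": "2016-08-01 02:25",
     "recovered": "2016-08-01 03:05"},
    {"occurred": "2016-08-14 11:00", "detected": "2016-08-14 11:04",
     "recovered": "2016-08-14 11:32"},
]

def minutes_between(start, end):
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

mttd = sum(minutes_between(i["occurred"], i["detected"]) for i in incidents) / len(incidents)
mttr = sum(minutes_between(i["detected"], i["recovered"]) for i in incidents) / len(incidents)

print(f"Mean Time to Detect:  {mttd:.1f} min")
print(f"Mean Time to Recover: {mttr:.1f} min")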
Data centre operators making the choice between remote monitoring solutions should work in partnership with developers to ensure that the benefits of continuous systems monitoring are not compromised by security flaws. The consequences of choosing a structure without sufficient security can be quite severe, therefore development and deployment of specific safety processes at the design stage have become paramount to ensuring business critical infrastructure remains safe from cyber attack.
Chris Wellfair, Projects Director at data centre designer and builder Secure I.T. Environments, talks about the benefits of multi-storey data centres, and offers up some points for those looking at a new data centre to consider.
There comes a time in every data centre’s life where it takes a look in the mirror and realises that it’s getting on a bit, and maybe even struggling with its waistline – something we all can (or will) relate to. As a business grows, and its use of digital technology increases, more servers are needed in the data centre, but there is a limit. Often the continued miniaturisation of technology counteracts this, but what do you do when you reach the physical limitations?
Simply extending the data centre sounds obvious, but quite often, if the data centre is a room in a larger building that will mean displacing other staff, or perhaps even moving to a different part of the building. An incremental change may offer more space for five years, but is that making bigger problems further down the line?
Modular data centres are now extremely popular. They are secure, energy efficient, and offer a way to quickly build a new data centre environment whether as a ‘room in a room’ configuration or as a free-standing structure. This is great if you have the space to accommodate the footprint required, not just for the server rooms themselves, but also the plant equipment, storage, access and perimeter security. Once you hit a point where a data centre needs to be a free-standing structure, the space requirements are magnified. You may for example need to consider a new access road and paths, or plant acoustic screening if you are in a residential or conservation area. For clients that are based in city centres, or where either land is in short supply or priced at a premium, this can seriously restrict the development options of a data centre. That is not to say that one can’t be built, but will it have the spare capacity to serve the business in five years?
This is where the multi-storey data centre can really be a saviour, addressing all of these issues by dramatically reducing the footprint requirements to those of a single freestanding room. Built correctly, it is even possible to place plant equipment such as air conditioning condensers and generators on the roof, surrounded by a parapet.
One misconception we often hear is that multi-storey data centres are less energy efficient than other data centres because rooms are stacked on top of each other and heat rises. However, if you think about this for a moment, it doesn’t make sense. If the correct approach is taken to cooling, heat is extracted directly from the cabinets and the room, and taken beyond the building structure for processing.
It doesn’t get the chance to rise in the way many think it will. That heat can also be used to good effect if there are offices in the structure that need heating, such as a network operations centre.
There are a number of considerations for those looking at exploring a multi-storey data centre as a solution to their space woes. The first, and probably most important, is cost. As a general rule of thumb, it costs approximately 25 per cent more to build a vertical data centre than it would to set those rooms side-by-side. Of course, you may make other savings, simply by the fact that you need less land, and can build in enough spare capacity that you don’t need to consider moving the data centre for 15 years. A vertical data centre has a much longer usable life.
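As a rough worked example of that rule of thumb (all figures below are invented for illustration), the sketch compares a side-by-side build with a stacked build once the cost of land is factored in.

```python
# Hypothetical worked example of the 25 per cent rule of thumb.
rooms = 4
build_cost_per_room = 500_000      # construction cost per room, side-by-side (assumed)
land_cost_per_footprint = 200_000  # land cost per ground-level room footprint (assumed)

side_by_side = rooms * build_cost_per_room + rooms * land_cost_per_footprint
multi_storey = rooms * build_cost_per_room * 1.25 + 1 * land_cost_per_footprint

print(f"Side-by-side build: £{side_by_side:,.0f}")
print(f"Multi-storey build: £{multi_storey:,.0f}")
```

On these assumed figures the stacked build comes out slightly cheaper overall, and the gap widens as land prices rise; where land is cheap, the 25 per cent construction premium dominates instead.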
The main additions to the cost are the extra construction requirements related to groundworks, and the reinforced steel structure needed to ensure the building not only meets building regulations, but is equipped to deal with other risks such as fire, flood and even seismic events.
Your planning department will be able to help you assess any issues related to constructing your data centre. If there are previous applications that have set a precedent on the specifications of such constructions, they will be able to share these with you. Also, there will be special considerations if you are in a conservation or residential area; not just external appearance, but also sound levels at different heights and distances.
Your designer should be thinking of all these things and including them in any planning application. The more you can demonstrate you are thinking about the impact on the surrounding environment and residents/businesses the better.
Any issue can be overcome by working closely with planning; they’re not the enemy they are often portrayed as. The same is true of building control who can be extremely helpful with finalising designs and highlighting potential headaches.
As with any multi-storey building, the standards required for foundations are of a substantially higher specification than for a single-storey data centre. Ground and site surveys will highlight any concerns, but you may be required to reroute sewers or other services, or have to consider a specific approach to the foundations in order to accommodate the composition of the ground.
A full risk assessment should also be conducted with respect to flooding, fire, and the location and type of other buildings. How do the crime rates in the area impact your security specification? How far away from a railway track or busy road should the facility be?
We’ve covered some of the main considerations to getting a project started but there are some things that clients don’t think about as costs in building a multi-storey data centre. One of these is how access to upper levels impacts the footprint of the build.
For example, do you need a goods lift and what size/spec should it be? How will staircases and walkways impact the data centre, and what about emergency exits on all levels, including the roof?
If a parapet is built to contain plant equipment, how will this be secured from theft, and will adequate fire sensors and bells be in place so that, in the event of a fire, workers on the roof are alerted to evacuate? Rare maybe, but very possible.
A multi-storey data centre might be the only solution for some companies, if they do not want, or cannot afford, to move their whole IT operation (or business). But it can be a cost-effective way to build a data centre with a much longer lifetime and capacity than a traditional build.
They’re not just something for the likes of Google and Amazon to build; they are well within the reach of the rest of us and, with the right planning, can be a huge success when a company is planning for the long term.
One of the (many) well-worn stereotypes about IT professionals centres around their preferred working environment. It has been suggested that tech pros are very much ‘indoor’ types, preferring the glare of a computer screen in a darkened room to anything resembling natural light. But what happens when these professionals are forced to leave their supposed comfort zone and consider the world outside?
By Jon Leppard, Director at Future Facilities.
In some ways, it is understandable that facilities managers and engineers in particular would have an indoor focus. After all, their main responsibility is to ensure the data centre delivers performance in line with business demands, and understanding how any changes inside the data centre environment will affect performance is key to avoiding risk and potential downtime to the business.
For this reason, owner/operators have long made use of engineering simulation - specifically, 3D modelling, power system simulation (PSS), and computational fluid dynamics (CFD) to predict cooling within the four walls of the data centre. Doing so provides a safe, offline capability to test any potential change without fear of failure or downtime.
This allows facilities managers to optimise data centre performance in terms of efficiency, resilience and conformance. However, whilst this kind of 3D modelling is well established for the internal data centre environment, the same cannot be said for the spaces where the cooling and power infrastructure resides.
While data centre innovation has for a long time focused on the latest equipment or the dynamics within the data centre, the environment outside a data centre has huge implications for what happens within. So how can we assess the impact of the surrounding environment on the efficiency and effectiveness of the data centre?
One of the important factors to consider is the proximity of the data centre to other buildings or structures. For example, if planning permission was granted for the construction of a factory next to a data centre which had previously been standing alone in open space, air flow and levels of pollution surrounding your facility would almost certainly be affected. Therefore, through the use of 3D modelling, engineers would be able to predict exactly how this change would affect the data centre, and take the requisite steps to mitigate any potential problems that would be caused.
This ability to map the external environment is also important for predicting the impact of building a new data centre. Apple has recently been given planning permission to open a data centre on a 197 hectare site in County Galway, Ireland. However, it intends to actually open a total of 8 data halls on the site, and will need to apply for planning permission for each one. If Apple were to use 3D simulation, it would be able to identify precisely the areas on site where a data centre could operate most efficiently and have minimal environmental impact. By demonstrating this to the planning committee they could potentially increase their chances of success.
In addition to the landscape and surrounding infrastructure, data centre performance is affected by the weather. Fluctuations in temperature and humidity impact the efficiency of the cooling infrastructure, resulting in wasted energy. With the drive towards more free cooling (either direct fresh air or indirect heat exchange) data centre performance is more than ever linked to external conditions. In changeable or extreme environments therefore, predictive modelling could be used to assess the impact of a sudden spike or drop in temperature allowing operators to act accordingly.
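A very simple version of that kind of assessment can be sketched in code. The example below (Python; the hourly temperatures and the free-cooling threshold are invented) counts how many hours in a week fall below the temperature at which free cooling is assumed to be sufficient, the sort of figure a predictive model would refine with real site data.

```python
import random

# Hypothetical hourly outdoor temperatures for one week (degrees C).
random.seed(1)
hourly_temps = [random.gauss(14, 6) for _ in range(7 * 24)]

FREE_COOLING_MAX = 18.0   # assumed threshold below which free cooling suffices

free_hours = sum(1 for t in hourly_temps if t <= FREE_COOLING_MAX)
print(f"Free cooling available for {free_hours} of {len(hourly_temps)} hours "
      f"({100 * free_hours / len(hourly_temps):.0f}%)")
```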
The examples above illustrate the importance of external environments as part of the data centre performance puzzle - ignoring it could come at a (significant financial) cost.
It’s time for techies to consider the world outside, even if it is only from the comfort of their own monitor!
Legacy debt, also known as technical debt, is a familiar problem facing data centre and IT heads.
By Dale Green, Marketing Director, Digital Realty.
On-premises architecture, requiring extensive maintenance, becomes out of date. However, because investment has been made and because organisations are now using the related systems, it becomes harder to move away. Operating expenditure continues, and IT professionals carry on spending time maintaining and troubleshooting. As a result, investment in new technologies is reduced, and resources are diverted from exploring new innovations.
It’s a strategic conundrum facing most industries, where the consequences of getting it wrong can be highly damaging. There have been several high-profile outages affecting banks, for example. Andrea Orcel, global head of UBS’s investment bank, says: “The challenge for most banks is that they are not technologists … As technology continues to evolve at a fast pace, becoming ever more critical to their business, they are having to navigate a space that is both highly complex, and does not play to their core competencies.”
Data is another reason why legacy debt needs to be tackled. Traditional data centres are already bulging with data and, yes, cost-per-gig shrinks as storage expands. However, the explosion in volumes (a forecast 180 zettabytes by 2025) demands a new storage model. Data centres can’t carry on acquiring more spinning disks or relying on flash, particularly when you add the Internet of Things (IoT) into the equation. By 2020, Gartner expects more than half of major new businesses to incorporate elements of IoT.
Her Majesty’s Revenue & Customs (HMRC) is making some interesting moves regarding tackling its legacy debt. HMRC has released an IT strategy document, outlining its approach to updating its data centres.
In a statement (pdf), it said: “Virtualisation technology brokered from multiple vendors, moving to a disaster-tolerant environment with less focus on disaster recovery will help HMRC transform our data centre hosting platforms from traditionally dedicated physical resources to virtual cloud based services.”
Embracing virtualisation means the organisation can tackle its current complex structure of applications. The strategy report estimates there are almost 600, with some “built at a time when data was entered into mainframe computers using punched cards.”
Current state of affairs
So, why is technical debt still such an issue? Part of the reason is down to resources. Finding the right professionals who can build a solution isn’t straightforward.
What’s more, there may be other issues which are seen as more urgent. After all, spending on legacy infrastructure is something that happens behind the scenes. Any new headline-grabbing compliance directive is likely to go straight to the top of an organisation’s agenda.
Moving to virtualisation will have a major impact on budgeting. Pay-as-you-go pricing and a shift away from unpredictable capital expenditure towards scalable infrastructure will all mean savings, freeing up finance for transformative initiatives such as launching new applications.
IT and data centre infrastructure faces unprecedented pressure in the face of new technologies and disruption to industries. And legacy debt is a problem that needs to be tackled sooner rather than later.
A new form of data centre can play a central role in helping organisations address the issue of legacy debt.
Iain Chidgey, vice president, International at Delphix, addresses the challenges of migrating data to the cloud and the role of data virtualisation in accelerating and securing the process.
In the last decade, successful organisations have been tasked with pushing out highly creative and innovative ideas that need to be turned into reality, and fast. At the heart of this transformation is IT, which has gone from being the support function that keeps the lights on, to a strategic team that transforms ideas into applications that add business value.
In a bid to be agile enough to keep pace with this innovation, teams are increasingly adopting cloud platforms to underpin their day-to-day operations. Used to provide transformative gains in execution and lowered infrastructure costs, the cloud has become the backbone for delivery of lightning-speed change.
In fact, Delphix’s recent research into the state of DevOps, (which surveyed 100 IT professionals in the UK) revealed that 93 per cent of UK businesses are now using either a public or private cloud, with the top three most popular cloud providers ranking as Microsoft Azure (33 per cent), Amazon Web Services (28 per cent) and Rackspace (20 per cent).
However, whilst the cloud has helped cut costs and boost flexibility, one constraint is still holding organisations back – data. The problem is usually not hardware or software-level incompatibilities with the infrastructure in the cloud; it’s simply that moving the supporting data for organisations with hundreds of applications is a huge challenge. That’s before even considering cloud data security, stability and performance challenges.
In reality, amidst all the hype about how great the cloud is, no one is addressing the elephant in the room: how can you move enterprise applications to the cloud? There are still IT leaders who are concerned that their cloud providers fail to appreciate and understand both the complexity of their potential customers’ applications and their fears that migration could fail.
As a result, many organisations base their new applications around architectures in public clouds but leave a huge proportion of their old applications on-premises. This is in part because manually migrating data to the cloud is enormously labour-intensive and error-prone. IT teams are often asking: How can we migrate an application to the cloud when data used by that application often runs into multiple terabytes? Trying to pump multiple terabytes of data from in-house IT over the internet to a cloud provider can take days and saturate the network. Often the most expedient way to get large data sets into the cloud is by shipping the physical media to the cloud provider, but that’s hardly the most efficient approach in today’s culture of immediacy. This task is then further complicated by the ongoing need to propagate changes from in-house databases to refresh databases in the cloud with the newest, most accurate data for use in development and quality assurance. If your data is stored in databases, you’ll need a database administrator (or a team of them) to run these tasks. After all, their expertise is necessary to ensure the transfer is performed without data loss or corruption.
One way around this is for organisations to use data virtualisation. Doing so means they can take a single backup of production databases, which is then replicated to the cloud. From this cloud copy, multiple ‘virtual databases’ can be provisioned into cloud test environments extremely quickly and with virtually no additional storage footprint. This dramatically increases the amount and quality of application testing, thus accelerating the cloud migration process. Using virtual copies, organisations can refresh, reset or rewind data at will. It also means that for every production environment migrated, a handful of non-production environments won’t need to be migrated at all, simplifying and speeding up the whole migration process, so that the traditional barriers to cloud migration are removed.
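Conceptually, those ‘virtual databases’ behave like copy-on-write clones of a single backup. The sketch below (Python, purely illustrative and not any vendor’s actual implementation) shows the idea: every clone shares the same base snapshot and stores only the rows it has changed, so ten test copies cost little more than one.

```python
# Illustrative copy-on-write "virtual copy": clones share one base snapshot
# and record only their own changes. A conceptual sketch, not a real product.

class Snapshot:
    def __init__(self, rows):
        self.rows = dict(rows)          # the single physical copy

class VirtualCopy:
    def __init__(self, snapshot):
        self.snapshot = snapshot
        self.delta = {}                 # only the rows this clone has modified

    def read(self, key):
        return self.delta.get(key, self.snapshot.rows.get(key))

    def write(self, key, value):
        self.delta[key] = value         # the base snapshot is never touched

base = Snapshot({"cust:1": "Alice", "cust:2": "Bob"})
test_env = VirtualCopy(base)
dev_env = VirtualCopy(base)

test_env.write("cust:1", "Alicia")      # change visible only in the test clone
print(test_env.read("cust:1"))          # Alicia
print(dev_env.read("cust:1"))           # Alice
print(len(test_env.delta), "row(s) of extra storage for the test clone")
```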
Copying on-premises data into the public cloud opens up an important hybrid cloud challenge around data security. Most enterprises are very sensitive to copying confidential data into non-production cloud environments, but for cost reasons they would still like to run their test and development workloads there. To avoid leaving confidential data exposed, the only option is to mask the confidential pieces of data; for instance, by replacing real credit card numbers in a database with fake ones before moving anything into the cloud.
With security breaches appearing in the news on a regular basis, organisations want to use data masking more broadly. However, the ability to audit what data is masked, and to find all the places where data exists in unmasked form, is challenging. Data virtualisation helps by reducing data sprawl to a single backed-up physical version. By integrating data masking into data virtualisation, you can mask the single physical backup and then automate the delivery of fully masked virtual copies on demand. Combining data masking with data virtualisation saves time and storage; data is masked only once and then delivered many times. In the cloud example, this means that production data never leaves the on-premises data centre, with only the masked data being replicated – in essence a copy of a copy.
This is particularly pertinent following the introduction of the EU’s General Data Protection Regulation (GDPR), which places strict controls on businesses that collect, use and share data from European citizens. Companies, EU-based or otherwise, face new requirements that compel them to rethink their approaches to customer privacy and implement new protections. From 2018, any organisation that collects, uses or shares personal information about European citizens will have to demonstrate compliance with the new law. This includes using various techniques to ensure that the protection of data is built into the design and infrastructure of an organisation by default. Organisations can be fined up to 4 per cent of global turnover for breaching the new laws. With the cost and pace of regulatory reform continuing to march on, organisations are unable to justify the risk of having sensitive information in the cloud.
Data masking essentially means the ability to replace a company’s sensitive data with a non-sensitive, “masked” equivalent while maintaining the quality and consistency needed to ensure that the masked data is still valuable to operational analysts or software developers. Although vendors such as Delphix have provided this technology for some time, the GDPR, which takes effect in 2018, dramatically elevates its relevance and importance. Data masking represents the de facto standard for achieving pseudonymisation, especially in so-called non-production data environments used for software development, testing, training, and analytics. By replacing sensitive data with fictitious yet realistic data, masking solutions neutralise data risk while preserving the value of the data for non-production use.
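A minimal sketch of that kind of masking appears below (Python; the field names, key and records are invented). It replaces card numbers with fictitious values keyed with HMAC, so the same real value always maps to the same masked value and consistency is preserved across copies and environments.

```python
import hmac, hashlib

SECRET_KEY = b"rotate-me"   # hypothetical masking key, never shipped to non-production

def mask_card_number(card_number: str) -> str:
    """Deterministically replace a card number with a fictitious but consistent one."""
    digest = hmac.new(SECRET_KEY, card_number.encode(), hashlib.sha256).hexdigest()
    digits = "".join(str(int(c, 16) % 10) for c in digest[:12])
    return "9999" + digits              # fixed prefix marks the value as masked

records = [
    {"name": "A. Customer", "card": "4111111111111111"},
    {"name": "B. Customer", "card": "4111111111111111"},   # same card, same mask
]
masked = [{**r, "card": mask_card_number(r["card"])} for r in records]
print(masked[0]["card"] == masked[1]["card"])   # True: referential consistency preserved
```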
Now that the excitement around moving into the public cloud has reached established enterprises, seemingly everyone wants to migrate to cut costs and organisational inefficiencies. To safeguard against the data migration woes and overcome time constraints and labour-intensive projects, enterprises can venture safely into the cloud using data virtualisation software to make their cloud migrations cheaper, faster and easier.
For CIOs, implementing an effective BYOD programme to harness the power of mobility sometimes feels like an impossible balancing act. BYOD requires simultaneously securing corporate data while protecting the privacy of personal data. What makes this difficult, of course, is that both live on the same mobile device. There’s a widely held belief that this is impossible – that security and privacy are fundamentally at odds with each other – that tightening up security means giving up on privacy. By Ojas Rege, Chief Strategy Officer at MobileIron.
Luckily, this is a myth. Security and privacy can, in fact, live out their lives in perfect harmony in a BYOD programme. However, like any marriage, it takes some work. And that work sits squarely on the shoulders of IT.
Failing to protect users’ privacy leads to a culture of distrust. Employees become less likely to comply with security best practices and embark on their own personal “Shadow IT” journeys, using apps and devices without IT’s authorisation. The MobileIron Security & Risk Review revealed that most companies had at least some business data on mobile devices that were not compliant with their organisation’s security policy. So failing to protect privacy is not an option. But failing to protect corporate data gets you fired, so, clearly, that is not an option either.
Though IT understands this conundrum, understanding is not the same as solving. Let’s now discuss a win-win-win programme to make BYOD successful, address employee privacy concerns, and let IT professionals keep their jobs.
Marriages fail when the two parties have differing expectations of the relationship. When implementing a BYOD programme, you can manage conflicts between privacy and security by proactively making sure they are at the top of the agenda from the very beginning of the process. It is important to understand employee concerns upfront. Most employees are comfortable with their employer controlling their work data but do not want to share any data that reflects their personal lives, such as personal emails, photos, and text messages.
It is equally important to understand what your security team views as key mobile risks. Home in on the threats and remediation mechanisms so you can determine if any conflicts exist between security requirements and privacy expectations. Understanding both sides will let you define appropriate policy and operational parameters to minimise risk from both the company and employee perspective.
Marriages fail when trust is replaced by suspicion. Transparency increases employee confidence and legal compliance. A survey of 3,000 consumers showed that 30 per cent of employees would leave their job if they thought employers could access their personal information. With so much hinging on trust, it’s important to ensure that employees are clear on what companies can and can’t see on their mobile devices. Another reason for transparency is the evolving legal landscape. Businesses should be precise in what data is monitored, collected, used and stored. If requirements are accurately identified in the beginning, businesses can be safe in the knowledge that they are only collecting what they need, using the data for its intended purpose and deleting once it’s no longer in use.
Marriages fail when the foundation is cracked. Enterprise mobility management (EMM) solutions are the technical foundation of any BYOD programme. EMM solutions separate personal and business data on the device and allow IT to appropriately secure the business data without compromising the personal data. EMM solutions also provide the ability to selectively wipe just the business data. As a result, employees can continue to use their ever-expanding portfolio of fun, personal apps without worrying about IT seeing something it shouldn’t. More advanced EMM solutions also provide IT additional privacy controls to prevent access to personal information and provide employees with a clear visual representation of how their privacy is protected.
EMM addresses the employee’s main privacy concern: the monitoring and deletion of personal data. Without EMM, the BYOD programme has no technical foundation and either security or privacy will likely be compromised.
Marriages fail when communication stops. Just having an EMM solution in place for BYOD is not enough. IT must communicate directly with employees so they understand how EMM works and how their privacy is protected.
Without communication, employees will develop their own stories about what IT can and cannot see … and the employee will almost always assume the worst.
Yes, the security and privacy marriage of BYOD can be saved. It can, in fact, thrive, but IT has to be willing to invest in upfront user research, a partnership mindset, technology to support data separation, and constant communication.
This proactive approach will foster a culture of trust and set up the BYOD programme for high adoption and success. And it is always better to start a relationship on the right foot instead of trying to patch it up later.
Q: What is the best advice you’ve been given?
A: A mentor I had at Goldman Sachs explained that there are two types of people in the world: those who complain about issues, and those who propose solutions. A successful career is built on creating innovative solutions to real-world problems.
Q: What is on your bucket list?
A: One day I’d like to own a massive room dedicated to aquariums where I would keep a variety of unusual fish, and maybe if I’m really lucky have some stingrays.
Q: IT Security – a ticking time bomb?
A: I wouldn’t call it a ticking time bomb, but it’s a risk that has to be carefully managed. Often security is an afterthought when developing new technology. With the current rate of innovation and technologies like the Internet of things where everything is interconnected, the way you manage, monitor and deploy updates will be crucial to security.
Q: What is your greatest indulgence?
A: I’m an aspiring foodie, so my greatest indulgence is going to brilliant restaurants, trying new foods and experimenting at home in the kitchen.
Q: Can you tell us a little known fact about yourself?
A: I’m a total chilli addict. I recently went to a chilli festival in Milton Keynes where you could taste the hottest chillies from all over the world.
Q: How would your work colleagues describe you?
A: I would say, hard working, lots of ideas, a little bit of a pain in the, shall we say, ‘neck’!
Q: What is the one thing you wish you had known when you were younger?
A: It would be: don’t always assume that other people know what’s best for you. It’s too easy to forget your dreams and fall into your career; what is important is to do the stuff that you love.
Q: Digitalisation – what does it mean to you?
A: I think for me, digitalisation is when technology becomes integrated with every function of a business – it is changing the way people interact with technology – in some cases, dramatically changing the overall user experience. This results in a domino effect on business processes, technology deployment and people management.
Q: What makes you happy?
A: The first thing that comes to mind is my family.
Q: One tech company to watch over the coming months?
A: The next revolution in technology is happening in the container arena, so one to watch here is Docker – because it’s enabling the next wave of interconnected applications and spearheading the drive behind the next evolution in the datacentre – microservices.
The reason why Chris and I founded StorageOS was to solve the storage issues in this container space by delivering persistent storage and enabling secure data mobility for containers to move data between bare metal, virtual machines or cloud storage. So, we’re one to watch too!