Seems like everyone has an opinion as to just how catastrophic, or not, the after effects of Brexit and/or Trump’s election as US President will be. Having thought long and hard about both scenarios, and read plenty of negative and positive opinion, conclusion number one is that neutral, objective advice is very hard to come by. Either Brexit will be a disaster, and UK citizens will be walking around in sackcloth and ashes for years to come, or we’ll all be driving around in expensive (obviously non-European) cars, unable to spend all the money we are making as the UK thrives while Europe nosedives into a never-ending recession. As for Trump, he’s either going to upset so many folk that, sooner or later, a third world war will start; or he’ll teach the rest of the world just how great, and self-sufficient, the USA can be.
Conclusion number two? Well, this is the attempt to be serious. I’d put forward the theory, by no means unique, that regardless of what actions are taken over Brexit or what happens in the USA v the Rest of the World saga, globalisation means that there will be many unintended knock-on consequences, and these will frequently outweigh the intended benefits. It sounds such a great and simple idea for any country’s leader to promise to protect the jobs of his fellow citizens. In reality, if this policy is pursued, then taxes and the cost of goods will have to rise to pay for this protected workforce. And, thanks to the global interdependency of so many industries, by protecting the jobs of one industry sector, there’s the very real possibility that jobs in other sectors, which supply overseas organisations, may be jeopardised.
Apologies if this doesn’t sound terribly clear, but I think that the fundamental problem with both Brexit and Mr Trump’s apparent policies (we can’t be quite sure until he’s in power and making real decisions!) is that they take no account of globalisation, which is bigger than any one government or country. Indeed, there are several fascinating books suggesting that the global mega corporations actually have more power and influence in the 21st century than any one politician. Globalisation leads to polarisation – more and more power with fewer and fewer organisations and individuals, more and more jobs and wealth created in fewer and fewer cities, regions and countries.
Quite how one both allows a free market and keeps such developments in check is the sort of problem that might well have kept King Solomon awake at night. But, if the downtrodden, poor and unemployed masses who feel aggrieved at the idea of ‘foreigners’ stealing their jobs, education and health systems think that either Brexit or Trump can deliver a utopian UK or USA, where London and the south-east, or a handful of US cities, are ‘diluted’ so that their disproportionately successful economies and wealth can be more evenly distributed across the whole of their respective nations, then I suspect they are in for a shock.
Brexit and Trump may both be successful, however this might be defined, but they are unlikely to turn back time. Ironically, the best way to combat the ‘threat’ of globalisation and monopolisation is by groups of countries working together to regulate the activities of those who seek to benefit, at any cost, from these trends. So, leaving the EU, or pursuing a path of ‘splendid isolation’, is not the answer.
As anyone who knows their war movies and westerns knows, a town, region or country divided falls easy prey to the enemy. Bring in Clint Eastwood to get everyone working together, and the enemy is quickly shown the exit!
And what has all of the above to do with the digital world and the IT industry? Well, it is both a part of the globalisation ‘problem’ and a part of the potential solution. And it will be fascinating to see how the Digital Age is managed by governments across the globe, unable to resist market forces, but just maybe able to ensure that the right amount of taxes are paid (!) and that the overall impact of digitalisation is a positive one, whereby technology is used to bring people, communities and countries together, and not to turn us all into our own little, self-centred islands.
Angel Business Communications Ltd, 6 Bow Court, Fletchworth Gate, Burnsall Rd, Coventry CV5 6SP. T: +44(0)2476 718970. All information herein is believed to be correct at time of going to press. The publisher does not accept responsibility for any errors and omissions. The views expressed in Digitalisation World are not necessarily those of the publisher. Every effort has been made to obtain copyright permission for the material contained in this publication. Angel Business Communications Ltd will be happy to acknowledge any copyright oversights in a subsequent issue of the publication. Angel Business Communications Ltd © Copyright 2016. All rights reserved. Contents may not be reproduced in whole or part without the written consent of the publishers. ISSN 2396-9016
CIOs in Europe, the Middle East and Africa (EMEA) are clearly engaging with the era of digital business, with 50 percent participating in a digital ecosystem and 65 percent contacting startups to acquire key digital technology capabilities and skills, according to Gartner, Inc.’s annual CIO survey.
"EMEA CIOs expect their enterprise IT budget to increase 1.4 percent, on average, in 2017 — the smallest increase of any of the world's regions — but, despite this, spending on digitalization in EMEA is on the rise," said Andy Rowsell-Jones, research vice president at Gartner. "EMEA CIOs are spending an average of 19 percent of their enterprise IT budget on digital initiatives, a figure set to increase to 29 percent in 2018. Their investments in digitalization include key technologies for traditional digital marketing and digital sales channels, advanced analytics, the Internet of Things, enhanced digital security solutions, and business algorithms and learning machines."
"Digitalization is maturing, and as it does, it's likely that organizations which are investing in digital business will become part of a digital ecosystem," said Mr. Rowsell-Jones. "Many will need to move away from a linear, value-chain-based business model, in which they trade with well-known partners and add value in stages, to become part of a faster and more dynamic networked digital ecosystem."
As digital business takes center stage for EMEA CIOs, it's not only their participation in a digital ecosystem that is growing. "They are also increasing the number of digital partners they work with, from an average of 63 today to an expected 127 in 2018," said Mr. Rowsell-Jones.
Ecosystems support business models that enable organizations to achieve performance superior to that of businesses that operate independently. "EMEA CIOs have clearly understood that by increasing the number of partners in your ecosystem, you extend your company's reach and deliver greater business value," he added.
Twenty-eight percent of EMEA CIO respondents identify digitalization as their No. 1 business priority for 2017. This percentage is higher than those of their counterparts in other regions; it compares with 11 percent for North American respondents, 21 percent for those in Latin America and 22 percent for those in Asia/Pacific.
"EMEA CIOs' interest in digitalization focuses on initiatives such as 'Industry 4.0', the European eGovernment Action Plan, the relative sophistication of financial technology in EMEA, and the extensive digitalization of solutions and services in industries such as transportation, logistics and retail," said Mr. Rowsell-Jones.
From a technology perspective, business intelligence (BI) and analytics remain the top investment priorities of EMEA CIOs. This is in line with their main business priority for 2017. "Analytics are crucial to customer, citizen and user engagement. They also underpin the mediation and value exchange that occurs in a digital ecosystem," added Mr. Rowsell-Jones.
Increased adoption of bimodal IT practices is of paramount importance for boosting digital performance. EMEA CIOs have made progress in this regard. In 2015, 39 percent of those surveyed had bimodal IT, and this figure has risen to 41 percent this year. Bimodal IT offers great benefits: for 64 percent of the respondents, it's bringing business and IT groups closer together, and for 52 percent it's improving the perception of IT.
For CIOs whose organizations participate in a broad digital ecosystem, the most common challenge is to find the right talent. "Twenty-six percent of the EMEA CIO respondents — a rise of nine percentage points from 2015 — consider finding the right skills and resources to be the biggest barrier to successful execution of their job," said Mr. Rowsell-Jones.
The largest talent gaps remain in the areas of information/analytics and digital business. "Some organizations have shifted the focus of their business to software engineering, and one challenge facing them in particular is that planning horizons for finding, acquiring and retaining talent tend to be too short. With 59 percent of talent planning being for less than one year, companies that are feeling the effects of the talent shortage probably failed to plan far enough ahead," said Mr. Rowsell-Jones.
CIOs should create a leadership action plan that's ready for a digital ecosystem. "It's most important to get a clear understanding of your current leadership situation, to devise a realistic plan for improving it, and to select a few areas of focus in which to progress," concluded Mr. Rowsell-Jones.
Gartner, Inc. forecasts that IT spending in Europe, the Middle East and Africa (EMEA) will total $1.25 trillion in 2017, a 1.9 percent increase from 2016. IT spending across all the constituent regions of EMEA will be almost flat in 2016, increasing 0.6 percent year on year.
This week, Gartner analysts are discussing the next evolution of digital business and IT investments needed to drive digital to the core at Gartner Symposium/ITxpo in Barcelona.
Across all the countries of EMEA, spending on devices is expected to decline and to be the main contributor to an overall slowdown in IT spending in 2016. The segments that will contribute most to overall IT spending growth in 2017 are software and IT services (see Table 1).
"Spending on digitalization is on the rise in EMEA, and we’ll witness some leading organizations modernize their core IT systems and increase spend on software and services in particular, as part of their digital transformation," said John-David Lovelock, research vice president at Gartner.
Spending on data center systems, particularly servers, would normally grow when spending on software increases, but with the growing adoption of software as a service (SaaS) and other cloud offerings, data center spending will be more muted than usual.
Table 1. EMEA IT Spending Forecast (Millions of Constant U.S. Dollars)
Segment | 2016 Spending | 2016 Growth (%) | 2017 Spending | 2017 Growth (%)
Data Center Systems | 58,163 | 1.6 | 58,953 | 1.4
Software | 112,244 | 6.0 | 119,842 | 6.8
Devices | 206,238 | -3.7 | 204,660 | -0.8
IT Services | 327,676 | 3.8 | 341,133 | 4.1
Communications Services | 526,782 | -0.9 | 530,397 | 0.7
Overall IT | 1,231,103 | 0.6 | 1,254,986 | 1.9
Source: Gartner (November 2016)
Brexit to Have Most Impact in Western Europe
The most pronounced Brexit effect is the decline in the pound sterling, which has caused the prices of many IT products within the U.K. to increase in 2016.
"When the prices of goods increase, consumers and businesses shift their buying patterns, and the simple reaction is to buy less well-featured products," said Mr. Lovelock. "But now that there are viable cloud offerings in the U.K., organizations are also able to shift their spending into different areas — to buy computing as a service, instead of servers. These shifts will play out further in 2017. Banks in France and Germany have increased their spending on software and consulting in 2016 to attract, or at least be ready for, any banking activity shifting away from London."
IT Spending in Western Europe Will Pick Up in 2017
In Western Europe, IT spending in constant U.S. dollars is forecast to total $803.5 billion in 2017, a 1.6 percent increase from 2016 (see Table 2). IT spending levels in Western Europe are likely to be essentially flat in 2016, with growth of 0.2 percent year over year.
Table 2. Western Europe IT Spending Forecast (Millions of Constant U.S. Dollars)
Segment | 2016 Spending | 2016 Growth (%) | 2017 Spending | 2017 Growth (%)
Data Center Systems | 42,274 | 1.8 | 42,679 | 1.0
Software | 91,952 | 5.3 | 97,641 | 6.2
Devices | 113,701 | -5.5 | 110,899 | -2.5
IT Services | 292,265 | 3.8 | 304,281 | 4.1
Communications Services | 250,521 | -3.1 | 248,021 | -1.0
Overall IT | 790,712 | 0.2 | 803,521 | 1.6
Source: Gartner (November 2016)
Devices and Communications Services Spend to Fall in Western Europe
While IT services will remain the largest segment in terms of spending, and will continue to grow in 2017 (by 4.1 percent), the devices and communications services segments are likely to decline for at least the next three years.
"Mobile phone adoption is nearly at a saturation point — almost all users who want a new phone already have one," said Mr. Lovelock. "The mobile phone market has therefore shifted to a replacement cycle, and mobile phone prices have reached a plateau. This compounds the problems of communications service providers, who are having to compete more directly on price, by providing more services for the same amount and offering discounts on existing plans."
Gartner forecasts the PC market in Western Europe to total 47.8 million units in 2016 and to decrease by 3 percent in 2017. Gartner expects PC prices in the U.K. to increase by less than 10 percent in 2017 as vendors look to "de-feature" their PCs to keep prices down and take advantage of the single-digit decline in PC component costs in 2016.
Smartphone sales in Western Europe will total 147 million units in 2016, a 1 percent increase from last year. Gartner projects smartphone sales to increase 4.7 percent in 2017. Gartner analysts also expect that in 2017 more players (mainly from China) will aggressively target the "affordable" premium range, as well as improve basic smartphones, helping overall smartphone replacement volumes in 2017.
As with many new technology trends, certain assumptions and hype emerge that can influence buyer behavior and lead to poor decisions. Gartner, Inc. has identified seven of the most common flawed assumptions in the hyperconverged integrated systems (HCIS) market.
"HCIS, which encompasses software-centric architectures that integrate compute, storage and networking on commodity hardware, promises a cost-effective infrastructure solution that is simple to deploy, manage and scale," said George Weiss, vice president and distinguished analyst at Gartner. "However, new and emerging technologies are often surrounded by hype as vendors try to accelerate sales. Infrastructure and operations (I&O) leaders and decision makers should examine the following points carefully to avoid later disappointments or traps."
While there are risks in engaging with vendors that lack a solid track record, the commodity pricing of parts and infrastructure will alleviate some of that risk. It will become increasingly easy to navigate a worst-case vendor failure.
According to International Data Corporation's (IDC) latest study, tablets are among the top IT spending priorities for 2017 in European enterprises, with sectors such as education, hospitality, government, and transport ranking them among their top 3 priorities.
Based on 1,500 interviews with IT and LOB decision makers in 10 vertical sectors across the U.K., France, Germany, and Sweden, IDC's Tablet in Enterprise 2.0 — The Large Opportunity study reveals that tablets are part and parcel of the mobility strategy of 60% of companies, with over two-thirds evaluating or planning to purchase slate or detachable devices in the short term, contributing to a significant increase in tablet penetration over the next three years.
The study confirms that tablet adoption continues to expand in terms of users and new usage scenarios. From highly mobile employees using detachable devices to perform productivity tasks no matter where they are, to customer-facing staff using tablets to take customers' orders or their e-signatures, tablets have become a business computing device in their own right among European companies.
The study, which analyzes businesses' device strategies and requirements in terms of hardware and software, delves into the dynamics between PCs, tablets, smartphones, and phablets, as well as new technologies such as wearables, and evaluates the opportunities that evolving IT technologies offer to companies as they transform their workplace.
According to the survey's findings, European enterprises are maturing in their approach to tablet deployments and have started to focus more on applications and business outcomes, with deployments driven by the desire to increase employees' productivity, working flexibility, and collaboration, as well as by enterprises' digital and workspace transformation strategies.
"As mobility and digital transformation strategies drive the mobilization of specific industry applications," says Marta Fiorentini, research manager, IDC EMEA Personal Computing, "companies continue to look at tablets to harness the benefits of mobile data-enabled workers, support new digital business processes, and maximize engagement with always-connected customers. Hardware is still important but applications have become crucial and a key differentiator to achieve the business objectives behind tablet deployments."
The study also shows that although the rising popularity of touch input continues to support demand for slate tablets, purchase intentions among the companies surveyed show a strong interest in hybrids (which include detachables and convertibles) driven by the productivity and content creation needs of an increasingly mobile workforce. Hybrid devices are therefore anticipated to account for over half of the tablets that companies in the sample intend to purchase in the next few years, confirming the upward trajectory that IDC has observed in the market (see Consolidated Expansion of New Form Factors Predicted in the Second Half of 2016 and Beyond, says IDC).
According to a new forecast from the International Data Corporation (IDC) Worldwide Semiannual Services Tracker, IT Services and Business Services revenues are poised to pass the $1 trillion mark for the first time in 2018. Worldwide services spending is expected to stay within the $900 billion range in 2016 and to near $1.1 trillion by 2020.
Worldwide Services Revenue by Category (in Billions)
Services Category | 2016 | 2020
Business | $287 | $369
IT | $649 | $731
Grand Total | $936 | $1,100
Source: IDC Worldwide Semiannual Services Tracker 1H 2016
On a geographic basis, the United States is expected to generate the largest percentage of services spending throughout the forecast period, followed by Western Europe. The fastest growing regions will be Latin America, then the Middle East and Africa (MEA), which is poised to surpass Canada's services spending by 2017. Among the major global regions, Asia/Pacific (including Japan) is forecast to have the highest overall growth rate.
Global Regional Services Year-Over-Year Revenue Growth
Global Region | 2016 | 2020
Americas | 3.2% | 4.2%
Asia/Pacific | 6.8% | 5.4%
EMEA | 1.7% | 3.1%
Grand Total | 3.3% | 4.1%
Source: IDC Worldwide Semiannual Services Tracker 1H 2016
With more than $100 billion worth of spending each this year, the largest services markets will be Key Horizontal Business Process Outsourcing (BPO) and Systems Integration Services, which will also generate the largest revenue pools over the 2016-2020 forecast period. Business Consulting Services is forecast to outpace both markets in terms of growth.
Cloud-related services spending is expected to reach more than $98 billion by the end of 2016.
"An increased demand for digital solutions around 3rd Platform technologies will see cloud-related services spending nearly double by 2020 with continued considerable growth expected into the near future," said Lisa Nagamine, research manager with IDC's Worldwide Semiannual Services Tracker. "Project-oriented and outsourcing services produce the majority of spending in cloud-related IT services driven by a high demand for cloud platform adoption."
"As digital transformation involves significant changes to people and business processes as well as technology, most buyers will engage an outside services firm to help them along their transformation journey. As the demand for these services grows, often in double digits, service firms that invest in the right skills and assets around IDC's Third Platform of cloud, big data, social, and mobility, as well as Innovation Accelerators such as next-generation security and IoT, will see their growth prospects shift away from legacy IT services, which are on the decline, and towards the digital opportunity that lies ahead," said Rebecca Segal, group vice president, Worldwide Services.
Survey finds that resources bundled with management and security services make up nearly half of infrastructure and application services spending.
According to 451 Research’s Voice of the Enterprise: Hosting and Cloud Managed Services study, enterprises currently spend 28% of their total enterprise IT budgets on hosting and cloud services. Next year, that number is expected to climb to 34%, indicating a growing reliance on external sources of infrastructure, application, management and security services.
Although hosting and cloud providers frequently position themselves as primarily providers of infrastructure, 451 Research finds that just 31% of spending goes towards infrastructure services while nearly 70% of enterprise budgets for hosting and cloud is being spent on other services, specifically:
“The markets for unmanaged IaaS and SaaS are dominated by large, hyper-scale vendors. However, this spending trend indicates there is an appetite for the type of bundled services a broader market of managed service providers are well positioned to deliver. A strong opportunity exists for service providers offering a diversified set of hosting and cloud services that includes infrastructure and application hosting, as well as managed services and security services delivered around them,” says Liam Eagle, Research Manager at 451 Research and lead author of the Voice of the Enterprise: Hosting and Cloud Managed Services study.
The survey indicates that enterprises use hosting and cloud services supplied by a broad range of provider types. Public cloud infrastructure providers, which are used by 69% of respondents, are the most common, followed by managed hosting providers, used by 26% of enterprises. IaaS and SaaS usage is strong, and these markets are dominated by small numbers of established leaders. “The market for managed infrastructure and application services is a longer tail market, with greater opportunities for providers who emphasise expertise in operating, optimising and securing the infrastructure and application products they deliver,” says Eagle. “This includes opportunities to deliver services based on reselling infrastructure and application services from the largest IaaS and SaaS vendors.”
The Voice of the Enterprise: Hosting and Cloud Managed Services study marks the first results for this new survey line within 451 Research’s Voice of the Enterprise service. It tracks the services enterprises buy as they move on-premises workloads into hosted and cloud infrastructure and application environments.

The use of cloud computing gives businesses many benefits, ranging from greater agility to widespread cost savings. However, this is just the start of what can be achieved. Orchestrating the use of cloud technology unlocks even more value.
By Wayne Stallwood, AWS Practice Lead, KCOM.
With the speed at which companies are deploying and introducing new solutions, poorly performing or badly designed cloud infrastructure is the last thing a business wants to come up against. Cloud orchestration, in the form of Continuous Integration / Continuous Delivery (CI / CD), can be used to ensure organisations can rely on their infrastructure, and to mitigate risks around the clock before it’s too late. However, there seems to be an industry reluctance towards the widespread use of CI / CD.
Cloud orchestration doesn’t need to be a feared concept; it should be quite the opposite. There are many benefits to be gained from CI / CD capabilities; they just need to be uncovered.
Cloud platforms, such as AWS, fully support the principle of Infrastructure as Code (IaC) as part of their standard service offering, and there is a wealth of third-party tools to extend or replace this. All infrastructure and resource configuration that is possible through the management console can also be applied as a series of templates. Combine this with a configuration management tool, such as Chef, and it’s possible to drive and manage the configuration of your entire cloud-hosted service as if it were code. This enables a software engineering approach to be applied to infrastructure, and is a fundamental principle for producing and managing auditable, repeatable and scalable services within cloud environments.
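To make the Infrastructure as Code idea concrete, here is a minimal, hypothetical sketch: the desired infrastructure is declared as plain data, serialised deterministically, and kept under version control like any other source file. The resource names and structure below are illustrative only, not an excerpt from a real CloudFormation schema.

```python
import json

# The desired infrastructure, declared as data rather than clicked
# together in a console. (Names and structure are illustrative.)
template = {
    "Description": "Example web tier",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": "t2.micro"},
        },
    },
}

def render(tpl):
    """Serialise the template deterministically so version-control
    diffs between two revisions are small and meaningful."""
    return json.dumps(tpl, indent=2, sort_keys=True)

print(render(template))
```

Because the rendered output is deterministic, a one-line change to the declared infrastructure shows up as a one-line diff in the repository, which is what makes the audit and review workflow described above possible.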
The obvious first win is that the configuration management of the infrastructure can now exist within a common version control system such as Git. Complementing this with a branching model, such as Gitflow, makes it possible to strike the perfect balance between agility and control, with each change being moderated, tested and audited before it hits the production system.
Cloud orchestration in the form of CI / CD applies the concepts of a code delivery pipeline to the templates that manage the infrastructure, in the same manner, or even within the same pipeline, as the application code.
Within this model, changes to infrastructure are made in short iterative cycles with a chain of validations and tests to pass on the path to release. Many of these are automated with the pass/fail state signalling promotion to the next stage of the pipeline. However, as the release iterates through test environments and through to user acceptance testing or pre-production, it’s quite likely that some manual approval stages will be incorporated into the workflow. Normally these pipelines will be visualised on screens with dashboards showing the current state of the deployment. Often the update of a specific branch within the version control system will initiate this process automatically.
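The staged promotion described above can be sketched in a few lines. This is a simplified model, not a real pipeline tool: each stage runs its checks, a pass promotes the release candidate to the next stage, and a failure halts the pipeline. The stage names and the manual approval gate are purely illustrative.

```python
def run_pipeline(stages, candidate):
    """Run `candidate` through `stages` in order.

    `stages` is a list of (name, check) pairs, where `check` is a
    callable returning True (pass) or False (fail). Returns the list
    of stage names the candidate passed before stopping.
    """
    passed = []
    for name, check in stages:
        if not check(candidate):
            print(f"{name}: FAILED - halting pipeline")
            break
        print(f"{name}: passed - promoting to next stage")
        passed.append(name)
    return passed

# Illustrative stages: two automated gates and one manual approval gate.
stages = [
    ("unit-tests", lambda c: c["tests_ok"]),
    ("integration", lambda c: c["integration_ok"]),
    ("user-acceptance", lambda c: c.get("uat_approved", False)),
]

candidate = {"tests_ok": True, "integration_ok": True}
print(run_pipeline(stages, candidate))  # halts at the manual UAT gate
```

In a real pipeline the automated gates would run test suites and the manual gate would wait for a human approval event, but the promotion logic is the same: no stage is reached until every earlier stage has passed.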
A relatively common misconception is that CI / CD stands for Continuous Integration and Continuous Deployment, whereas the strict definition is Continuous Delivery. The subtle change in wording is important when considering market reluctance, as the principle of CI / CD does not necessarily mean that every iterative change will be deployed to production environments. It just means that, in principle, it can be.
This common misconception has led many businesses down a path of believing that cloud orchestration in the CI / CD pattern will result in frequently changing production environments. The fear that this will expose a business to unnecessary risks has overshadowed the benefits for the CI / CD approach, where businesses can in fact free themselves from the often-lengthy change management processes inherent in traditional enterprise scale services.
With any deployment of technology, changing established ways of working can be a daunting challenge. In the world of physical hosting, change carries more risk, and trialling alternative solutions can be expensive, so IT teams choose to avoid it at all costs. This classic mindset needs to change, as the fear will hold companies back from development.
In conjunction with cloud-enabled deployment models, such as Blue/Green, CI / CD can significantly lower the risk of a failed deployment or of unintended issues hitting production. The iterative nature of the changes means that each change should be better understood, and should already have passed through lower test environments and automated testing as an individual change before being committed to live services.
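The Blue/Green idea itself is simple enough to sketch: two environments exist, one live and one idle; a new release is deployed to the idle environment, validated there, and traffic is switched only if validation passes, so the previous version remains untouched for instant rollback. The class below is a toy illustration of that switchover logic, not any vendor's implementation.

```python
class BlueGreen:
    """Toy model of a Blue/Green deployment switchover."""

    def __init__(self):
        self.envs = {"blue": "v1", "green": None}  # environment -> deployed version
        self.live = "blue"                          # which environment takes traffic

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, smoke_test):
        target = self.idle
        self.envs[target] = version        # deploy only to the idle environment
        if smoke_test(self.envs[target]):  # validate before taking any traffic
            self.live = target             # the switch itself is a single step
            return True
        return False                       # live environment was never touched

bg = BlueGreen()
bg.deploy("v2", smoke_test=lambda v: True)
print(bg.live, bg.envs[bg.live])
```

Note that a failed smoke test leaves `live` pointing at the old environment, which is exactly the property that lowers the risk of a bad release reaching production.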
This iterative approach allows any risk or fault to be attributed more accurately to a specific change. This contrasts with a more monolithic approach of environment-wide updates, where a combination of changes made within a longer development phase is bundled together in a release and applied in a single system- or application-wide update.
Any organisation looking to move to cloud infrastructure with a compelling desire to improve its agility needs to consider a CI / CD approach to infrastructure deployment. Cloud solutions can massively compress the deployment time of an individual virtual machine instance or service. However, if the whole process is wrapped up in a lengthy change control process to bring that service online, then the actual improvement in delivery pace will be very small.
A CI / CD approach does come with its own challenges; however, these can be easily tackled with the right foresight. The capacity of deployed resources has a more immediate effect on the running cost of the infrastructure, therefore it’s important to track how changes to either the application code or infrastructure affect the performance and capacity requirements of the cloud environment. If your CI / CD approach has a particularly high cadence, then it’s possible that performance bottlenecks could have been introduced several iterations back from when you identify them in a peak period.
A good Application Performance Monitoring (APM) tool can pay dividends here, and with careful integration it’s even possible for the CI / CD pipeline to publish release points straight onto the dashboards provided by the APM tool. This can make it clear which release triggered an increase in database query load, or changed the average transactions per second for example.
Another challenge is that although the software engineering principles may be well understood by developers, the repetitive nature of CI / CD means that readjustment of techniques is often needed. To ensure that teams are getting the most of the CI / CD approach, adequate training and support to interact and collaborate appropriately within the delivery pipeline is crucial.
We are already seeing closer integration between established orchestration tools and cloud services, and in many cases the cloud providers themselves are now providing native tooling such as AWS CodeDeploy and AWS CodePipeline. These native tools will continue to improve and develop within the market.
However, currently most of the automated test frameworks are focused on traditional application code. Over the next few years we will see more services and solutions (perhaps even from the cloud providers themselves) in this area, as it is far cheaper and faster to capture errors in the code before it is deployed.
With a horizon full of serverless architectures and wider utilisation of cloud-native services heading our way, we are just seeing the start of future possibilities. The biggest wins in terms of cost and scalability come from using these cloud-native services rather than standing up traditional application architecture on cloud-hosted VMs. Businesses need to push aside the fears they may have – the experts are out there to introduce the benefits of the cloud, and the full potential is there to be exploited.
Prithvi Shergil, CHRO at HCL Technologies, examines the benefits of digitalisation and how HR teams can adopt technology to revolutionize talent acquisition and employee experience.
In recent years HR as a function has had to change rapidly. With ‘jobs for life’ becoming a thing of the past, HR has evolved from performing a simple personnel function to being mission critical within a business. HR is now responsible for making sure that the need for talent is defined, and that the best talent is discovered, deployed, and most importantly, developed by an organisation. Finding and engaging talented employees at speed is a challenge, so HR is increasingly reinventing its processes for engaging with new and existing staff. This is particularly important as millennials become the dominant force in the workplace, with Generation Y estimated to make up 75 per cent of the workforce by 2025.
Digitalisation is already a key strategy for organisations across the globe. For example, the UK Government aims to make the NHS ‘paperless’ by 2018, increasing the transparency and efficiency of healthcare records and processes. Using new technologies as a way of working has become embedded in many company cultures. Internet-based phone systems, cloud storage solutions and online collaborative working platforms are now embedded into modern office life. Organisations have also begun to utilise digital systems as a way of improving the relationship between themselves and their customers, exemplified by the implementation of digital systems in call centres to monitor caller responses. As technology becomes increasingly significant to businesses, both in terms of efficiency and customer satisfaction, it is important that HR professionals understand how to apply learnings from these experiences and do not fall behind.
That being said, technology is already changing the employee life cycle, from recruitment and training to employee engagement. The use of digital systems in recruitment is not new, as online job boards and candidate tracking systems have been around for the best part of two decades, but with scarce talent and a deluge of applicants for every role, analytical insight for filtering candidates has never been more useful. The gamification of recruitment processes has also served to increase predictability in converting candidates from offer to joiner, using games to engage them whilst assessing whether they are the right fit for an organisation’s culture and work-style. Twitter has also emerged as a great resource for sharply identifying tech-savvy candidates. I discovered this first hand when we ran the world’s first Twitter recruitment campaign, #CoolestInterviewEver, engaging 250,000 prospective employees from over 60 countries. It also cost less than the average cost of recruiting one employee!
However, it is not enough just to use technology in creative ways to source candidates; it needs to be embedded in company culture. Digital strategies can help your staff to progress and learn; they can also empower employees and encourage retention. This is especially the case with millennials: a PwC study showed 52 per cent of them seek learning opportunities for themselves. Systems that host training resources and documents, and alert relevant individuals and teams to participate in training, accelerate individuals’ performance in a specific role. HR can use the professional networks created within organizations to credibly share with employees the value and relevance of courses that have to be completed, especially if they are a necessary step to progression.
Finally, digital can be used to engage existing employees through internal social collaboration platforms. Platforms such as our employee-anchored social media platform MEME encourage free-flowing communication across roles, seniority and geographies, while remaining inherently more secure and private than public networks. Platforms like this are becoming much more relevant as companies open offices in many different markets and cities. Additionally, as more and more employees work from home or remotely, networks like MEME help these individuals still feel connected with their colleagues. Internal social networks mean everyone can be kept instantly up to date with all relevant news without ever needing to step inside the office.
It should be noted that while the surge of digital has clearly led to a range of opportunities for HR teams, there are a number of risks involved. For example, companies need to ensure they have high levels of data security to prevent employees’ personal information being compromised by hackers. There is also the broader issue of technology taking away the human touch which makes ‘Human’ Resources what it is. If organisations go too far in their digitalisation, it’s possible for colleagues to feel isolated from each other as screens get in the way of meaningful interactions. Issues like this suggest that digital can be something of a double-edged sword, which could explain caution in the uptake by HR teams. Nevertheless, it is clear that digital can support virtually every element of a modern HR operation across the entire employee lifecycle, and so all teams need to develop a digital programme or risk being left behind by more forward-thinking companies. Look out for my next blog on the employee lifecycle, coming soon.
Joanne Godfrey, Director of Communications and Strategy at AlgoSec looks at the problems caused by organizations’ mostly manual security processes, why automation can solve them, and drive organizational change.
Security management has gotten out of hand, according to our recent State of Automation in Security Report. 48% of survey respondents had an application outage as a result of a misconfigured security device, 42% experienced a network outage, while 20% suffered a security breach. And on average, these issues took up to three hours to fix, while 20% of organizations needed a day or more to fix the problem.
Security teams have to take back control: keeping the bad guys out while keeping business applications running smoothly and securely, all day, every day. Yet currently these skilled (and usually highly paid) security staff are spending their precious time mostly ‘keeping the lights on’ – manually maintaining existing systems, sifting through countless security alerts, making device configuration changes – while often inadvertently causing outages and creating security holes.
So it’s no surprise that over 83% of our survey’s respondents believe the use of automation in security needs to increase, and over 80% believe automation will enhance the overall security posture of their organizations.
Today, however, just 15% of survey respondents said their security processes were highly automated, over 52% had some, but not enough, automation, and 33% said they had little to no automation.
So what are the biggest inhibitors to increasing automation in security operations? Overall the survey found that the top three barriers were concerns about accuracy and false positives; difficulties in driving organisational change; and lack of time to implement automation solutions.
The survey, however, uncovered a marked difference in the perceptions of C-level executives and their IT and security teams – the ones on the front line doing the actual work. Differing from the overall trend, C-level execs harboured concerns and misconceptions that there was a lack of suitable automation tools, and that implementing these types of tools would cause business disruption.
These differing perceptions show that senior executives probably aren’t informed enough about the automation solutions available to help address the security problems their organizations face, while front-line security staff are too concerned about potential errors and distractions from their day-to-day work to put forward a strong case for automation.
The good news is that, at the very least, the survey’s findings showed that all respondents agreed that automating security processes would deliver far-reaching benefits that go beyond security on its own. 75% believe automation will improve application availability by reducing outages; and 75% believe it will eliminate mistakes that create access points for hackers. Three-quarters of respondents believe it will reduce errors and help process security policy changes faster, and the same percentage believe it will reduce audit preparation time and improve compliance.
But, if the full benefits of automating security process are to be realized, there clearly needs to be better communication between those handling the day-to-day IT network and security operations and their senior management - to get everyone on the same page about the value, benefits and capabilities as well as the limitations of automation. Once that happens, automation should be driven from the top in order to alleviate concerns surrounding accuracy, organizational processes and business disruption.
With enterprise networks continually evolving, thanks in part to business transformation initiatives such as cloud and SDN; with cyber threats becoming ever more sophisticated; and with businesses increasingly subject to demanding compliance standards, it’s clear that automation of security processes is no longer a nice-to-have but a necessity in order to manage security at the speed of business.
With technology becoming more and more adept at meeting businesses’ strategic goals at ever faster speeds, the great deal of digital disruption we are experiencing was always inevitable. Younger, more agile businesses with digital at their core now have an abundance of tools available to them with which they can gain ground on enterprises, forcing those enterprises to react with greater speed to innovate and stay ahead. In this high-pressure environment of running fast to stand still, frequent Proofs of Concept (PoCs) have become a fact of life for those looking to get ahead.
By Sean Jackson, CMO at EXASOL.
A PoC is used to demonstrate the application of ideas into real world practice. In business terms, they are used to prove that there is a solution to a given problem, or there is an opportunity that can be exploited. While they are by no means essential for every technology decision within an organisation – in large part because of the cost and time investment required – conducting them for fundamental or large scale changes to how a company operates opens up the possibility to reap significant returns.
PoCs can be invaluable in providing information that can mitigate the chance of failure in a future project, while also being useful when refining the scope of a project, enabling a more well-rounded understanding of what is practical, rather than jumping in blindly and adapting on the fly. Ultimately PoCs can give businesses the confidence needed to progress.
This article will explore the best ways to approach PoCs without risking disruption or security breaches, and explain how to test the technology you are looking to implement.
A Proof of Concept is a means of bringing a vendor’s product into your business to make sure it works in your environment, functions in the way it is being sold to you, and works natively within your infrastructure. It is an incredibly important step in purchasing and implementing new technology. You wouldn’t buy a new car without a test drive. You need to try before you buy, in your own environment.
Running a PoC can be as difficult or as easy as you make it, but it needs to be managed well to be effective in determining the future technology that meets your needs.
Before starting a PoC, you need clear objectives and key questions determined ahead of time; its success depends on having these tied down. Throughout the process, refer back constantly to these objectives and the questions you are looking to answer. Don’t be hoodwinked into getting something to work for the sake of it: the longer a PoC runs, the more political capital is staked on its success as the costs begin to mount up. IT directors need to guard against these dangers by putting in place the right governance structure with strong project management, to keep the team focused on the right questions, in the right timeframe.
So how best to make it a success? The Proof of Concept should take a relatively short space of time and require only a modest investment. It should represent a fair test of the system but not an exhaustive one. It should be enough to prove the system can work and answer key questions about how to design and deploy it, as well as how to configure it to make it work for the business. Above all, it should attempt to show whether the product can deliver all that it is expected to.
The PoC should be built in a self-contained environment that is kept separate from the production environment. Using existing production services will most likely add more complexity and potentially increase the risk to live services. Moreover, room to manoeuvre would be constrained by change control, which goes against the idea of a Proof of Concept.
On your own kit versus the cloud:
Many businesses opt for running a Proof of Concept on premise, using existing hardware or buying in hardware for the test. However, teams often have to wait for hardware resources to become available, as well as for IT support to setup or configure the environments for the PoC process. To keep costs down, or to save time, many are turning to the cloud.
The cloud is a good solution for many setups due to being able to spin up a number of servers for the relatively short period of testing. The cloud can be set up as a self-contained environment outside of existing infrastructure without affecting production servers. However, it relies on a good internet connection and if a large amount of data needs to be loaded into a cloud solution, this can present difficulties. Additionally, it requires cloud administration skills, which may be lacking in some organisations.
A Third Way
A third way is for the vendor to provide the hardware to test out their solution for the duration of the project. This is nothing new: the concept of wheeling in large server appliances on trolleys, or equivalent, for the purposes of testing has been around for decades. However, with modern technology, it no longer requires such a mass of hardware.
Our customer and partner, Atheon Analytics, runs its Proof of Concepts using the EXASOL database on a tiny Intel NUC, a server so small that it fits in a pocket, yet is still capable of running large analytical queries. Atheon Analytics works with some of the UK’s largest retailers and suppliers and has used the device to run PoCs at client locations.
The perfect PoC
To find the best solutions for your business, there is no doubt you need to try things out. There may be no such thing as a perfect PoC, but as we have seen, there are plenty of options for successfully running a PoC.
Stuart Hardy writes in his article for South African IT magazine IT Web that MPLS is dead because users are accessing their business applications from some kind of cloud infrastructure, or from a SaaS platform. He predicts that software-defined WANs are going to change this scenario. The article also claims that you could use a pair of DSL lines – which are increasing in speed from 50 to 100 Mb/s – or 4G and 5G connectivity. The argument for this approach is that it will drive down the cost of your WAN and allow you to do more with your branch and remote offices. But will it improve cost efficiencies? As you’ll discover in this article, the answer is that it may, or it may not.
By Graham Jarvis, PR and editorial consultant.
A spokesperson for Tim Naramore, CTO of Masergy, says it’s Naramore’s view that software-defined Wide Area Network (SD-WAN) technologies aren’t simple solutions, and “despite the growing interest there are a bunch of downsides to the technology – namely that it's expensive and invariably companies end up being locked in to one single vendor and there aren't any interoperability standards.”
The spokesperson adds: “As I’m sure you know, people want to use broadband internet to connect branch offices and to help take advantage of connecting up offices globally, but the internet has inherent latencies so packet loss is common, and yet most SD-WAN solutions can rectify this problem to allow companies to take advantage of the broadband internet.” For this reason SD-WAN is said to be increasingly popular at a point when companies are adopting more cloud services and SaaS applications, in spite of many firms’ previous cautiousness about cloud technologies.
In this article Hardy talks about the status quo of networking being affected by cloud computing, which he claims has been a game-changer. David Trossell, CEO and CTO of data acceleration company Bridgeworks, says it never ceases to amuse him “that when a new technology comes along there is a prediction that all the preceding technologies are dead, and we have heard it so many times with the paperless office in spite of the fact that we now consume more paper than ever.”
He adds that there is a view that tape for data back-up and storage is dead: “Yet even though many now use other technologies, tape is still the most cost effective, secure, green way to store large amounts of data, and the amount of capacity sold each year is not declining at all.” In essence, this means that the soothsayers’ view of the imminent or future death of a technology has to be taken with a pinch of salt. He says there are nevertheless some technologies that change everything, and he agrees that the cloud is certainly one of them.
“Whilst the cloud had a slow start, the momentum behind it is now growing and the larger enterprises can no longer ignore it because it changes the way we provision and consume computing power, the way we approach the storage of data, the way we develop and deliver applications, and it offers near infinite scalability”, he explains. One thing hasn’t changed though in his opinion; the need to move ever increasing amounts of data, but the cloud has changed how we access, analyse and consume the data.
In the past there were few choices: leased lines, which were expensive and suffered from latency and packet loss; broadband, which Trossell says comes with unpredictably large latencies, packet loss and congestion; or MPLS, which has been the technology of choice for many organisations for quite a while because it is cheaper than having dedicated links – especially, claims Trossell, when you have multiple sites – while offering lower latency and reduced packet loss. In fact there are many applications residing in the cloud that use MPLS. The problem is that in a cloud-dominated world MPLS is, as Hardy claims, “too complex, expensive and inflexible.” Organisations are therefore looking around for other solutions that can meet their needs.
Trossell then asks: “So where does this leave us?” He says that many organisations are looking to SD-WANs as a possible solution. He points out that Hardy believes that the MPLS internet cloud problem is resolved by using SD-WAN solutions from providers that have or subcontract virtual private networks (VPNs) with local points of presence around the world. In this case Trossell says the client only needs to be concerned with the local connection to the POP.
“However, many of the latency mitigating techniques used by these providers to provide the illusion of increasing performance consist of local caches and dedupe technology in the local POP”, he reveals. The trouble is that many organisations are now discovering that this doesn’t play well with the changing requirements of data applications, data types and data volumes. There needs to be some form of WAN optimisation.
“Many cloud and internet applications are now aware of the need to provide some form of WAN optimisation and include compression or dedupe functions within the product, but this nullifies any further possible performance gains by using these traditional techniques at the POP”, claims Trossell. These techniques are also rendered ineffective by the increasing use of image and video data as part of many applications. They are troublesome because these data streams are pre-compressed and ever growing in size. So traditional data acceleration techniques aren’t working in the POP, and the problems are compounded by the sheer volume of the data sets they produce and by the low bandwidths used to send and receive them over a WAN.
“We find that the SD-WAN providers’ support for traditional data flows is now inadequate for today’s high-speed, high-volume transfers, and the SD-WAN world is much larger than these providers”, says Trossell, who asks why so many companies are turning to SD-WAN and away from MPLS. In response he argues that “SD-WANs have the ability to bring into play multiple differing transports and this involves combining them under one virtual WAN to address different WAN requirements”.
He also suggests that this can involve a mixture of leased lines, MPLS, broadband and LTE 4G and 5G connections. For example, he suggests that “voice-over-IP (VoIP) applications may need to use the MPLS or leased lines, while internet access may use the broadband connectivity with the 4G acting as capacity on demand or as a failover capability.”
However, there is no time to sit comfortably because many organisations will still have to put up with limited network performance – just like with traditional WAN technologies. In an attempt to tackle this issue, the most common knee-jerk reaction is to add further bandwidth. “In most cases this will result in little or no improvement in performance and the one thing SD-WANs do not solve is the problem of latency and its friend packet loss – both of which have a devastating effect on performance”, explains Trossell. He adds that “despite what you hear MPLS still suffers from latency and packet loss up to 2%.” So there is a need for a new approach to optimise bandwidth.
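Trossell’s point that extra bandwidth doesn’t help can be illustrated with the well-known Mathis approximation, which caps a single TCP flow’s throughput at roughly MSS / (RTT × √p) regardless of link capacity. The sketch below uses illustrative numbers (a 1,460-byte segment, an 80 ms round trip) rather than figures from the article:

```python
from math import sqrt

def tcp_max_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Mathis approximation: one TCP flow's throughput is capped at
    about MSS / (RTT * sqrt(p)), independent of the link's bandwidth."""
    rtt_s = rtt_ms / 1000.0
    bits_per_s = (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))
    return bits_per_s / 1e6

# An 80 ms round trip with 1% packet loss caps a flow at ~1.5 Mb/s,
# no matter how fat the pipe is:
print(round(tcp_max_throughput_mbps(1460, 80, 0.01), 2))  # 1.46
# At the 2% loss quoted for MPLS above, the ceiling drops further:
print(round(tcp_max_throughput_mbps(1460, 80, 0.02), 2))  # 1.03
```

Because loss and round-trip time sit in the denominator, buying more bandwidth leaves the ceiling untouched; only reducing latency or packet loss (or running many parallel flows) raises it.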
“As we have discussed before the traditional methods of optimising bandwidth using compression and deduplication no longer work on modern data sets and tend to limit themselves to throughputs of 1Gb/s and below, and so you have to employ other technologies to resolve these issues”, Trossell comments. Here, he offers his top tips for mitigating latency in order to change the status quo:
With these considerations in mind, it is arguable that SD-WANs won’t necessarily change the status quo of networking because, on their own, they can’t address the network performance issues affecting many organisations today. In fact, at a Financial Services Club event on 6th September 2016 at RISE London, about the challenges facing blockchain technologies, it was very clear that not many people quite understood today’s data authentication, data volume and networking challenges. Nevertheless the speaker, Leanne Kemp, Founder and CEO of Everledger, did, in Trossell’s opinion. So it’s important to avoid falling into the trap of assuming that SD-WANs will solve everything. Hardy writes a very good article, but SD-WANs don’t necessarily offer reduced latency and zero packet loss.
The retail sector has been a particularly tough sector to be in during the past seven years. The recession and ensuing slowdown hit the high street hard as shoppers reined in their spending. Looking to reduce operational costs, fashion retailer Anya Hindmarch decided to review its IT infrastructure, looking to make savings through improvements in performance and efficiency rather than removal of services. Long-standing IT partner Conosco recommended the retailer migrate to a hybrid cloud infrastructure in order to achieve its goals.
Anya Hindmarch is a fashion design company and a global brand that produces luxury handbags and accessories. It has over 50 stores worldwide, including flagship stores on Sloane Street and Bond Street in London, Madison Avenue in New York and Aoyama in Tokyo. There is also a dedicated Anya Hindmarch bespoke store (which personalises products for customers) in London. In total it has a staff of 165.
Over nearly 30 years of business, much of Anya Hindmarch’s expansion has been through small shops and concessions, using IT to deliver connectivity via a small number of point-of-sale (PoS) devices and a customer relationship system. This required reliable internet connectivity, but with many locations offering limited internet service, and the size of the outlets dictating the budget available for implementation, it needed to be creative with any solution.
Dan Orteu, operations director for Anya Hindmarch takes up the story. “With the recession in full swing we needed to review all aspects of our business to identify where we could make savings and increase efficiency. IT was one area we felt could deliver improvements and we identified four key challenges.”
The challenges were identified as:
Over the years, Anya Hindmarch has developed a real strategic partnership with IT supplier Conosco, a relationship now in its 14th year. Anya Hindmarch has a slightly unusual relationship with Conosco in that it has its own highly capable IT director. Conosco works very closely with Orteu to add their expertise to the mix, supporting Orteu’s vision with IT services to remove the distractions of day-to-day IT management. Knowing the retailer’s business inside out, in 2015 and as part of the continuing assessments of its services, Conosco recommended Anya Hindmarch migrate to a hybrid cloud infrastructure.
The proposed hybrid cloud solution would allow Conosco to deliver ‘Templated IT’ for stores, intelligent cost control and tight operational integration using on-site staff. The implementation involved migrating all production servers to the VMware vCloud Air platform while retaining on-premises servers to provide print, image deployment and Windows domain services.
“Conosco’s recommendations gave us the ability to address the performance needs of the thriving business, eliminate the traditional risks associated with having on-premises core server infrastructure and create a consistent, global, multi-national platform including the new head office plus 20 other stores, all Japanese based,” said Orteu.
As part of the vCloud project, Conosco also undertook some ground-breaking work, provisioning a ‘first-of-its-kind’ interconnect between the vCloud Air platform and Anya Hindmarch's MPLS network, creating a direct, secure connection between all sites and the hosted infrastructure without routing traffic over the public internet.
Additionally, all servers on the vCloud Air platform have backups which have been replicated to Amazon Web Services (AWS). This is done through a new StorageCraft ShadowProtect (backup) server on the vCloud Air platform. So in the unlikely event that something were to go wrong with the VCloud Air platform or the backup server on the platform, Anya Hindmarch’s data can be restored from another cloud provider.
As a result of the migration to vCloud Air, Anya Hindmarch has seen noticeable improvements in day-to-day performance, all of which impact the company’s efficiency and productivity, and ultimately, the bottom line. And because servers are now running on SSD storage, access speeds have been increased, allowing for a more productive work environment.
Orteu said, “We are delighted how smoothly the implementation went and I have found the hybrid cloud allows for easier migration of workloads back and forth between public and private clouds, combining the flexibility and efficiency of public cloud with the security and control of the on-premise data centre. We are now able to store large volumes of unstructured data in a way that is scalable, accessible (with instant, self-service access), durable, secure and affordable.”
Anya Hindmarch has found the implementation has delivered the important business benefits and lists them as:
With the implementation considered a success, and maintaining its ‘flexible’ relationship, Conosco supplies an account manager who now works from Anya’s head office one day a week, immersed in daily operations to provide on-site IT consulting. His focus is adapting Anya Hindmarch’s core IT systems to support the ever evolving and growing business demands. In addition, he co-ordinates third party technical providers, including voice communications, data connections, CCTV, in-store music systems and point of sales systems across the globe.
A delighted Dan Orteu said, "As Operations and IT Director at Anya Hindmarch, I've worked closely with the team at Conosco for over 10 years. As the business continues to grow rapidly both in the UK and abroad, it has become ever more important that we have a truly scalable and robust IT infrastructure. The migration of all our Head Office servers to vCloud has made this possible. It has allowed us to streamline internal processes and improve file sharing and communications across our global estate of stores, warehouses and head offices. It has also reduced the costs of our off-site backups and allowed us to put in place very effective business continuity and disaster recovery plans."
Those managing enterprise networks often face significant pressure from the wider business, particularly within large organisations and data centres that are determined to improve the user experience and increase operational efficiency.
By Paul Arts, Head of Marketing, Networking, Dell EMC EMEA.
Legacy networks are often the root of common issues when managing networking operations. These challenges include increased security concerns as outdated architectures and programming increase the risk of attack, while failing performance due to ageing hardware creates limitations in terms of management options. Historically, the lack of innovative new technologies available on the market has also led to businesses continuing with sub-optimal network performance. This has resulted in an environment ripe for disruption from open standards based networking, where hardware and software are decoupled and the use of open source software provides additional flexibility. Whilst many businesses are exploring these options and beginning to address their networking issues, the adoption of software defined networking (SDN) and virtualization to enable smarter, faster networks is still very slow.
The initial signs of disruption within the enterprise networking sector are now well established, as businesses are visibly switching to automated processing, but this may not be enough. While larger organisations are leading the way, many smaller enterprises are struggling to implement these new technologies, with the cost of available networking solutions regularly cited as one of the key barriers.
In addition, according to IDC and Dell’s recent Networks That Deliver Change whitepaper, management in smaller companies tends to regard technology expenditures simply as a cost centre and often fails to identify the strategic, enabling role of the IT department. This has a pronounced effect on the adoption of new technologies, with up to 34 percent* of business leaders waiting for solutions to be proven in the market before embarking upon an implementation of their own. As a result – despite virtualization having firmly established itself in mainstream business applications – the networking side can be seen to be lagging behind core operations.
This is not the only problem however, as IDC has identified a potentially greater issue surrounding the lack of understanding and experience in implementing and managing SDN within enterprise environments. The study revealed that of the responding organisations, only a fifth* had practical experience with SDN and of those, only one percent actually use the platform extensively. This reflects part of a broader industry issue, where organisations lack enough skilled and certified staff to manage modern network infrastructures, creating additional barriers to adopting new technologies. What this means is that businesses are in considerable danger of missing out on the strategic benefits offered by emerging networking solutions. This can, in turn, prevent the realisation of significant increases to the bottom line performance and therefore exposes a potential weakness in today’s competitive marketplace.
There is still hope, as many businesses can compensate for having fewer qualified virtualization engineers by sharing this limited skill set across the divisions of their IT team, spanning the server, storage and networking teams. The data from the study indicated that a significant proportion of businesses keep their networking teams separate from their server and storage teams – where much of the existing experience in virtualization implementation and management lies. By bringing the networking and wider operations teams together, it is possible to increase the overall expertise and management capability of each department, compensating for the lack of certification within the networking team specifically. Whilst businesses themselves can resolve this by moving away from a “siloed” approach, there remains an additional stumbling block relating to the use of fragmented management tools and independent operational practices, as outlined in the IDC whitepaper.
According to the data only three percent of participants indicated that they used integrated end-to-end service management suites for IT infrastructure, DC networks and campus networks. This hinders the realisation of automation within networking management and creates additional obstacles for network engineers and IT professionals.
This is where vendors must begin to take responsibility. Only by working alongside partners and customers can they bolster the capabilities of enterprise networks. By providing advanced services and encouraging integrated systems management, vendors can accelerate the adoption of advanced networking technologies, allowing for increased automation and the establishment of future ready networking infrastructure.
The merging of the organic and digital worlds is undoubtedly gathering pace. As humans, we ourselves are now huge consumers of technology, which has led to patience being a virtue for very few.
By Steve Watts, Co-Founder, SecurEnvoy.
However, this patience is being tested on a daily basis. Just one example is the login process we have all become accustomed to. The average number of characters a staffer needs to tap in before gaining access to the business applications they want is often as many as 30; in worst-case scenarios, where they are forced to use an email address as a login, the count can exceed 40.
Not only can legacy authentication methods such as the above – which rely squarely on username and password entry – be circumvented by stealing the login information and using it to impersonate the user, but the time they take is causing frustration for the user. So, what can forward-thinking, agile businesses do to keep up with this paradigm shift in expectations and keep their staff happy, whilst keeping their systems secure?
Biometric authentication is one area we are seeing increasingly often. Until recently, most mainstream examples of biometric recognition as an authentication method have been based on fingerprint, palm, iris, facial or voice recognition. However, biometrics is now being taken a step further, with some financial institutions trialling the use of a person’s walking gait to identify them and offer them relevant services as they walk into the branch.
The organic and digital worlds have been further, perhaps ineradicably, linked since the launch of the Apple Watch, which turns itself off when you remove it from your wrist because it can no longer read your heartbeat. There have also been several recent trials of NFC-enabled chips implanted under the skin, so that a simple swipe of the hand can be used for anything from contactless payments to checking into an airport lounge[1].
In the consumer realm, buoyed by the success of the likes of Apple Pay, contactless payments have become increasingly popular. In fact, research from the UK Cards Association suggests that consumers are now making over 60 contactless transactions a second[2]. To help facilitate this trend, all of the 4,000 point of sale (POS) terminals at the upcoming Rio Olympics have been made contactless-enabled, letting fans swipe, tap, dip or click to pay at all of the venues and fan parks[3].
Businesses are now looking to bring this contactless payment user experience into the business realm through technology such as near field communication (NFC). The next logical progression is surely to use the now-familiar action of touching one device against another (think of your tablet or laptop) to verify and pay for your shopping order instantly.
The bring your own device (BYOD) trend of recent years has become indelibly implanted in the modern office environment. Because of this, it is important that businesses are able not only to protect the various endpoints but to make them a productive part of the technological ecosystem. Businesses want to be able to tell their staff that they can choose whatever device they want to use, confident in the knowledge it won’t compromise security.
However, because of the propensity for staff to change their device of choice every couple of years, it is important that lifecycle management is built into any security strategy (i.e. the easy set up and deletion of files and apps).
Surely, it is now time to give staff the Apple Pay experience in the business realm, with enterprise authentication via two-factor authentication (2FA). 2FA requires not only a password and username but an additional component, which could be something the user knows, something the user possesses or something that is inseparable from the user. It is now incredibly popular and has become the system favoured by seven of the ten largest social networking sites[4] (including Facebook, Twitter and LinkedIn) as their authentication method of choice.
Amazon has recently taken the ‘something that is inseparable from the user’ component of 2FA a step further and has filed a patent to use photos or videos of a user's face as a way to approve their online purchases, meaning the Seattle-based e-commerce giant could soon let you purchase products by taking a selfie.
Whatever the additional component may be, the delivery mechanism for the authentication of tomorrow is almost certainly going to be the mobile device found in all our pockets. Now, with 2FA technology such as ours, push notifications pop up on your phone (as we are all used to with social media apps such as Facebook) and push the authentication back (i.e. automatically send back the authentication code). Further, even if you are offline, after a few seconds without a response from the device the system will text a six-digit code that the user can type into the login screen of the app to prove they are who they say they are.
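The push-first, SMS-fallback flow described above can be sketched roughly as follows. This is a minimal illustration only: the gateway functions, timings and identifiers are hypothetical, not SecurEnvoy’s actual implementation, and the phone number is a fictional one.

```python
import random
import time

def send_push(device_id):
    """Hypothetical push-gateway call; returns True if the device
    acknowledged the authentication request. Always False here,
    simulating an offline device."""
    return False

def send_sms_code(phone_number):
    """Generate a six-digit fallback code; a real system would
    dispatch it via an SMS gateway."""
    return f"{random.randint(0, 999999):06d}"

def authenticate(device_id, phone_number, timeout_seconds=1.0):
    """Try push authentication first; after a short wait with no
    response, fall back to texting a code the user types in."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if send_push(device_id):
            return ("push", None)   # device approved automatically
        time.sleep(0.2)             # poll until the deadline passes
    return ("sms", send_sms_code(phone_number))

method, code = authenticate("device-42", "+441632960000")
print(method)  # "sms", because the simulated device never responds
```

The key design point is that the user only ever sees the fallback when the push channel is silent; in the happy path, authentication is a single tap.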
With 2FA combined with NFC connectivity, authentication can be reduced to just a couple of taps, not the thirty or more characters required by the legacy authentication methods disliked by employees, easing user frustration whilst boosting security credentials.
2 http://www.fstech.co.uk/fst/UKCA_Card_Payments_February_2016.php
4 http://www.ebizmba.com/articles/social-networking-websites
James Munson, the director of digital services and technology at the Driver and Vehicle Standards Agency, looks at how the DVSA has modernised the MOT service, changed how it works with suppliers and outsourcing agreements, uses agile software development, and the important role of the technical service desk operated by BJSS.
The DVSA, which focuses on improving road safety in Britain by setting and enforcing the standards for vehicles, driving and motorcycling, is transforming its technology landscape from primarily outsourced contracts to in-sourced agile delivered services using a combination of vendors and employees. As part of this, the agency has deployed a technology service desk. Known as the Technical Support Service, it supports the DVSA’s new MOT software application currently in use by thousands of MOT testers around the UK delivering around 150,000 MOT tests daily. Here James Munson, Director of Digital Services and Technology for DVSA, explains the modernisation journey and why it was vital to support the agency’s strategic direction.
Our previous MOT solution was a fully outsourced, 10-year contract, which started in 2005 and included the mainframe backend, all software and dedicated garage hardware provided by the supplier. Flexibility was limited – it was still a dial-up service in 2015 with only one garage refresh of hardware over 10 years, and no mobile support. As these types of legacy contracts come to an end, DVSA is sourcing in a different way: on shorter contracts for differing elements of our technology, with running, improvement and control of the service remaining with the agency.
In the period up to the end of the outsource contract the agency used an agile model to develop the new digital service – starting with a minimum viable product release and then delivering constant, regular improvements to the software based on feedback from users. With regular reviews and approval, we have been able to introduce more flexibility and a quicker pace of change to the software we provide to MOT testers. With 80,000 users and almost 23,000 garages using the service, it was important we also gave them the freedom to access the software from whatever device or computer suited them best.
The service handles 42 million MOT tests and supports over a billion pounds of garage transactions annually. After an independent assessment of cloud technology providers, Amazon Web Services (AWS) was selected as the right technology partner for the MOT service. The service was the first national government service to be hosted at scale on AWS. The production environment was built in 10 weeks and cut over in a single weekend. It was higher risk than I would have liked, but necessary to ensure that the service was on the right path for the next phase of the journey.
Our outsource model has been replaced by shorter, more flexible contracts with multiple suppliers. We’ve utilised the government digital marketplace to access vendors. We needed a cohesive single team to support our technology projects and collaboration was initially a challenge, but the multi-vendor, blended delivery team is now working together effectively.
Working with the variable scope that agile requires means we need to measure vendor performance in different ways, such as points burn, defects generated and quality reviews of code. This enables us to deliver maximum value without the constraint of fixed-scope delivery.
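As a rough illustration of the kind of metrics mentioned above, a points-burn figure could be derived per sprint as below. The sprint data and the formula are illustrative assumptions only; the article does not define how DVSA actually calculates these measures.

```python
# Illustrative sprint data: committed vs. completed story points,
# plus defects raised against that sprint's output.
sprints = [
    {"committed": 40, "completed": 36, "defects": 3},
    {"committed": 42, "completed": 41, "defects": 1},
]

def points_burn(sprint):
    """Share of committed story points actually delivered in a sprint."""
    return sprint["completed"] / sprint["committed"]

for s in sprints:
    print(f"burn {points_burn(s):.0%}, defects {s['defects']}")
# burn 90%, defects 3
# burn 98%, defects 1
```

Tracking burn and defect trends per vendor, rather than fixed deliverables, is what lets a multi-supplier agile team be compared on outcomes.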
As the new MOT service was launched, we also launched the service desk to support the application and its users. This Technical Support Service [TSS] is operated by BJSS, which is a delivery-focused IT consultancy with over 20 years' software delivery and IT advisory experience in private and public sectors. BJSS has been instrumental in driving dramatic cost and efficiency improvements for the UK Government.
At its peak, the TSS consisted of 75 people, although now it has a core group of 18. We also have internal approval for over 20 new people to join our digital team, but recruitment of the right people is a challenge, so the contribution of BJSS remains important.
The TSS was set up in a similar fashion to the MOT software application – starting with a minimum viable product and the service desk has continued to evolve from there. We now use the TSS for two core functions. Firstly, to provide incident and change management for MOT testers. The service, which is second line technical support and sits behind the business service desk, handles around 300 support tickets per month.
Second, the TSS manages the production environment, ensuring software updates are fit for purpose and will be successful. The TSS is a key stakeholder in the process and has the authority to challenge a release if it does not meet the required criteria, taking over acceptance testing and performance testing once the development team has delivered a release candidate. New software updates are typically released on a Wednesday night – there have been over 180 releases since September 2015 and downtime has been kept to out-of-hours. We are developing a blue-green deployment approach that will move us to deployment without downtime in the coming months.
The agile delivery process means users are seeing improvements to the application much more regularly than they did with the old solution. We are measuring the velocity of development sprints and value throughput to end users. We have the backlog of change planned out for several months ahead, with the ability to flex it as needed by the service management team.
In the future, we’ll be scaling the TSS. Eventually other applications provided by DVSA – such as the system to support goods vehicle operator licensing or commercial vehicle testing – will be supported by the TSS as well.
Government’s aim has been to move towards more agile, shorter-term, less rigid contracts, and I think the MOT modernisation programme has been a great example of this working successfully. A recent Government Digital Service peer review concluded that the MOT programme is one of the best agile projects they have seen in flight in government to date, and an exemplar of the benefits of working collaboratively in an agile way.
Our journey continues as we modernise our other services supporting the agency’s strategic journey. We have now started a similar modernisation journey for the driver and rider services technology and commercial vehicle testing technology which have both also been outsourced for many years.
Trying to feed the world is no easy feat. The Springg IT platform is doing its part by making it possible to provide fertilizer advice to farmers in the most remote regions of the globe.
Springg (part of DutchSprouts) is a software company dedicated to helping farmers produce more crops. But how do you feed a world with a growing population? DutchSprouts is working to raise worldwide yields enough to feed everyone on earth in 2050, knowing that fertilizer advice for farmers in rural areas can increase their yields by up to 500% in some cases.
Farmers used to have to wait for weeks, even months for soil testing results to be returned. Only about 20 million farmers are able to get to laboratories to get tests, and there are 500 million farmers that need access to this information.
“In 2013, we started DutchSprouts, which Springg is part of, to help all farmers have the same access to laboratories as the western world,” says Wouter Kerkhof, CEO.
Wouter continues: “You can make the biggest change in countries outside Europe and not in the western world, but in places like the middle of Africa. We worked with the Bill and Melinda Gates Foundation to start with Kenya, and then go out to the rest of Africa.”
Dutch Sprouts’ SoilCares started by literally driving the sensors out to the areas of need. “We had a lab with sensors that we placed in the back end of trucks. Farmers bring soil samples to these trucks, we do the soil analysis, and they have fertilizer reports back within two hours,” said Wouter.
The mobile soil testing labs are very expensive – the trucks, complete with sensors, total around $100,000 each – but that sum is still far less than the millions of dollars a single physical laboratory would cost. However, the trucks will never be able to reach the hundreds of millions of farmers who need this information. To make the data more accessible, DutchSprouts is working on a handheld sensor, about 10x10 cm in size, with a target of selling it at a low price. This smartphone-enabled SoilCares scanner allows farmers to take their own soil samples and connect with the SoilCares app – just two clicks later, they get a reading of the nutrients in the soil. For all testing, each farmer pays per measurement, or per click with the handheld device. “We want to keep this technology as affordable to farmers as possible,” says Wouter.
In 2013, Springg was ready to develop a scalable architecture. Says Wouter: “We were looking for a fast and reliable solution that could grow to handle all the data we had now and expected in future.”
A partner had extolled the virtues of ESB platforms, so that was the direction in which the Springg team was looking. Initially, Springg interviewed other companies to inquire about their architectures. One of those companies in the Netherlands was a Talend customer, so Wouter and his team dug a little deeper into Talend. “We trusted their software, it looked understandable, I’m not a technical guy myself, and we got a strong feeling it would work out with Talend.”
The Talend team presented a proof of concept, Springg liked what it saw, and from there engaged Talend to begin the build-out. Among the deciding factors, Springg noted, were Talend’s large open source support community and its straightforward subscription-based pricing, able to accommodate Springg’s anticipated fast growth while keeping costs under control.
“We selected Talend and started creating the first services and we kept on building and building. It’s one of the most important items in the architecture, everything goes through the Talend system,” says Wouter.
Contact centre fraud costs businesses hundreds of millions of pounds every year, but the growing adoption of voice biometrics looks set to put an end to that. Tom Harwood, Co-Founder and Chief Product Officer at Aeriandi discusses how criminals are increasingly finding their own voices being used against them.
Financial fraud has always been big business for criminals, but the unstoppable rise of internet and telephone shopping, combined with the growing emphasis on 24/7 customer service means there have never been so many channels through which they can pursue it. From January to June 2016, one incident of financial fraud took place every 15 seconds, according to Financial Fraud Action. That is a staggering statistic, which unfortunately reflects a 53% increase on the same period in 2015.
Thankfully the industry is fighting back. Online security has made significant strides recently, with new technology such as multi-factor authentication and behavioural monitoring making it significantly harder for criminals to make off with sensitive information. However, when one door shuts, criminals typically look for another one to open and unfortunately the telephone offers just that at the moment.
When most of us call a contact centre to make a payment or query our account information, the only security steps taken by the telephone agent to verify our identity are a few simple questions, such as name, address and date of birth. If answered correctly, we are considered verified and given free rein over our account. Unfortunately, if a fraudster can answer those same questions correctly then they too will get the same access. In the age of social media, many of the answers to these security questions are freely available online, making it all too simple for criminals to get the information they need.
All of this is bad news for contact centres. It is estimated that up to 50% of all financial fraud incidents today are initiated by a phone call, making telephone agents particularly vulnerable. With the vast majority of agents lacking the training or experience to spot potentially fraudulent callers, the situation is growing increasingly serious. Because of this, businesses are starting to look at innovative new technology-based solutions that can take the human element out of the equation completely, and voice biometrics does just that.
Research shows that whilst the volume of telephone fraud incidents is on the rise, the number of actual perpetrators remains relatively static. In fact, it is estimated that as much as 95% of all fraudulent call activity worldwide is conducted by the same group of professional criminals. This has allowed authorities to build up a global database of known fraudsters and their voice signatures over time.
An effective voice biometrics solution is able to compare every caller into a contact centre against this global database, enabling known fraudsters to be swiftly identified. The best telephone fraudsters are extremely adept at social engineering and manipulation, but with a biometrics solution in place, these factors are taken out of the equation completely. The agent is subtly notified as soon as a voice match is confirmed and the caller can then be quarantined from any sensitive information. The entire biometric process is also completely transparent, meaning callers will not even know it is taking place unless an issue is raised.
Voice biometrics can also be deployed as part of a wider security solution, alongside other measures such as intelligent fraud detection, for even greater protection. Intelligent fraud detection scores every incoming call against a series of key risk factors such as audio characteristics, geo-location and phone number reputation to create an overall risk score. The agent receives an on-screen notification displaying the call’s score, along with custom instructions for how to further authenticate the call as necessary. This complements voice biometrics very well by providing additional protection against fraudulent callers who may not be part of the global voice database yet.
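The risk-scoring idea described above can be sketched as a simple weighted combination. The weights and factor values here are hypothetical: the article lists the kinds of signals used (audio characteristics, geo-location, number reputation) but not the actual scoring model.

```python
# Hypothetical weights; a real system would tune these empirically.
RISK_WEIGHTS = {
    "audio_characteristics": 0.40,  # e.g. VoIP artefacts, replay markers
    "geo_location": 0.35,           # call origin vs. the customer's region
    "number_reputation": 0.25,      # prior fraud reports for the number
}

def risk_score(factors):
    """Combine per-factor scores (0.0 safe .. 1.0 risky) into a single
    weighted score between 0 and 100 for the agent's screen."""
    total = sum(RISK_WEIGHTS[name] * value for name, value in factors.items())
    return round(total * 100)

call = {
    "audio_characteristics": 0.9,  # strong spoofing indicators
    "geo_location": 0.6,           # unexpected but plausible origin
    "number_reputation": 1.0,      # number previously flagged
}
print(risk_score(call))  # 82 -> high risk, agent is notified on screen
```

Pairing a score like this with a biometric voice match gives two independent signals: one about the call, one about the caller.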
Telephone security in many businesses still lags dangerously behind internet security, but the introduction of innovative new solutions such as voice biometrics is helping to redress the balance. One of the most powerful weapons in a fraudster’s arsenal is the ability to hide behind anonymity and manipulate telephone agents using social engineering. Voice biometrics eliminates this from the equation, providing significantly higher levels of protection to agents and ensuring that criminals have nowhere to hide.
The days when all business needs could be met with a single app are behind us. Enterprise mobility has undergone a huge transformation, and IT departments must evolve with it as the focus continues to shift, and new application landscapes emerge.
By Florian Bienvenu, Senior Vice President, EMEA Enterprise Sales, BlackBerry.
As these landscapes continue to grow, there are several key changes to explore. For example, legacy mobile solutions were primarily built to fit the needs of IT, and one-size-fits-all apps were the norm. This must change, as modern mobility is about much more than just the user experience and how employees interact with these applications. Instead of asking how to consolidate everything into one app, we must bring solutions together to encourage productivity.
The way employees use mobile devices has also changed. People are moving towards broader and more collaborative workflows; they want their email and mobile messaging to work together, ensuring their documents shared via email are accessible across multiple devices.
Another big change is employee workflows. These now incorporate multiple mobile apps, and it’s rare for any of these to be from the same source. Without integration, this leads to unnecessary complexity and a disjointed user experience. Enterprises have components potentially coming from multiple vendors, ISVs, and internal developers, and the modern enterprise demands a very different application architecture.
It’s also important to remember that, just five years ago, a large mobile deployment might have consisted of a few thousand users or devices. Today, as enterprises embrace mobility, we’re starting to see deployments in the tens and hundreds of thousands. In the past, enterprises would attempt to shoehorn mobility into apps that simply weren’t designed for it. The process is a lot smoother now, which helps greatly with increasingly large deployments.
In the quest to capitalise on mobility, enterprises cannot forget about ensuring data security and end-user privacy. Anyone who wishes to develop enterprise applications must prioritise security, as failure to do so means that the app fails. This not only puts companies at risk, but also their customers.
There are several skills that are needed to address application-level enterprise security and user privacy needs. However, they are tough to come by, and differ from those required to build engaging mobile apps. Additionally, device-focused solutions such as mobile device management (MDM), an initial choice for companies looking to manage enterprise mobility, are not architected for users outside enterprise boundaries. If you’re not on the corporate directory, your device and its data won’t be under corporate control.
So, how can you scale mobility beyond the traditional business to an enterprise audience? A platform-based approach is one option, providing businesses with the flexibility to deploy a range of technologies for mobility management in the enterprise and beyond. This approach offers several key advantages over traditional application development:
By making these capabilities available as services via widely accepted interfaces (e.g., RESTful APIs), businesses enable developers to quickly create enterprise-ready applications. Known as mobile back-end as a service (MBaaS), this approach is essential to platform-based development. It abstracts out complex server-side programming, allowing developers to instead dedicate their time to innovation, front-end development, and a better overall experience.
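A minimal in-process sketch of the MBaaS idea follows: back-end capabilities exposed behind REST-style routes so front-end developers never touch server internals. The routes, payloads and service names are invented purely for illustration and are not any vendor’s actual API.

```python
# Hypothetical service table mapping (method, path) to a back-end handler.
SERVICES = {
    ("GET", "/users/me"): lambda body: {"id": "u1", "name": "Alice"},
    ("POST", "/push"): lambda body: {"queued": True, "to": body["device"]},
}

def call(method, path, body=None):
    """Dispatch a REST-style request to the matching back-end service."""
    handler = SERVICES.get((method, path))
    if handler is None:
        return {"error": 404}
    return handler(body)

print(call("GET", "/users/me"))                 # {'id': 'u1', 'name': 'Alice'}
print(call("POST", "/push", {"device": "d7"}))  # {'queued': True, 'to': 'd7'}
```

The front-end developer only ever sees the routes; identity, push delivery and storage stay behind the service boundary, which is the abstraction MBaaS sells.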
Most importantly, it allows companies to realise the bottom-line contribution of securing, connecting and mobilising their workforce without having to sacrifice control, and security, of their data in the process.
Data breaches and cybersecurity threats are some of the biggest roadblocks to modern businesses, and the need for embedded security increases as mobile end points store more applications and critical data. An EMM solution designed from the ground up with security in mind is the most effective means of providing comprehensive risk management, without getting in the way of end users.
The Internet of Things has also emerged as one of the most significant developments of our time, connecting people and ‘things’ on a whole new level. Behind the scenes however, it is transforming product development, manufacturing, supply chain management, logistics, marketing, sales, analytics and of course, the increasingly mobile workforce. This is where the Enterprise of Things steps in.
The Enterprise of Things enables businesses to take full advantage of workforce mobility, and move beyond short-term management solutions to adopt a comprehensive Enterprise Mobility Management (EMM) solution among others. These are designed to specifically meet today’s challenges, and whatever lies around the corner, enabling modern businesses to manage increasingly complex mobility environments, with a mix of mobile end points, operating systems, risk profiles and ownership models, including corporate owned, personally enabled (COPE) and bring your own device (BYOD).
Anywhere, anyplace collaboration and communications capabilities can also be complex and costly, as well as laden with security risks. Sharing, syncing and protecting corporate content from outside the firewall requires an EMM solution with comprehensive and sophisticated MCM capabilities.
A streamlined user experience is also crucial to the future of business mobility, and a modern-day EMM solution automates management tasks for both end users and administrators. Integration with enterprise directories, for example, enables IT to automate the distribution of apps to both groups and individuals.
Industry experts believe that mobile end points will become increasingly attractive targets for cyber criminals. Organisations that fail to safeguard their digital assets face severe consequences, including operational disruptions, financial losses, lawsuits, regulatory fines and irreparable damage to reputation and competitive standing.
The provocative side of IT resilience is the ever-growing set of threats that has more people talking about it today than ever before.
By Lilac Schoenbeck, VP Marketing, iland.
Let’s face it, who isn’t intrigued (and worried, terrified or simply concerned) by headlines about IT failures that have triggered nationwide flight cancellations or data hostage situations or global service outages? That said, even less dramatic outages like ING Bank and Glasgow City Council still hit the headlines here in the UK recently. ING Bank’s regional data centre went offline due to a fire drill gone wrong (reports suggest that more than one million customers were affected by the downtime), and Glasgow City Council lost its email for three days after a fire system blew in the Council’s data centre.
The stark reality is that business threats are only becoming more pervasive – whether through ever-evolving cyber-attacks, human errors made by overworked teams, or ageing hardware and systems that just keep breaking down, the list is constantly growing.
In the face of these rising threats, there have been two recent and very fundamental shifts in the way C-Suite and IT executives think about Business Continuity:
The first one is that IT resilience is no longer doomed to the proverbial backburner and only something that the IT team needs to concern itself with. This is especially true as the “powers that be” realise cloud-based backup and disaster recovery are extremely cost and resource-efficient. Near-zero downtimes are standard with the right technology and support. Implementation time can be measured in hours meaning that the path to securing executive buy-in is well paved and far easier than it might have been in the past.
That said, at a roundtable that we hosted last month in London with senior IT executives to discuss the findings from our latest survey, The State of IT Disaster Recovery Amongst UK Businesses, the group did debate whether business decision makers really understood the financial impact of downtime. Moreover, whether more education is needed about recovery times and what can be recovered, with the group concluding that clearer prioritisation around different systems needs to be implemented so the business understands what will happen when outages take place.
The second fundamental shift is that security is now a top priority for Disaster-Recovery-as-a-Service (DRaaS) and backup. When choosing a DRaaS or Backup-as-a-Service (BaaS) provider, companies are rightfully now asking questions about security first, right along with speed of recovery and all the other questions you might expect to be asked. Here at iland, with our advanced security cloud platform as our answer, we couldn’t be happier that this is now the case.
We are thrilled to be helping more companies ensure IT resilience is faster than ever before. In fact, these shifts in executive and business thinking have contributed to our 248% growth in DRaaS and BaaS in the first half of this year, and we’ve doubled the number of new customers we’re now working with.
To my mind this comes down to a few key factors:
As I said at the outset, IT resilience is becoming much more of a mainstream business issue, and with it now firmly on the agenda for most organisations we are excited to be able to put our customers’ worries to rest. At the same time, we hope they are equally excited about getting some sleep and having their worries taken care of. If you would like more information about our DRaaS and BaaS capabilities, please do get in touch.
By Michael King, Marketing Communications Manager, DDN.

It is, I’d argue, relatively easy to see the big-picture future. The world we live in increasingly revolves around data: connected cars and the Internet of Things, financial firms analysing petabytes of trading data to find the next opportunity, oil and gas companies using topological and geographical data to understand the viability and profitability of new wells. These are just a few of the many data use cases.

I’d argue DDN is in an enviable position, because our business is data. Well, data storage. High performance data storage. In fact, in almost every case, the highest performance data storage, and certainly the most scalable storage arrays. This position allows us to see the highest-capacity workflows and workloads, and to develop the technologies to meet those demands. It also gives us insight into where the enterprise is going and what the storage requirements of today and tomorrow might be.
The future is undoubtedly data driven. In the business world, taking advantage of data to improve the company that owns it, whether that is to increase profitability, improve customer service, or to try to cure illness and disease, is the order of the day.
For me though, there is one technology that is likely to change the enterprise storage game the most in the coming years.
Object storage isn’t new, far from it. We’ve been providing our Web Object Scaler (WOS) technology for a number of years, but enterprises are only now starting to realise the benefits of object storage technology – there is now a place for it in the enterprise.
Depending on which analyst firm you talk to, you will hear storage growth predictions of between 30x and 40x over the next decade. That means we will be storing 30-40 times as much digital data ten years from now, compared to today.
Object storage addresses the challenge of efficiently storing massive volumes of unstructured data. The largest object stores come from the likes of Google, Facebook and Twitter, which deployed object storage to meet the requirements of their fast-growing applications and user bases: billions of users storing trillions of objects in infrastructures designed to scale without limit and perform with the lowest latency.
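As a rough illustration of the model described above, the sketch below shows an object store's flat, key-addressed namespace: every object lives under a single key with its metadata alongside, rather than at a position in a directory tree. This is a minimal in-memory sketch, not any vendor's API; all class and method names here are hypothetical.

```python
import hashlib

class ObjectStore:
    """Minimal in-memory sketch of a flat object store: each object is
    addressed by one key in a single global namespace, with user metadata
    stored alongside the data (hypothetical, illustrative only)."""

    def __init__(self):
        self._objects = {}  # key -> (bytes, metadata dict)

    def put(self, key, data, metadata=None):
        # Store the blob plus metadata and return a content digest,
        # similar in spirit to the ETag many object stores return.
        digest = hashlib.md5(data).hexdigest()
        self._objects[key] = (data, dict(metadata or {}, checksum=digest))
        return digest

    def get(self, key):
        # Retrieve data and metadata together in one call.
        return self._objects[key]

store = ObjectStore()
etag = store.put("videos/2016/keynote.mp4", b"raw bytes...", {"owner": "marketing"})
data, meta = store.get("videos/2016/keynote.mp4")
```

Because keys live in one flat namespace with no directory hierarchy to maintain, capacity and key ranges can be spread across many nodes, which is what lets real object stores scale to trillions of objects.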
And that is one of the key differentiators of object storage compared to block- or file-based storage: it scales out extremely well. Hundreds of terabytes can be added in a matter of minutes, and there are already real-world examples of volumes holding trillions of objects. Further, object storage can be seamlessly combined with file-based storage, so that the user thinks they are using file storage while most of the data is actually stored as objects.
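The file-over-object combination mentioned above can be pictured as a thin gateway: callers use familiar hierarchical paths, while each file body is persisted as a single object whose key is derived from the path. The sketch below is self-contained and hypothetical; the back end is just a dict, standing in for a scale-out object store.

```python
class FileGateway:
    """Sketch of a file-style front end over a flat object namespace:
    the caller works with paths, but each file body is stored as one
    object whose key is derived from the path (hypothetical names)."""

    def __init__(self):
        self._objects = {}  # flat namespace: derived key -> bytes

    def _key(self, path):
        # Map a hierarchical path onto one flat object key.
        return "file/" + path.strip("/")

    def write_file(self, path, data):
        self._objects[self._key(path)] = data

    def read_file(self, path):
        return self._objects[self._key(path)]

gw = FileGateway()
gw.write_file("/projects/seismic/run42.dat", b"survey data")
```

The user-facing interface stays file-like, while the data itself benefits from the scale-out properties of the object layer underneath.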
Today, the enterprise use cases are, for the most part, limited to long-term data storage, geo-distributed collaboration, online content distribution and archiving. But with object storage now a mature solution, and more professionals realising its benefits, there is certainly more in store for object storage.