By Philip Alsop
Maybe not exactly what those waiting at Gatwick the other week, hoping to take to the skies for their holidays, could be heard shouting, but nevertheless a refreshing reminder that, no matter the level of automation and artificial intelligence, good old human error will never be eliminated. Ah, but surely if it had been a gang of robots working away at the airport, they would have been programmed to recognise which cables to cut through (ideally, none!) and which to leave alone?! Maybe, but that presupposes that the person or persons doing the programming would be capable of imagining every single work-related scenario and coming up with the appropriate response for the robots to make.
Okay, so we might end up in a world where robots programme robots, but even then, there will always be fresh, unthought-of situations that are outside the knowledge and experience of both humans and robots. In which case, would you rather trust automation, however intelligent, humans able to think outside the box, or a mixture of the two, working in tandem?
Not sure there’s an easy answer to that question. However, for what it’s worth, my money is on (yet another) hybrid world. A world where robots lead and humans follow/support, sometimes; and where humans lead and robots follow at others. AI, ML, IoT, RPA, VR and more are here to stay, but they just might not eliminate the need for humans any time soon. After all, Gatwick had to rely on humans, plus pens, plus whiteboards, to solve its IT failure!
Lack of digital skill sets is the greatest barrier to transformation.
Infosys has released global research, The New Champions of Digital Disruption: Incumbent Organizations, which reveals that under a quarter of the organizations surveyed understand that commitment to digital is at the heart of true transformation; it is these organizations that are reaping the rewards of digital disruption. According to the research, more than half of all respondents rank a focus on digital skill sets as the most important factor in successful transformation, followed by senior leadership commitment and change management, implying the need for a conducive organizational culture.
Visionaries, Watchers and Explorers
The research identifies three clusters of respondents based on the business objectives behind their digital transformation initiatives.
True transformation begins from the core
While Watchers and Explorers are primarily focusing on emerging technologies like Artificial Intelligence (AI), Blockchain and 3D printing for digital transformation initiatives, Visionaries are not only looking at emerging technologies, but are also focusing strongly on core areas such as mainframe and ERP modernization.
Visionaries believe that true transformation comes from the core and without this in the background, digital technologies will not perform to their potential. The study reflects that their commitment to modernizing from the core will yield benefits, such as improved productivity and efficiencies.
Agility in championing digital disruption
Visionaries watch and explore futuristic trends that currently escape the notice of the other two cohorts. They report greater clarity than Explorers and Watchers on the opportunities and threats of digital disruption, as well as a greater ability to execute on them.
Visionaries look further into the future. They attach a higher rating to the impact of market drivers such as Emerging Technologies (86 percent Visionaries vs 63 percent Explorers, 50 percent Watchers) and Changing Ecosystems (63 percent vs 39 percent, 31 percent) – enabling them to be agile and disruptive.
Lack of digital skill set – greatest barrier
When ranking barriers on the path to digitization, building digital skill sets was found to be the most prevalent challenge for organizations (54 percent), highlighting the shortage of digital skills available.
Transforming from a low-risk organization into one that rewards experimentation (43 percent) and a lack of change management (43 percent) were the second and third greatest barriers, showcasing the turbulence and resistance to change associated with digital transformation.
The importance of establishing an ecosystem
Building in-house capabilities was on the list of 76 percent of Visionaries, who were keen to acquire digital-native firms in order to quickly gain the digital skills that 71 percent of Visionaries believed were lacking in-house, showcasing a growing trend towards acquisitions and the development of a sustainable ecosystem. By comparison, the proportion of Explorers and Watchers looking at acquisition and ecosystem options was negligible.
Pravin Rao, Chief Operating Officer, Infosys, said, “Navigating the digital disruption requires companies to drive a holistic approach to transformation and foster a digital culture that brings together leadership commitment and a renewed approach to skill building. Infosys, with its long-standing partnerships with global corporations, is focused on accelerating their digital transformation journey from their core systems while building new capability to drive competitive advantage.”
Overcoming barriers to digital transformation
Enterprises are relying on their transformation partners to help them scale these barriers. Preparing the workforce for digital transformation and developing strong capability in managing large-scale organizational change have emerged as the top strategies for overcoming them. This is especially critical for Visionaries, who are aiming to transform business culture.
Jul 16th, 2018
A new survey into digital effectiveness has revealed that ‘Product Thinking’ is helping organisations become more digitally effective.
The survey, carried out by digital agency Code Computerlove, a forerunner of product thinking as an approach, found some clear differences in the attitudes, behaviours and methodologies of top-performing businesses versus the mainstream when it comes to their digital products.
Key findings included:
Top-performing businesses are more likely to be looking to ‘increase customer lifetime value’ than their mainstream counterparts (68% vs. 47%). For successful businesses, the numbers reporting a ‘focus on outcomes in delivery’ (75%) and a ‘focus on long-term goals versus short-term targets’ (59%) were almost double those of underperforming businesses reporting the same behaviour. 82% of high performers agree strongly with the statement “Focusing on user needs leads to better business outcomes”.
The research also found that 60% of top performers are satisfied with their ability to deliver digital products on time and on budget. Conversely, only 19% of the mainstream felt the same.
50% of top performing companies (vs. 38% mainstream) are currently aiming to speed up their performance and 69% adopt agile processes in development.
Commenting on the survey, Tony Foggett, CEO of Code Computerlove, said: “Product Thinking has emerged as a way in which organisations are navigating the need to deliver increased consumer centricity and agility using larger, more complex and business-critical technology platforms.
“It’s a way of achieving digital effectiveness as part of a digital transformation process, but it’s not just a question of adopting new digital tools and platforms, it’s the ways of thinking and organising teams around these tools and across an organisation as a whole.
“Taken together the aims and approaches of the top performing businesses that completed the survey reflect ‘product thinking’ - a mindset based on continually building value for the customer through an agile, non-siloed, digitally-savvy culture with clearly defined business goals.”
In its simplest form, Product Thinking is a drive for business effectiveness delivered by maximising value from digital touchpoints. It enables organisations to prioritise their efforts more effectively and immediately correlate investment with commercial return.
Foggett added: “It was interesting to see from the survey that top performing businesses had a stronger focus on value. 68 per cent can attribute return on investment to specific digital projects; three-quarters focus on outcomes in development and are happy to change requirements if needed (during the development process) and over half focus more on long-term goals rather than just short-term targets.”
Foggett concluded: “We’re undoubtedly in a period of change and the pace of this change means companies cannot afford to develop products and projects in the traditional ways. The structures and ways of thinking that dominated businesses in the late 20th century are not suited to the demands of the 21st.
“Our report has shown that businesses that have moved on from ‘big bang’ releases, and are taking a more long-term, agile and customer-value approach akin to Product Thinking, believe that this approach is paying dividends.”
Nearly 90 percent of organizations are investing in AI, but very few are succeeding.
A survey commissioned by Databricks, the leader in unified analytics founded by the original creators of Apache Spark™, has revealed that only one in three AI projects are succeeding and, perhaps more importantly, that it is taking businesses more than six months to go from concept to production. Organizations are hindered at multiple stages of the process when bringing AI projects to production. According to 95 percent of European respondents, collaboration between data engineering and data science teams is a challenging issue. Nearly all respondents cite data-related challenges when moving AI projects to production. IT executives point to unified analytics as a solution to these challenges, with 90 percent of respondents saying that the approach of unifying data science and data engineering across the machine learning lifecycle will conquer the AI dilemma.
The research, commissioned by Databricks through CIO/IDG Research Services, shows that 93 percent of organizations surveyed are investing in technology to help with data prep and data exploration/modeling, including data processing, data streaming, machine learning and/or deep learning tools. As a result, organizations are using an average of seven different tools for data prep and modeling. Based on these results, it is not a surprise that European organizations cited technology as their most common obstacle when moving AI projects to production.
Additional results of the survey speak to the complexity and organizational confusion being created as companies pursue AI initiatives. The average proportion of AI projects considered completely successful within enterprises is around 39 percent according to those surveyed in Europe, compared to an average of 35 percent in the US. 98 percent of those surveyed believe that preparing and aggregating large datasets in a timely fashion is a major challenge; 96 percent of respondents found data exploration and iterative model training challenging; and 90 percent cited deploying models to production quickly and reliably as a significant challenge.
David Wyatt, Vice President and General Manager EMEA, Databricks, commented on the survey’s results: “Getting AI right is challenging, and one of the biggest hurdles to success is how teams collaborate around data. The research shows how difficult and time consuming it can be to turn raw data into valuable insights for the business. With unified analytics, enterprises can bring together their people, processes and technology to deliver results faster – not only does this make projects more efficient, it increases the chances that these projects can succeed in meeting their objectives over time.”
So, what will help these organizations conquer the AI dilemma? The surveyed executives said they need end-to-end solutions that combine data processing with machine learning capabilities. These streamlined solutions would simplify workflows, improve efficiency and ultimately accelerate business value.
In fact, nearly 80 percent of executives surveyed said they highly valued the notion of a unified analytics platform. Unified analytics makes AI more achievable for enterprise organizations by unifying data processing and AI technologies. Unified analytics solutions provide collaboration capabilities for data scientists and data engineers to work effectively across the entire AI development-to-production lifecycle. With more than 90 percent of large companies facing data-related challenges and increasing complexity driven by an explosion of machine learning tools, the need for platforms and processes that can remove technology and organizational silos is more pronounced than ever. Unified analytics provides an ideal approach for companies facing modern AI implementation barriers.
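The "unified" idea the executives describe can be illustrated with a toy pipeline in which data engineering and data science steps operate on one shared artifact. This is a hypothetical sketch of the concept only, not the Databricks API or any vendor's interface; all names here are made up for illustration.

```python
from statistics import mean

# Toy "unified analytics" pipeline: data prep and model training are
# chained steps over the same object, so engineers and scientists work
# against one shared workflow rather than separate tool stacks.
class Pipeline:
    def __init__(self):
        self.steps = []

    def step(self, fn):
        self.steps.append(fn)
        return self  # allow chaining

    def run(self, data):
        for fn in self.steps:
            data = fn(data)
        return data

def clean(rows):
    # Data engineering: drop malformed (None) records.
    return [r for r in rows if r is not None]

def featurize(rows):
    # Shared preparation: coerce everything to floats.
    return [float(r) for r in rows]

def train(xs):
    # Data science: fit a trivial "model" (here, just the mean).
    return {"prediction": mean(xs)}

model = Pipeline().step(clean).step(featurize).step(train).run([1, None, "3", 5])
print(model)  # {'prediction': 3.0}
```

The point of the sketch is organisational rather than technical: both teams extend the same `Pipeline` object, so there is a single lifecycle from raw data to model instead of a hand-off between seven different tools.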
Databricks accelerates innovation by unifying data science, engineering, and business. Through a fully managed, cloud-based service built by the original creators of Apache Spark, the Databricks Unified Analytics Platform lowers the barrier for enterprises to innovate with AI and accelerates their innovation.
Databricks commissioned IDG Research to conduct analysis of the AI, machine learning and deep learning landscape in large enterprises. The survey looked to sample the experiences of those working in specific senior data engineering and data science roles at companies with more than 1,000 employees. The audience of 200 people was split equally between the United States and Western Europe.
SnapLogic has released “The 2018 Data Value Report,” a new study that reveals enterprises expect to generate a 547% return on their data investments, increasing revenue by an average $5.2 million as a result of using data more effectively. However, businesses have only scratched the surface in realizing data’s potential: On average, organizations are using only half (51%) the data they collect or generate, and data drives less than half (48%) of decisions.
The new study polled 500 enterprise IT decision-makers (ITDMs) to examine their data priorities, investment plans, and what’s holding them back from maximizing value. Conducted by independent research firm Vanson Bourne, the survey reveals that enterprises plan to invest an average $1.7 million in operationalizing data over the next five years – more than double what they are spending today. Yet enterprises are far from achieving their data-driven ambitions. Despite the clear business case for data investments, enterprises struggle to reap the rewards, held back by manual work, outdated technology, and lack of trust in data quality.
The new report provides a benchmark as enterprises develop strategies for bringing data-driven decision-making to all parts of their organizations. Key findings include:
“There’s a saying that every business must be a software business, but what they should really focus on is becoming a data company,” said Gaurav Dhillon, CEO at SnapLogic. “Businesses understand that dedicating time, money, and talent to data will lead to long-term revenue gains, yet in reality most enterprises are still far from generating significant value and ROI. Legacy systems, tedious manual labor, and the sheer volume of information are preventing organizations from maximizing their data-driven potential. The enterprises that act now to spread data literacy throughout their business will be the ones to thrive.”
A recent article by Steve Gillaspy of Intel outlined many of the challenges faced by those responsible for designing, operating, and sustaining the IT and physical support infrastructure found in today's data centers. This paper targets four of the five macro trends discussed by Gillaspy, how they influence the decision making processes of data center managers, and the role that power infrastructure plays in mitigating the effects of the following trends.
Jul 18th, 2018
Despite 95 percent of CIOs expecting cyberthreats to increase over the next three years, only 65 percent of their organizations currently have a cybersecurity expert, according to a survey from Gartner, Inc. The survey also reveals that skills challenges continue to plague organizations that undergo digitalization, with digital security staffing shortages considered a top inhibitor to innovation.
Gartner's 2018 CIO Agenda Survey gathered data from 3,160 CIO respondents in 98 countries and across major industries, representing approximately $13 trillion in revenue/public sector budgets and $277 billion in IT spending.
The survey indicates that cybersecurity remains a source of deep concern for organizations. Many cybercriminals not only operate in ways that organizations struggle to anticipate, but also demonstrate a readiness to adapt to changing environments, according to Rob McMillan, research director at Gartner.
"In a twisted way, many cybercriminals are digital pioneers, finding ways to leverage big data and web-scale techniques to stage attacks and steal data," said Mr. McMillan. "CIOs can't protect their organizations from everything, so they need to create a sustainable set of controls that balances their need to protect their business with their need to run it."
Thirty-five percent of survey respondents indicate that their organization has already invested in and deployed some aspect of digital security, while an additional 36 percent are actively experimenting or planning to implement in the short term. Gartner predicts that 60 percent of security budgets will be in support of detection and response capabilities by 2020.
"Taking a risk-based approach is imperative to set a target level of cybersecurity readiness," Mr. McMillan said. "Raising budgets alone doesn't create an improved risk posture. Security investments must be prioritized by business outcomes to ensure the right amount is spent on the right things."
Business growth introduces new attack vectors
According to the survey, many CIOs consider growth and market share as the top-ranked business priority for 2018. Growth often means more diverse supplier networks; different ways of working, funding models and patterns of technology investing; as well as different products, services and channels to support.
"The bad news is that cybersecurity threats will affect more enterprises in more diverse ways that are difficult to anticipate," Mr. McMillan said. "While the expectation of a more dangerous environment is hardly news to the informed CIO, these growth factors will introduce new attack vectors and new risks that they're not accustomed to addressing."
Continue to build bench strength
The survey revealed that 93 percent of CIOs at top-performing organizations say that digital business has enabled them to lead IT organizations that are adaptable and open to change. To the benefit of many security practices, this cultural openness broadens the organization's attitude toward new recruitment and training avenues.
"Cybersecurity is faced with a well-documented skills shortage, which is considered a top inhibitor to innovation," Mr. McMillan said. "Finding talented, driven people to handle the organization's cybersecurity responsibilities is an endless function."
According to Gartner, while most organizations have a role dedicated to cybersecurity expertise, and therefore appreciate its needs, the cybersecurity skills shortage continues. Gartner recommends that chief information security officers (CISOs) continue to build bench strength through innovative approaches to developing the security team's capabilities.
Jul 24th, 2018
Global report finds 73% of consumers would move to a competitor if a website is slow to load.
Over 80% of consumers are more frustrated by consistently slow websites than those that are temporarily down. That’s according to a global online study from Eggplant, the customer experience optimization specialists.
Eggplant polled a combined total of 3,200 UK and US adults on attitudes to website speed and performance, and found that a business with a slow or underperforming website is likely to lose 73% of its customers to a competitor.
While outages are a problem for businesses around the world, the survey reveals that a slow website is much more damaging than one that is temporarily down. To stay competitive and retain customers, retailers must focus on website speed alongside website availability.
UK findings
81% of Brits find a slow website more frustrating to use than one that is down or not working, and 7 in 10 (70%) of UK adults rate website speed as important when it comes to online activity. A third (33%) said website speed was very important, while only 17% said speed was not important at all.
The online research also found that three quarters (75%) of Brits would be likely to use a competitor website if the one they were using was slow. This is especially important for brands in markets commoditized almost entirely on price, such as ticket, hotel and travel sites.
When it comes to UK consumers, site speed is so important that almost 3 in 5 (60%) feel much more negative to a brand if its site is consistently slow to load. This is in contrast to less than a quarter (23%) who feel the same way if a site is down or not working. However, nearly half (49%) of consumers feel slightly negative towards a brand if its website is not working.
US Findings
Across the board, US consumers share the same sentiment on website speed vs. downtime. Only slightly down on the UK, 79% of Americans find a slow-running website more frustrating to use than one that is down or not working. In fact, 41% of American adults rate website speed as very important when it comes to online activity. As in the UK, only a tiny proportion (1%) said that website speed was not important at all.
Americans would be fractionally less likely than those in the UK to move to a competitor if a site was slow, with 69% stating they would move (compared to 75% in the UK). Alongside this, 24% of US consumers stated they would eat less than half a donut before giving up on a website and moving to another.
When it comes to American consumers, site speed is so important that well over half (59%) feel much more negative to a brand if its site is consistently slow to load. This is in contrast to less than a quarter (23%) who feel the same way if a site was temporarily down or not working.
Responsive, fast websites are a crucial part of business success worldwide, or organizations risk losing customers to competitors. When it comes to preparing for the peak retail period, website speed is even more important than availability, and businesses need to be ready.
According to IDC's new Worldwide Semiannual Blockchain Spending Guide, Europe will be the second-largest investor in blockchain technologies. With a compound annual growth rate (CAGR) of 80.2% for 2017–2022, Europe will increase its spending from around $400 million in 2018 to $3.5 billion in 2022, helping it to close the gap with the U.S., the biggest blockchain investor.
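Figures like these can be sanity-checked with the standard compound annual growth rate formula. The snippet below uses the article's 2018 and 2022 spending figures; note that the quoted 80.2% CAGR covers 2017–2022 from a smaller 2017 base, so the 2018-based rate computed here comes out lower.

```python
# CAGR = (end / start) ** (1 / years) - 1
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` annual periods."""
    return (end_value / start_value) ** (1 / years) - 1

# Article figures: ~$400M in 2018 growing to $3.5B in 2022 (4 years).
growth = cagr(400e6, 3.5e9, 4)
print(f"2018-2022 CAGR: {growth:.1%}")  # roughly 72%
```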
2017 was a significant year for blockchain in Europe, with companies asking themselves how blockchain solutions can help simplify, improve, and secure their businesses. A recent IDC survey across Europe, however, revealed there is still some way to go in terms of understanding blockchain applicability and usefulness, especially among smaller European companies.
"The European market is less flexible than other regions, and is also more fragmented in terms of business size," said Carla La Croce, senior research analyst, Customer Insights and Analysis, IDC. "Nevertheless, as IDC has already highlighted, 2018 is still the year of blockchain, and European companies are showing increasing interest, supported by growing investments. Companies recognize the importance of the technology and are starting to explore how it can be deployed in their business, going beyond pilots and identifying the best use cases."
According to Mohamed Hefny, systems and infrastructure solutions program manager at IDC CEMA, "Blockchain offers a huge opportunity for start-ups and in the emerging markets of the region where government support and advanced skills offer a fertile ground for things to really happen. The technology is about rapid progress and agility — and the tech giants' size and legacy are not an advantage here."
The largest and fastest-growing industry for blockchain is the financial sector, with projected spending of $173 million this year (accounting for 42% of the total). Insurance and banking are also expected to grow above the average. Other fast-growing markets are supply-chain-related segments such as manufacturing and retail, at 82.7% and 82.5% CAGR respectively. Though the biggest industries are traditionally more inclined to invest in blockchain, sectors such as utilities, professional services, and government are also expected to see strong growth. These sectors will use blockchain for transactions or to track goods and assets, with supply chain quality and provenance control among the key uses of blockchain across all regions. By 2022, IDC believes the top use cases will be trade finance and post-trade/transaction settlements, identity management, regulatory compliance, cross-border payments and settlements, and asset/goods management.
Growth will be driven by IT services, with the highest share devoted to project services and IT consulting. Services will account for more than two-thirds of growth in 2022, slightly increasing over time at the expense of software and hardware, with the latter representing only a very small share of the total. Software technologies will account for slightly less than a third in 2018, and this will decrease to a quarter in 2022. Software spending growth will be driven by security software.
IDC's Worldwide Semiannual Blockchain Spending Guide quantifies the emerging blockchain market by providing spending data for 10 technologies across 19 industries and 16 use cases in nine geographic regions. IDC defines blockchain as a digital, distributed ledger of transactions or records. The ledger, which stores the information or data, exists across multiple participants in a peer-to-peer network. There is no single, central repository that stores the ledger. Distributed ledger technology (DLT) allows new transactions to be added to an existing chain of transactions using a secure, digital or cryptographic signature. Spending associated with various cryptocurrencies that utilize blockchain and distributed ledgers technology, such as Bitcoin, is not included in the spending guide. Unlike any other research in the industry, the comprehensive spending guide was designed to help IT decision makers to clearly understand the industry-specific scope and direction of blockchain spending today and over the next five years.
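The ledger structure IDC describes, records chained together by cryptographic signatures, can be sketched in a few lines. This is a minimal illustration of hash-chaining only, with made-up field names; real blockchain implementations add consensus, peer-to-peer replication and digital signatures on top.

```python
import hashlib
import json

# Each block stores its payload plus the hash of the previous block, so
# tampering with any entry invalidates every later link in the chain.
def make_block(prev_hash: str, payload: dict) -> dict:
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "payload": block["payload"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False  # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # chain linkage is broken
    return True

genesis = make_block("0" * 64, {"tx": "genesis"})
chain = [genesis, make_block(genesis["hash"], {"tx": "pay 10 units"})]
print(verify(chain))  # True
chain[0]["payload"]["tx"] = "tampered"
print(verify(chain))  # False
```

Because every participant can recompute the hashes, no single central repository is needed to establish that the record history is intact, which is the property the use cases above (provenance, trade finance, identity) rely on.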
As organizations continue to embrace digital transformation, they are finding that digital business is not as simple as buying the latest technology — it requires significant changes to culture and systems. A recent Gartner, Inc. survey found that only a small number of organizations have been able to successfully scale their digital initiatives beyond the experimentation and piloting stages.
"The reality is that digital business demands different skills, working practices, organizational models and even cultures," said Marcus Blosch, research vice president at Gartner. "To change an organization designed for a structured, ordered, process-oriented world to one designed for ecosystems, adaptation, learning and experimentation is hard. Some organizations will navigate that change, and others that can't change will become outdated and be replaced."
Gartner has identified six barriers that CIOs must overcome to transform their organization into a digital business.
Barrier No. 1: A Change-Resisting Culture
Digital innovation can be successful only in a culture of collaboration. People have to be able to work across boundaries and explore new ideas. In reality, most organizations are stuck in a culture of change-resistant silos and hierarchies.
"Culture is organizational 'dark matter' — you can't see it, but its effects are obvious," said Mr. Blosch. "The challenge is that many organizations have developed a culture of hierarchy and clear boundaries between areas of responsibilities. Digital innovation requires the opposite: collaborative cross-functional and self-directed teams that are not afraid of uncertain outcomes."
CIOs aiming to establish a digital culture should start small: Define a digital mindset, assemble a digital innovation team, and shield it from the rest of the organization to let the new culture develop. Connections between the digital innovation and core teams can then be used to scale new ideas and spread the culture.
Barrier No. 2: Limited Sharing and Collaboration
The lack of willingness to share and collaborate is a challenge not only at the ecosystem level but also inside the organization. Issues of ownership and control of processes, information and systems make people reluctant to share their knowledge. Digital innovation with its collaborative cross-functional teams is often very different from what employees are used to with regards to functions and hierarchies — resistance is inevitable.
"It's not necessary to have everyone on board in the early stages. Try to find areas where interests overlap, and create a starting point. Build a first version, test the idea and use the success story to gain the momentum needed for the next step," said Mr. Blosch.
Barrier No. 3: The Business Isn't Ready
Many business leaders are caught up in the hype around digital business. But when the CIO or CDO wants to start the transformation process, it turns out that the business doesn't have the skills or resources needed.
"CIOs should address the digital readiness of the organization to get an understanding of both business and IT readiness," Mr. Blosch advised. "Then, focus on the early adopters with the willingness and openness to change and leverage digital. But keep in mind that digital may just not be relevant to certain parts of the organization."
Barrier No. 4: The Talent Gap
Most organizations follow a traditional pattern — organized into functions such as IT, sales and supply chain and largely focused on operations. Change can be slow in this kind of environment.
Digital innovation requires an organization to adopt a different approach. People, processes and technology blend to create new business models and services. Employees need new skills focused on innovation, change and creativity along with the new technologies themselves, such as artificial intelligence (AI) and the Internet of Things (IoT).
"There are two approaches to bridge the talent gap: upskill and bimodal," said Mr. Blosch. "In smaller or more innovative organizations, it is possible to redefine individuals' roles to include more of the skills and competencies needed to support digital. In other organizations, a bimodal approach makes sense, creating a separate group to handle innovation with the requisite skill set."
Barrier No. 5: The Current Practices Don't Support the Talent
Having the right talent is essential, and having the right practices lets that talent work effectively. Highly structured, slow, traditional processes don't work for digital. There are no tried and tested models to implement; every organization has to find the practices that suit it best.
"Some organizations may shift to a product management-based approach for digital innovations because it allows for multiple iterations. Operational innovations can follow the usual approaches until the digital team is skilled and experienced enough to extend its reach and share the learned practices with the organization," Mr. Blosch explained.
Barrier No. 6: Change Isn't Easy
It's often technically challenging and expensive to make digital work. Developing platforms, changing the organizational structure, creating an ecosystem of partners — all of this costs time, resources and money.
Over the long term, enterprises should build the organizational capabilities that make change simpler and faster. To do that, they should develop a platform-based strategy that supports continuous change and design principles and then innovate on top of that platform, allowing new services to draw from the platform and its core services.
Gartner clients can find more information in the research note "Six Barriers to Becoming a Digital Business, and What You Can Do About Them." More information on digital business can be found in the Gartner Special Report “The Resilience Premium of Digital Business.” This collection of research focuses on how committing to resilience will equip a digital business with the mindset, resources and planning to recover from inevitable disruptions.
How do you know your digital transformation project is - or is not - going to plan? asks Nick Keen, Project Manager at Cloud Technology Solutions.
According to a number of surveys, 9 out of 10 digital transformation projects fail in some way, with over half being quite serious failures. At Cloud Technology Solutions, we know our projects are going to plan when we have understood and observed the following:
Purpose and Customer Needs
It’s crucial to identify what it is the customer wants to achieve through digital transformation and why, what’s important to them and what their success criteria are. Understanding this and knowing that your product/service is a good fit for the business already puts the project in a good starting position for success.
Business Case and Statements of Work
Both the Business Case and Statement of Work are essential for securing buy-in from senior management and all those involved with the project. Both the business case and statement of work need to be clear and concise and must be in-line with business strategy and priorities. Clearly defining what it is we are doing, and what we are not doing, ensures a robust project kick off with the best chance of everything going to plan.
Success and Leadership
If the reason for the project is well founded, it is simpler to define the success criteria, which will be closely linked to the purpose. It’s also important to look at success throughout the implementation itself, such as user and business readiness and satisfaction, engagement and training. We know a project is going to plan if this success is measured throughout the project lifecycle, maintaining leadership, support and accountability.
On-time, On-budget, On-schedule
If we understand the business purpose and needs, we have a strong business case, a detailed set of objectives and good leadership, we know the project will start off well. The project then needs to run to schedule, remain on budget and deliver within the agreed timescales.
How do you spot the warning signs early enough to avoid disaster?
Communication
Some of our projects are huge, with some taking nearly a year from start to finish. To that end, it is vital that we identify issues early to avoid failure in our delivery.
Lack of communication is one of the biggest problems in most projects and it is important to ensure that communication with all stakeholders is clear, regular and accurate. If members of the project aren’t talking, there’s an increased chance of miscommunication or mismatch of expectations.
Project Interest
The projects we deliver, such as Google G Suite implementation, represent big cultural changes for any business. There therefore needs to be a high level of interest from all stakeholders to ensure the change management element is driven forward and is successful. If there’s a lack of interest and management support at the start of the project this will likely feed through to all staff and undermine “buy in”.
Velocity
There needs to be a number of deliverables at frequent intervals, especially on the bigger projects that last six months or longer. This not only makes tracking the project easier but drives a feeling of success too. This approach goes hand in hand with our resource planning and timeline where each week we will have deliverables against the bigger plan. A classic warning sign of a project in trouble is when things just aren’t moving!
Poorly defined objectives
All members of the project team must understand why they are doing what they’re doing and what they are hoping to achieve. If this isn’t clear from the beginning, the project will be at risk. Objectives should always be clear, concise and detailed to avoid failure or mismatch of expectations.
How do you save the project - and maybe even your job?
This depends on the problem you have to solve...
Scope Creep
Sometimes projects grow beyond their original goals and need to be constantly re-assessed. If this happens, it’s important first to figure out what you need to do to bring the project back into scope and, if you can’t, to ensure a change control is in place and that everyone is on board. Look at where previous similar projects failed and at the lessons learnt.
At CTS, our approach is to collaborate with the customer. Scope change is not inevitable, but it happens regularly enough that both the supplier and customer will need to adapt to it once it is recognised, understanding the impact that it has on the project.
Budget
This is something that needs to be regularly monitored. Budgets generally only change due to changes in scope or the original assumptions. If budgets change there may have been scope change and that needs to be agreed with the customer to ensure there is mutual understanding of the impact that this has on the project. The earlier this is detected and raised with stakeholders, and even your manager, the better.
Behind Schedule
Assess why the project is behind. Maybe project members are off sick or on unplanned leave, or a complicated task took longer than expected to complete. It’s important to devise a plan that doesn’t rely on throwing additional resource at the problem. Prioritise the needs and scale back on objectives. Working harder and faster also introduces mistakes, potentially putting the project further behind. This can be avoided by working out how to be flexible when working towards key milestones. Good project management software is useful for alerts and scheduling, and for helping you keep the project on track.
Communication (again!)
Changes in scope, timescales, budget and outcomes must be understood by all participants in the project. Keeping clear, open, non-adversarial communications flowing has to be the primary focus of the project manager. In our experience, our customers can tolerate changes to project timelines, budgets and outcomes if the causes are reasonable. What they cannot tolerate is poor communication.
Are the risks in IT projects the same as they always were (or even worse)?
Our projects have changed. Technology is no longer the “outcome” of a project, and the technology itself rarely fails. The outcomes now are focused on greater efficiencies and savings, which can only be achieved by staff in our customer organisations, who need to change the way in which they interact with technology.
For instance, in two recent projects, we implemented the exact same technology in two different organisations. One organisation has reported a 50 per cent saving in staff travel costs as a result of adopting the collaboration tools. The other organisation has made negligible savings as the organisational change was not supported internally to the same extent.
The risks to our projects now are greater than ever before as most of our success criteria are based on wholesale organisational change, not simply the delivery of some tech.
With the growth of new technologies - AI, robotics, machine learning - will successful IT implementation become even harder to achieve in the future?
Emerging tech like AI, robotics and machine learning is advancing very quickly, and I don’t think much attention has been paid to its impact on the digital implementation projects that we see today.
The idea behind these new technologies is that they can improve the speed, quality and cost of a project or process, and maybe even replace human resource in certain areas in the future. That represents a significant challenge to staff, who could see such tech development as a threat. Success in these types of projects requires the impacted staff to recognise and embrace the changes. The power of AI and machine learning can only be harnessed if staff recognise that their role is changing and start to use the data and output from the technology to inform organisational and strategic changes.
The Internet of Things (IoT) is having a huge impact, with experts anticipating a significant increase in adoption of the technology, particularly across the enterprise. Growth Enabler predicts that the global IoT market is set to grow to $457.29 billion by 2020, driven in part by the acceptance, adoption and business applicability of IoT across numerous sectors. By Jason Kay, Chief Commercial Officer, IMS Evolve.
According to research from Vodafone, the number of IoT adopters across all industries has already more than doubled since 2013, equating to 29 per cent of organisations globally and demonstrating how compelling the case for IoT is to businesses. However, the results from Cisco’s IoT deployment survey show how decision makers must place greater emphasis on business objectives when implementing an IoT project; 60 per cent of IoT initiatives stall at the Proof of Concept (PoC) stage and only 26 per cent of organisations have had an IoT project that they believed to be a complete success.
The conclusions of the Cisco survey may appear stark for some. IoT technology was developed with the aim of improving business efficiency and increasing value for adopters; however, according to the research, its full potential is, so far, not being truly realised. If businesses do not achieve full value from their IoT deployments, the growth rate of IoT could be at risk, and businesses could become hesitant to adopt it.
In reality, IoT has the potential to truly revolutionise technology and provide businesses with invaluable benefits, including increased efficiency and greater insight. However, it is vital that organisations and the technology vendors they collaborate with, take a considered approach to guarantee the future success of their project – and this requires a particular mindset.
In many cases, the goal of an IoT implementation is dictated by the technology. But just because the technology can provide a solution for a business, it doesn’t necessarily mean it’s the right one. As such, a shift in mindset is required across the IoT industry to prioritise the essential question when considering an IoT project: ‘why?’
Rather than putting the technology first and asking, ‘what technology is available and what problems can it solve?’, stakeholders should prioritise the business issues and ask, ‘what problems do we have, and how can technology help?’. Organisations constantly face challenges on multiple fronts, and collaboration across multiple teams and functions within the business is required to determine which solution, implemented in which area, will have the greatest effect on the core purpose and provide the biggest reward.
Consider rip-and-replace IoT solutions as an example. There is an opportunity for industries such as food retail to achieve substantial benefits and efficiencies from implementing IoT technology. However, a rip-and-replace approach to extracting data from their estates is not feasible, as the industry operates in a fast-moving environment with low-margin consumer goods and a high cost of infrastructure.
Suspending operations or shutting stores for a re-fit risks impeding customer experience, loyalty and brand reputation, and diminishing the potential business value of the solution. As the Cisco research revealed, the value from IoT must be generated quickly, with as little disruption to the business as possible, or the project will quickly stall. By deploying an IoT layer across the existing infrastructure, organisations can unlock the data inherent within it and achieve ROI and tangible value within a matter of months, even weeks. Rapidly releasing value from IoT is essential in adopting an outcomes-led strategy and accelerating the shift to digitisation with low capital investment and high return on investment.
With full business-wide collaboration and an outcomes-led approach, businesses can partner with vendors to rapidly deploy sustainable, scalable and – above all else – valuable IoT projects. It is only by deploying established and repeatable solutions that deliver results far beyond proof of concept that IoT will be able to fulfil its true potential.
Managed services providers (MSPs) must see the wider picture in order to provide the strategic vision that resonates with customers. That strategic partner status is a vital part of the relationship but needs nurturing as part of that all-encompassing vision.
It is clear from research by Gartner and others that MSPs globally face a multi-faceted struggle to grow their businesses. On the one hand, there is competition from public cloud players such as AWS, Google and Microsoft, who set pricing levels and have reach, but are unable to customise or tailor their offerings to specific verticals. On the other, the bulk of MSPs are much smaller and more specialist in the technologies and markets they cover, but need scale to build their profitability and have limited resources.
There is a clear move to consolidate among larger players, with Gartner saying that, compared to 2017, the entry criteria have become much harder and more stringent. The focus has shifted squarely to hyperscale infrastructure providers, it says, and this has resulted in it dropping more than 14 vendors from its top players list. According to Gartner, there are no visionaries or challengers left in the market; only a handful of leaders and niche players driving the momentum.
At the other end of the market, among smaller players, the pace of competition has stepped up and they are feeling a major pressure to differentiate, either on skills, markets covered, geographical coverage or in customer relations.
This is a common feature of the MSP market on a global scale. The answer is always to build and then demonstrate expertise and understanding in the marketplace. As Gartner's research director Mark Paine told April’s European Managed Services Summit: “The key to a successful and differentiated business is to give customers what they want by helping them (the customer) buy”.
One way to win more customers is by showing them their place in the future, according to Jim Bowes, CEO and founder of digital agency Manifesto, and Robert Belgrave, chief executive of digital agency hosting specialist Wirehive. The two experts will be covering the marketing aspects at the UK Managed Services & Hosting Summit in London on September 29th. Agenda here: http://www.mshsummit.com/agenda.php
They draw on a wider experience, arguing that the managed services and hosting industry isn’t the only one having to undergo rapid adjustments due to technological advances and changing customer expectations. Customers are experiencing the same disorientation, and need help figuring out how their IT infrastructure needs to evolve over the next five to ten years. Which means it’s time to ditch the old marketing models built on email lists and dry whitepapers. It’s time to get agile, personalised and creative, they will say.
A key part of the event will also be hearing from the experiences of MSPs themselves and looking at established winning ideas. MSPs already confirmed will relate their stories on business-building including how they use managed security services, how MSPs can position security without terrifying the customer, and how they work with customers to keep their lights on.
Now in its eighth year, the UK Managed Services & Hosting Summit will bring together leading hardware and software vendors, hosting providers, telecommunications companies, mobile operators and web services providers involved in managed services and hosting with Managed Service Providers (MSPs), resellers, integrators and service providers migrating to, or developing, their own managed services portfolios and sales of hosted solutions.
It is a management-level event designed to help channel organisations identify opportunities arising from the increasing demand for managed and hosted services and to develop and strengthen partnerships aimed at supporting sales. Building on the success of previous managed services and hosting events, the summit will feature a high-level conference programme exploring the impact of new business models and the changing role of information technology within modern businesses.
You can find further information at: www.mshsummit.com
Artificial Intelligence has lately become one of the most fashionable of technical buzzwords. Historically, though, it has often been used by researchers in the field as an aspirational term to reference technology that doesn’t quite work yet. Behind the cynicism, though, is some technology that really does work and really can do useful tasks. That technology is machine learning, which can be viewed as just a new way to program computers to perform tasks that we don’t quite understand (yet).
By Ted Dunning, Chief Application Architect, MapR.
One interesting way to look at machine learning is to view it as writing a program based on data rather than explicit coding. This is revolutionary compared to standard programming and can be incredibly powerful when applied to problems where we have lots of the right kind of data and where we don’t entirely understand how to solve the problem explicitly. Through the process of machine learning, we can mathematically solve for a program that does something we can’t code.
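As a toy illustration of this idea (invented for this piece, not drawn from any real deployment), the sketch below never writes the rule y = 3x + 2 anywhere; it recovers that rule purely from example data, by gradient descent on a mean squared error:

```python
# Toy illustration of "programming with data": we never code the rule
# y = 3*x + 2 explicitly -- we learn it from (x, y) examples.

def fit_line(points, steps=5000, lr=0.01):
    """Learn a slope and intercept from (x, y) examples by gradient descent."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        gw = sum(2 * (w * x + b - y) * x for x, y in points) / n
        gb = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Training data" generated by a process the learner pretends not to know.
examples = [(x, 3 * x + 2) for x in range(-5, 6)]
w, b = fit_line(examples)
print(round(w, 2), round(b, 2))  # recovers values close to 3.0 and 2.0
```

The fitted program (here just a pair of numbers) was solved for mathematically, exactly as described above, rather than written by hand.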
Let’s take a real-world example of the power of data. One of our customers, a financial services provider, was hit by a major denial of service attack; fortunately, the security team had been archiving the headers of all web requests for some time before the attack. Having large volumes of detailed data allowed that security team to find a systematic difference between the data flooding in from the attackers and the data coming from ordinary users. That difference could never have been spotted without the archived data. This is an example where having lots of data turned what would have been an intractable problem into one where a solution could be inferred from the data.
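A hypothetical sketch of the kind of analysis this enables (the header name, sample values and the 10x threshold are illustrative assumptions, not details from the actual incident): compare how often each header value appears during the attack window against its share in the archived baseline, and flag values that are wildly over-represented:

```python
from collections import Counter

def header_skew(baseline, attack_window, header="User-Agent"):
    """Flag header values far more common under attack than in the baseline.

    `baseline` and `attack_window` are lists of request-header dicts, as
    might be replayed from an archive of past web request headers.
    """
    base = Counter(r.get(header, "-") for r in baseline)
    atk = Counter(r.get(header, "-") for r in attack_window)
    suspects = []
    for value, n in atk.items():
        atk_share = n / sum(atk.values())
        base_share = base.get(value, 0) / max(sum(base.values()), 1)
        if atk_share > 10 * base_share:  # crude "10x over baseline" rule
            suspects.append(value)
    return suspects

# Synthetic traffic: the flood introduces a value never seen in the archive.
normal = [{"User-Agent": "Mozilla/5.0"}] * 90 + [{"User-Agent": "curl/7.6"}] * 10
flood = [{"User-Agent": "botnet-client"}] * 80 + [{"User-Agent": "Mozilla/5.0"}] * 20
print(header_skew(normal, flood))  # ['botnet-client']
```

The point is simply that without the archived baseline there is nothing to compare the flood against; the systematic difference only becomes visible because the historical data exists.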
This all sounds great, but there are some really major gotchas. First of all, the learning itself, the part of the process that everybody is so breathless about, is actually only a small part of the problem. The biggest issue is to find a suitable candidate for solving with machine learning. The second biggest issue is the logistics surrounding the deployment and management of models.
Finding a way that machine learning can contribute to a business is both much harder and much easier than it looks. It is much harder because many tasks don’t have clear enough definitions or good enough opportunities to collect the data needed to train a model. It is also much harder because the resulting model not only has to do what the old business process used to do (with acceptable accuracy and such), but it also has to improve the business outcomes by a large enough amount to justify building and maintaining the model. For instance, if you have a lemonade stand that brings in $2 each weekend during the summer, adding a machine learning model that improves sales by 20% isn’t going to be enough to pay for a data scientist. That is, unless the data scientist is your little sister who will work for a cut of the lemonade. That said, most reasonably large businesses have lots of places where machine learning can make a big difference … it is just that the best opportunities won’t be where you look first.
Dealing with the logistics of deploying and maintaining models is the other major under-recognised challenge of machine learning. Once you know a good niche in your business for a model, you can expect 90% or more of the effort of getting that model into production to be devoted to the logistics of handling data and deploying the model. The biggest reason for this is that a model isn’t really like an ordinary program. Because a model is learned from data to solve a problem that you probably don’t entirely understand (or else you would have just written a program to solve it), you probably also don’t know how to test the model except by comparing the results it produces in production. Essentially, you will know a better model when you see one, but you can’t say ahead of time which model it will be.
The result of this situation is that, except in the simplest of model applications, it is common to deploy multiple versions of a model. One model, the “champion”, is considered the best performing model. Others, the “challengers”, are either previous champions or are newer models that we think might be good enough to unseat the current champion. It is also common to keep an older champion around as a canary so that we can compare the outputs of different models over a long period of time to get an idea of trends and changes.
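A minimal sketch of this serving pattern (the model names and scoring functions below are invented for illustration, and a real system would score the models asynchronously): every request is evaluated by the champion, the challengers and the canary, but only the champion’s result is returned live, with all outputs logged for later comparison:

```python
# Champion/challenger serving sketch: all models see every request,
# but only the champion's answer is returned to the caller. The full
# set of outputs is logged so challengers (and an old-champion canary)
# can be compared against the champion offline, over time.

def make_scorer(name, fn):
    return {"name": name, "fn": fn}

def serve(request, champion, challengers, log):
    results = {m["name"]: m["fn"](request) for m in [champion] + challengers}
    log.append((request, results))    # kept for trend analysis and comparison
    return results[champion["name"]]  # only the champion answers live

champion = make_scorer("v3", lambda x: x * 2)       # current best model
challengers = [
    make_scorer("v4", lambda x: x * 2 + 1),         # candidate to unseat v3
    make_scorer("v1", lambda x: x),                 # old champion kept as canary
]

log = []
answer = serve(10, champion, challengers, log)
print(answer)       # 20 -- the champion's output
print(log[0][1])    # all three outputs, available for offline comparison
```

Promoting a challenger then becomes a matter of swapping which logged model is designated champion, with no change to the serving path itself.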
Maintaining all of these model versions can be challenging. One approach to this is detailed in the latest book that Ellen Friedman and I have written entitled Machine Learning Logistics. In this book we describe the rendezvous architecture that makes it relatively painless to manage, compare and monitor all of these models, deploying new models and retiring old ones all while maintaining zero downtime.
About the author
Ted Dunning is Chief Application Architect at MapR and has years of experience with machine learning and other big data solutions across a range of sectors. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems. He built fraud detection systems for ID Analytics (later purchased by LifeLock) and he has 24 patents issued to date plus a dozen pending. Ted has a PhD in computing science from the University of Sheffield and is active with open source projects as committer, PMC member, mentor and currently serving as a board member for the Apache Software Foundation. When he’s not doing data science, he plays guitar and mandolin. He also bought the beer at the first Hadoop user group meeting.
As consumers, we live in a digital ecosystem almost without even realising it. We interact with brand touchpoints many times a day, and in each of these our brand perception is shifting and evolving. More than this, rather than buying products from companies, we buy into their brand. Customers for any product or service are now expecting the personalised touch, and a tailor-made customer experience to be delivered through their screens.
By Ian Matthews, Data Evangelist at NGDATA.
The explosion of big data has therefore presented businesses with the opportunity to rethink and reshape their brand attitude and tone. However, it has also posed a more pragmatic challenge of how to maximise the potential hidden in the huge data sets they have been building over years. This is where customer analytics comes into play. Analytics can turn the massive amounts of data you already have on your customers into insights. Keep in mind that this is something different than traditional analytics or business intelligence. Whereas traditional analytics look into the past to predict trends, customer analytics look for what a customer will do as an individual.
The ability to leverage and maximise the value of this treasure trove is now more mission-critical than ever. Building relationships with customers now involves more frequent, but far shorter, interactions. Yet it is on these interactions that brands rely to build the lifeblood of any company – brand loyalty. Customers have more access points than ever before to their service provider. They can interact via a company’s website or app, through phone calls, emails, texts, and via social media and chat. These kinds of interactions take less time than an in-person visit, but they happen much more frequently. They are also more superficial than in-person interactions, making it even more important to have each one count.
Companies need to take advantage of all of these touchpoints and interaction moments to provide real value. The only way to do this is to be relevant in messaging and timing, and to ensure both fit the customer’s context.
But how does one become relevant? The first step is understanding who your customers are.
Let’s put it in Context
Customer data analytics are powerful. Companies today have access to layers upon layers of information about their customers, from location and age, to spending habits, to attrition tendencies, to product and communications preferences, to behavioural intelligence.
Consider a recent report from IDC which predicts that data creation will reach a total of 163 zettabytes by the year 2025, a ten-fold increase in worldwide data. Companies have so much data at their fingertips, but unless that information is channelled and used effectively, it’s useless.
Contextual relevance is the most important factor in extracting value from data. Understanding human behaviour without being able to place it in everyday context, such as where someone is or what they did in the last hour, will only allow a company to better understand what has happened in the past. There is little value in living in the past when a business’s value depends on what happens now. This is especially true when considering a customer’s experience. If the content ignores a customer’s context, the brand’s marketing efforts will be futile, the needs of its customers will go unmet and customer relationships will diminish significantly.
Real-time reactions
So how does a company use their data to deliver relevance? The answer boils down to having actionable insights available at your fingertips. In order to leverage knowledge on the customer context, companies must move to a process which combines long-term historical insights with up-to-the-minute processing of real-time behavioural data. Companies must then use the enormous amount of existing user data to constantly create connected user experiences; as compared to those created by technology companies that are dependent on web traffic activity and information alone. Productively utilising customer data allows a company to determine what a customer is most interested in, and to create a personalised experience where content, products and/or services are presented to customers before they even realise their needs. As customers’ expectations of their favourite brands increase, they have more of an affinity for those that offer more pertinent information, instruction and added convenience to their lives.
Companies also need to let marketers truly manage the conversation with the customer, both in- and outbound. Companies can do this by using their data to deliver the most relevant, timely and contextually-aware actions that match the needs of each and every individual customer. When a company or brand can execute on the insights gleaned, they’ll become transformative in the way they approach marketing.
Taking the customer journey full circle
By leveraging the context found in data, brands can provide customers with an optimised experience that is more relevant and consistent across all channels. Personalised service can be adapted to their present needs and interests, and tied to the complete customer context – their location, most recent purchases, complaints, etc.
For a company, the value of processing customer data with analytics software can be about optimum marketing results, with more precise targeting, more connected experiences and increased campaign efficiency. This gives companies the ability to acquire the right customers, provide them with excellent service and products, and be more focused on which customers to retain, at what cost. The value of customer data is beneficial for both the customer and the service provider as they truly begin to operationalise insights on customer data and behaviour.
But crucially, data in itself isn’t smart – it takes a level of processing to develop the insights that drive businesses forward. Companies can only utilise data to its full potential by harnessing and harmonising the assets at their disposal, unifying multiple entry formats, systems, and processes. Only once this has been achieved can the power of AI-powered programmes and platforms drive actionable insights to boost customer engagement. Creating end-to-end systems such as these mimics the customer journey in itself, turning dumb data into true connection between business and customer.
So, how is your business building a marketing segment of one that will secure a plethora of loyal customers for the future?
Despite the rapid rise of public cloud platforms and the various benefits they offer to enterprises, private infrastructure still remains an essential component of many IT strategies.
By Mark Baker, Field Product Manager at Canonical.
This is an idea that many enterprises have traditionally been hesitant to embrace, primarily due to the costs involved. However, recent research has shown that private infrastructure doesn’t have to require the vast investment it once did and can now actually be just as cost-efficient as the public cloud.
What’s more, running private data centres and clouds alongside public platforms enables companies to keep using the infrastructure that they have been investing in for years and which is customised to meet their specific needs. It can also provide reassurances around security and data protection, both of which are key considerations for every organisation in the era of GDPR and stringent compliance requirements.
Adopting a multi-cloud approach that uses a combination of public and private platforms means businesses can run workloads where they are best suited. What’s more, they can be much more flexible in responding to capacity needs and maximise the return on their cloud investment.
As a result, we’re now seeing more and more enterprises turn to multi-cloud strategies, giving them the flexibility and agility required to operate in today’s digital world.
However, such an approach only works if businesses are able to run a cost-efficient data centre themselves, which is not something every enterprise can claim.
Dirty data centres
The use of private infrastructure is nothing new. Many organisations have been running their own data centre for years, particularly in industries such as financial services that have strict regulatory requirements around the storage and use of sensitive customer data.
However, a closer look at the inner workings of these data centres often reveals a huge amount of overspend, accompanied by a tangled mess of machines and wires that very few people properly understand or know what to do with.
This is not unusual when a data centre has been operational for several years. We all know how cluttered computer wires can get in our personal lives and this issue is amplified exponentially when talking about a data centre containing hundreds, thousands or even hundreds of thousands of servers. After all, taking the time to regularly tidy everything up is a luxury few businesses can afford, especially if it requires some system downtime.
Internal knowledge gaps can also be a problem. There may only be a handful of people within the business who are fully aware of the data centre’s inner workings, which could present some serious issues if these people leave the business or aren’t on-hand to respond to an emergency.
Finally, businesses with dirty data centres are unlikely to be getting the best return on their investment. Infrastructure inefficiencies can add significant expense to data centre operations and internal processes, thereby impacting employee productivity and, ultimately, the bottom line.
The result is that many businesses are being tempted into ditching their private infrastructure in favour of public cloud platforms. That way, they can hand off the time-consuming and expensive maintenance jobs to someone else, leaving them to focus on growing their business. But is this really the best way forward?
Spring cleaning
So, data centres can get messy; that’s something no organisation can avoid. For some, this creates the feeling that the public cloud is the only real option and that their own data centres are not worth the hassle.
However, thinking this way would be a mistake. Rather than neglecting them, businesses should be focusing on cleaning up their private infrastructures and re-crafting a leaner, cleaner data centre.
Not only does this have the potential to significantly improve any enterprise’s return on investment, it can also bring private cloud economics back in line with the perceived cost efficiencies of using public cloud providers.
This is where automation plays a key role. Through automation, enterprises can simplify processes and eliminate time-consuming manual operations. The more day-to-day tasks can be automated, the more businesses can remove the administrative burden that has traditionally hampered many data centre operations.
This would free up IT teams to focus on making improvements and bringing value to the business, rather than having to spend time fighting fires and getting bogged down in the nitty-gritty of data centre management.
Other technologies such as machine learning or artificial intelligence can also be incorporated. This can provide insight into operational efficiencies, as well as enabling businesses to optimise their data centre’s performance and save money in the process.
Another option for businesses is to partner with providers and use their expertise to run certain parts of the data centre, which can go a long way towards streamlining internal operations.
Ultimately, cloud economics simply don’t point towards private data centres disappearing any time soon. Moving exclusively to the public cloud would be about as sensible as a business selling all its buildings and only renting a property whenever it needs somewhere new.
Whatever enterprises may think about their cloud infrastructure, it just doesn’t make financial sense for private data centres to go away; it does, however, make sense to clean them up.
That way, businesses can reap the rewards that come from running an efficient data centre and maximise the return on their cloud investment.
It is astonishing when we take a moment to ponder the prominence and influence of technology in our daily lives. Not only do we rely on it for everyday services, information, and connection, but we expect technology to deliver these experiences instantly. It’s become perfectly commonplace to “uber” a ride and watch in real time as the driver approaches, stream content on demand, or receive notification that someone is checking out your dating profile at the exact moment you’re checking out theirs.
By Priya Balakrishnan, senior director of product marketing at Redis Labs.
As ubiquitous as technology is today, the full impact of digital disruption has yet to be felt. Some industries—gaming, media, retail, commerce to name a few—are further along as a whole in their digital transformation journeys, while others—banking, government, healthcare, insurance—are a few steps behind.
But regardless of industry and distance travelled thus far, it’s become clear that providing innovative, responsive, real-time digital experiences requires a fundamental shift in application development and delivery. This shift has arrived in the form of microservices, an architectural approach that structures an application as a distributed collection of loosely coupled services.
In a microservices architecture, services are fine-grained and can be updated or scaled independently of one another, allowing enterprises to accelerate delivery, more efficiently reuse code, and achieve much higher levels of fault tolerance, to name a few of the benefits realized with the modular nature of microservices.
As organizations are re-platforming and readying their back-end systems for accelerated digital transformation, these benefits have clearly proved compelling. So much so, in fact, that a recent survey of development professionals shows that 86% expect microservices to be the default architecture within five years, with 60% already having microservices in pilot or production.1
But as is so often the case, new technologies that solve problems in one area introduce new problems in others. And microservices are no exception. In this article, I’d like to delve into some of the unique challenges—and strategies to overcome these challenges—that a distributed microservices architecture presents.
Challenge #1: The High Operational Cost of Polyglot Persistence
In a microservices architecture, instead of employing one single database across the application, each service can decide its own storage. The flexibility of being able to choose the most efficient data storage method for the task at hand (e.g. a key-value store for authentication, a graph database for fraud detection, a session store for customer tracking, and so on) is one of the big draws of microservices.
But the fragmented data management environment that results from running a myriad of different data storage products, known as polyglot persistence, dramatically increases the complexity of both operations and development. It gives rise to the need for subject matter experts with specialized skill sets for each of these data storage tools. For example, the developer must master new languages and APIs. Similarly, the database administrator must learn new backup and recovery utilities; research new optimization techniques; and plan for different layers of high availability, performance, and data durability.
Overcoming the high operational cost of polyglot persistence
An evolving trend in databases is the emergence of multi-model databases, which have the ability to store and process structurally different data (i.e. data with distinct models) on a single database platform. A multi-model database not only solves a variety of different use cases such as cache, session store, message broker, high speed transactions, and so on, it also extends itself to serve as a document store, key-value store, graph database, search engine, etc.
The extreme extensibility and flexibility of a multi-model database effectively removes the operational and development hurdles associated with managing numerous disparate data structures. Polyglot persistence—and its tremendous benefits—are still achieved, but at the much smaller operational cost of a single, elegant database platform.
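To make the idea concrete, here is a minimal, illustrative sketch (not any vendor’s actual API) of how a single platform can expose more than one data model, using a plain in-memory dictionary as a stand-in for the storage engine:

```python
# Illustrative sketch only: one storage engine exposing two data models.
# This in-memory stand-in is not a real multi-model database; it shows
# how key-value and document access can share a single platform instead
# of requiring separate specialised products.

class MultiModelStore:
    def __init__(self):
        self._data = {}  # single underlying store

    # --- key-value model (e.g. sessions, cache entries) ---
    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    # --- document model (nested records queried by field) ---
    def put_doc(self, key, doc):
        self._data[key] = doc

    def find_docs(self, field, value):
        return [k for k, v in self._data.items()
                if isinstance(v, dict) and v.get(field) == value]

store = MultiModelStore()
store.set("session:42", "token-abc")                      # key-value use case
store.put_doc("user:1", {"name": "Ada", "tier": "gold"})  # document use case
print(store.get("session:42"))          # token-abc
print(store.find_docs("tier", "gold"))  # ['user:1']
```

The operational point is that one team, one backup strategy and one set of skills cover every model, rather than one per storage product.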
Challenge #2: Sharing Persistent Data In A Distributed Architecture
When you correctly decompose a system into microservices, each microservice is deployed independently and in parallel with the other microservices. Transactions are spread across multiple services and the only way for these services to communicate with each other is through their published interfaces. It’s precisely this modular design that allows one microservice to be updated with little risk of compromising others, and, therefore, allows for accelerated delivery and improved stability.
However, the nature of this modular design means that it’s now the application, not the database layer, that has to do the heavy lifting when it comes to sharing and synchronizing data. And, while not ideal, if two or more microservices need to share persistent data from the same database, careful and complicated coordination is required to ensure consistent views of the data are maintained and low latency is preserved as the right snapshot of the data is communicated.
Effective methods of sharing persistent data from the same database
The best way to accomplish data transfer among microservices is through a message broker. Message brokers employ a publish-subscribe model for asynchronous communication between microservices. The publisher publishes that the state of the data has changed (along with the changes themselves) and the subscribers, which are in constant listening mode, update their internal states in response. This communications model is easy to use, memory efficient, and very effective at maintaining data consistency.
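A minimal in-process sketch of this pattern follows; the broker, topic name and services here are hypothetical stand-ins for a real message broker such as Redis Pub/Sub or Kafka:

```python
# Minimal publish-subscribe broker illustrating the pattern described
# above. In production the broker would be a separate system; this
# in-process sketch only shows the data flow between services.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # deliver the state change to every listening subscriber
        for callback in self._subscribers[topic]:
            callback(message)

# A subscribing microservice keeps its own local copy of customer state.
class BillingService:
    def __init__(self, broker):
        self.addresses = {}
        broker.subscribe("customer.updated", self.on_update)

    def on_update(self, msg):
        self.addresses[msg["id"]] = msg["address"]

broker = Broker()
billing = BillingService(broker)
# Another service publishes that the data has changed; billing converges.
broker.publish("customer.updated", {"id": 7, "address": "1 Main St"})
print(billing.addresses[7])  # 1 Main St
```

The publisher never calls the subscriber directly; each service stays behind its published interface, which is what keeps the coupling loose.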
Alternatively, if you have multiple instances of the same microservice, each with its own database in distributed data centers across geographies, then the services should share a consistent view of the data and converge to the same final state.
Solutions in existing database platforms range from strict consistency with limited high availability and throughput, such as the two-phase commit protocol (2PC) for distributed writes, to inadequate conflict resolution mechanisms based entirely on clock-driven last-writer-wins (LWW) policies. In a distributed microservices architecture, careful consideration must be given to how updates are propagated across services, and to how consistent views are managed when data appears in multiple places without strong consistency.
Strong eventual consistency guarantees that, for a given set of updates, all replicas of the data will eventually converge to the same consistent state without implementing a reconciliation mechanism. This design allows applications to work with multi-region, multi-master database deployments as if they were local. This approach is especially useful for long-lived business operations.
Causal consistency eliminates the overhead of a strongly consistent system while delivering its characteristics, by ensuring that all replicas of the data see updates in the exact order in which they were initiated.
Such tunable consistency is most efficiently achieved through active-active replication based on CRDTs (Conflict-free Replicated Data Types). In CRDT-based active-active replication, all database instances are available for read and write operations and are bidirectionally replicated. These databases handle data conflict resolution themselves, alleviating the need for application developers to tackle consistency issues, and, regardless of where the microservice resides, the application gets local latency even across geographically distributed workloads. A microservices architecture emphasizes transaction-less coordination between services, with explicit acceptance of eventual and causal consistency, and is therefore an ideal candidate for implementation on active-active database architectures.
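One of the simplest CRDTs, the grow-only counter, shows how replicas can converge without any reconciliation logic. This Python sketch is illustrative only and is not how any particular product implements CRDTs:

```python
# A grow-only counter (G-Counter), one of the simplest CRDTs. Each
# replica increments only its own slot; merging takes the per-replica
# maximum, so all replicas converge to the same state regardless of
# the order in which merges happen. No reconciliation code is needed.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.id] += 1

    def merge(self, other):
        # element-wise max is commutative, associative and idempotent
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

# Two geo-distributed replicas accept writes independently...
us, eu = GCounter(0, 2), GCounter(1, 2)
us.increment(); us.increment()   # 2 writes in the US region
eu.increment()                   # 1 write in the EU region

# ...and converge to the same value after bidirectional replication.
us.merge(eu); eu.merge(us)
print(us.value(), eu.value())  # 3 3
```

Because merge is commutative and idempotent, replicas can exchange state in any order, any number of times, and still agree: the essence of strong eventual consistency.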
Challenge #3: Managing container sprawl and data ephemerality
Microservices typically run in containerized environments. The portability offered by containers enables effortless relocation or replication of a microservice across heterogeneous platforms.
While a container in isolation can be quite easy to implement, applications conceivably consist of hundreds of containers spread across dozens of physical nodes and interdependent containerized services. Managing these large sprawls of containers creates tremendous operational complexity.
Furthermore, containers, by their very nature, aren’t built to persist the data inside them: state must be externalized. Some data is temporary in nature, while other data must be retained and made highly durable. Persistence in the age of microservices means persistence of both data and event streams, and this is critical because not all data is ephemeral.
Running stateful services with container orchestration
In order to efficiently deploy and manage containers and microservices at scale, many organizations are turning to container orchestration tools. Kubernetes, an open source platform for managing containerized applications, has become the de-facto standard for container orchestration. It includes many primitives that are critical to scaling your database with ease, ensuring high availability, and handling scheduling and load balancing as necessary.
With Kubernetes playing such a vital role in the deployment of microservices architectures, you’ll want to ensure that your database is able to take full advantage of the Kubernetes framework to not only preserve the decrease in deployment complexity that the orchestration tool brings, but to also ensure that important data remains persistent and durable in the very dynamic and fast-moving environment of containers.
Challenge #4: Avoiding Performance and Scale Problems in Distributed Environments
A microservices architecture implies a distributed system. Where you previously had a simple method call in a monolithic application, you now have many remote procedure calls (RPCs) with which to contend. As a result, performance and scale problems are more likely to arise from issues such as network latency, fault tolerance, message serialization, unreliable networks, and varying loads within application tiers.
Shared-Nothing, Linearly Scalable Architecture
If the right database(s) are chosen for the environment, the microservices architecture lends itself quite well to massive data processing. A database with a shared-nothing architecture inherently supports the high throughput and low latency requirements of large-scale data processing. Scale-out architecture, multi-model support, high performance, and reduced TCO should all be key considerations when designing a distributed database architecture. Linearly scaling databases that do not compromise performance reduce your chance of hitting the scalability limits and performance problems that can arise in a distributed architecture.
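As a rough illustration of the shared-nothing idea (production systems typically use consistent hashing or hash slots rather than plain modulo hashing), keys can be partitioned so that each node owns a disjoint slice of the data and nodes never contend for shared state:

```python
# Sketch of shared-nothing partitioning: each "node" is an independent
# store holding a disjoint slice of the key space, so adding nodes adds
# capacity without contention. Plain modulo hashing is used here only
# to keep the idea visible; real systems prefer consistent hashing.
import hashlib

class ShardedStore:
    def __init__(self, n_nodes):
        self.nodes = [{} for _ in range(n_nodes)]  # no shared state

    def _node_for(self, key):
        digest = hashlib.md5(key.encode()).hexdigest()
        return int(digest, 16) % len(self.nodes)

    def set(self, key, value):
        self.nodes[self._node_for(key)][key] = value

    def get(self, key):
        return self.nodes[self._node_for(key)].get(key)

store = ShardedStore(n_nodes=4)
for i in range(1000):
    store.set(f"user:{i}", i)

print(store.get("user:500"))          # 500
print([len(n) for n in store.nodes])  # keys spread across the nodes
```

Because every key maps deterministically to exactly one node, reads and writes involve no cross-node coordination, which is what allows throughput to scale linearly with the node count.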
Your Database Makes a Difference
No two organizations will take the same digital transformation journey. But regardless of route or final destination, one common stop along the way is the adoption of microservices. While microservices have proven key in closing the gap between business and IT, they are not without their challenges. As your organization transitions to a microservices architecture, or matures its existing microservices environment, it’s important to carefully consider database design and the ways in which it can overcome, or exacerbate, the challenges that microservices present. In-memory, multi-model, high performing databases such as Redis Enterprise that offer high availability and data durability should be integral components of your microservices architecture, and can mean the difference between a successful digital transformation journey and the need for a navigation reset.
There’s an ongoing battle between digitalisation and the drive to consistently achieve the all-important human touch in customer service. Chatbots, automation and artificial intelligence are solving the woes of customer service teams — and customers — worldwide. Or so it would seem.
By Howard Williams, who leads the activities of Parker Software’s global customer team.
In reality, 64% of consumers feel companies have lost touch with the human element of customer experience, and this while bots and AI are still in their early days of implementation. From baffled bots to automation absurdity, bad deployment of customer-facing tech can sabotage both the human touch and the overall customer experience.
So, how can you ensure that you’re using technology effectively, without losing the coveted human touch from your customer service? Howard Williams, marketing director at Parker Software, explores the importance of striking a harmony between digital and human.
The human touch in customer service refers to the understanding, flexibility and empathy with which customers want to be treated. Technology is efficient, but it often fails on all these subtle service fronts.
Up to 40% of customers explicitly want ‘better human service’. And implicitly, this means they want that human touch threaded across their customer service experience. Why? Because humans act (and buy) based on emotions.
When customers reach out to businesses, the emotional awareness of the agent makes all the difference. Only a human can pick up on emotional cues and adjust their messages accordingly – upselling to an invested customer or calming an angry one.
If your customers have a good experience, they’ll associate your business with a positive emotion, promoting trust. A bad or emotionless experience, meanwhile, does the opposite. Trust helps businesses cultivate customer loyalty, and forms a cornerstone of a great customer experience. Without trust, you won’t be needing a customer support team for long.
Unfortunately, the very nature of human service makes for unpredictability. Since human agents aren’t machines, they can’t work with machine-like consistency and efficiency. Nor can they remove emotion from the equation. (Emotion is, after all, how customers discern the human touch in a service interaction.)
Humans are fallible on the customer service front. They slip up, they become frustrated, and they can tire of tedious support queries and repetitive answers. This, then, is where digitalisation finds its place.
Technology that can handle routine customer service chores is growing ever more popular. In fact, it’s been estimated that by 2020, 85% of customer service interactions will be handled without a human. The digitalisation of customer service is gaining ground.
Chatbots, AI and automation are the trio establishing themselves into the customer service scene. A huge bulk of customer service can be handled between the big three disruptors.
Automation, for example, is routing customers to the right place. It’s collecting and processing their data, taking over admin tasks, issuing triggered comms, and generally automating the processes that come with keeping customers onboard. Meanwhile, AI is getting to know your customers. It’s analysing their wants and needs, and powering new levels of insight. As for chatbots, they’re the toast of the tech town. Bots are being used to interact with your customers in place of a human agent – and they’re good at it.
All these technologies, in fact, are great at what they do. For agents, smart customer service tech means less wasted time. And for customers, it means getting prompt, personalised service, no matter when or where.
The problem is, technology isn’t a catch-all customer service replacement. Software designed to deal with humans, such as chatbots, simply cannot offer the nuances of human touch. Even smarter machine-learning chatbots can struggle.
They might be capable of replicating friendliness. They can be programmed to be respectful and they don’t get provoked by angry customers. But when the respect isn’t enough, the lack of flexibility and empathy will upset more than one apple cart.
Let’s start with flexibility. Customer service flexibility is important for your customers — it’s the ability of your agents to ‘bend over backwards’ to help them. It’s the way your human team can think around problems and hurdles, and offer acceptable compromises. Unfortunately, flexibility isn’t something created by following set rules, like those that automation and rule-based chatbots need to function.
Then, there’s empathy. There have been developments in AI that help chatbots understand emotion, seen in the emergence of sentiment analysis. However, while chatbots are beginning to learn how to recognise upset, angry, happy and satisfied customers, they can’t yet respond appropriately.
A chatbot might be able to offer an upset customer a robotic apology, and some form of compensation, but they can’t give emotional understanding. Instead of appeasing the customer, this can come across as dismissive of the customer’s feelings and situation, making for a bad customer experience. While technology can help monitor the overall tone of customer interactions, humans still need to provide the empathy.
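As a purely hypothetical sketch of the escalation logic described above (real deployments use trained sentiment models rather than keyword lists, and the word list here is invented), a bot might route conversations on a crude sentiment check:

```python
# Hypothetical sketch: a naive keyword-based sentiment check used to
# decide when a chatbot should hand a conversation to a human agent.
# The keyword list and routing rule are illustrative assumptions; the
# point is the routing decision, not the scoring method.
import re

NEGATIVE = {"angry", "terrible", "refund", "cancel", "useless"}

def route(message):
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & NEGATIVE:
        return "human"   # upset customer: escalate for empathy
    return "bot"         # routine query: the bot can handle it

print(route("How do I reset my password?"))       # bot
print(route("This is useless, I want a refund"))  # human
```

The bot still absorbs the routine volume; the human touch is reserved for the interactions where it actually matters.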
All this doesn’t mean that your chatbot should be scrapped, your AI unplugged and your automation software uninstalled. After all, they provide great benefits to your customers and your team alike. The key is in striking a balance between digitalisation and the human touch.
This balance is about recognising one key fact: customer service technologies are tools for your human team, not replacements. It might be tempting to opt for your chatbot over Clive from customer service or Tracy from tech support, but it will hurt your service in the long run.
There’s no need for digitalisation and exciting new tech to exterminate the human touch. You can make use of the benefits that technology offers, without upsetting the customer service apple cart.
With a focus on the consumer, their experience, and how it can be continually improved.
Exasol has improved the democratization of data at Piedmont, opening greater access to more decision makers who have used data to improve the running of their healthcare practice.
Piedmont is a not-for-profit healthcare provider consisting of 8 hospitals and 1,674 beds based around the Atlanta Metro area. In 2016 alone it served over 2 million patients, and it plans to increase its footprint further in 2018 with the addition of 3 more hospitals to the network. The business intelligence and data architecture teams comprise 24 people, who are responsible for compiling and disseminating the healthcare provider’s enormous amount of raw and highly sensitive data to its planning and analysis groups and relevant hospital decision makers.
The Initial Situation
Healthcare providers store a colossal amount of data in the form of decades of patient information, gathered before the real birth of data analytics, and before the concept of “big data” even existed. Piedmont alone had over 22,000 fields to analyze, gathered from around 30 different published data sources. Multiplied by the number of records available, Piedmont had to extract value from over 555 billion data points.
Piedmont wanted to take its processing and understanding of its available data to the next level. But an aggressive growth strategy had greatly increased the size of the business, and Piedmont ran into significant scalability problems with its old SQL Server system, resulting in mounting security, speed and cost issues when analyzing its data.
Piedmont outlined its requirements for a replacement with a set of guiding principles:
Data should be democratized and disseminated within the organization
Data should be immediately available to decision makers
Reports containing data from internal systems should be no more than 24 hours old
Visualizations should seamlessly enable the end user to explore data from summary level detail
Reports should be consolidated at a single portal using a uniform presentation layer
The Solution
After over eight years of using Tableau for all its business intelligence purposes, Piedmont selected Exasol to be the beating heart of all its data analytics needs, while keeping Tableau’s front-end user interface for its visualizations.
Piedmont’s topology consisted of EPSi, healthcare financial decision support software that acts as an aggregator for all healthcare systems. It also included Epic, an electronic healthcare record system which was the source of a huge amount of data. Exasol was used to replace the data warehouse component, both the core repository and also systems aligned to Piedmont clinic (a sub organization of Piedmont) and the Piedmont Physicians group.
Piedmont set itself the goal of inflicting zero harm from the time patients entered its facilities to the time they left, seeking to protect them from any further infections and to reduce the recidivism rate to as close to 0% as possible - a hugely audacious and ambitious goal for a healthcare provider.
Executive leadership puts analytics at the heart of healthcare with Exasol
Having Piedmont Healthcare’s executive leadership focused on data was the first step towards enacting a slew of data-centric projects, which have now permeated down through the organization. Now, anyone within the organization can get access to see certain data through Tableau Server, and 311 members of staff are actively using Tableau Desktop - a far larger group than the original business intelligence and data architecture teams of 24 people. With this democratization of data has also come an increased allowance for staff autonomy when it comes to using data to improve processes.
Infection Prevention with Exasol
The data architecture team has now progressed onto more ambitious projects using Exasol. Chief among them is the infection prevention dashboard. This involved looking for lab results showing infections and using data to work out the root cause. This illustrated whether an infection was something that was acquired in the hospital and how.
The issue with the previous system was that the data was poorly structured and spread across six different tables in the database. It required intensive processing to extract and analyze the data, which meant the results were not ready until 1.30pm the next day. The upshot was that staff were working on data already a day old - an unacceptable situation in infection prevention. However, with Exasol powering Tableau, hospital staff at all levels could review yesterday’s information from 8.30 in the morning.
Increasing the speed at which questions could be answered
The broadest change brought by the increased speed of analytics was in productivity, specifically in the speed at which questions could be answered. Previously, conducting meaningful macro-analysis required staff to return to their desks and spend hours pulling together information. With the speed of Exasol’s in-memory analytic database, questions raised in meetings can now be answered immediately, and that time repurposed to create solutions for emerging problems, delivering better healthcare to patients. Even with constantly growing patient numbers, data can be sourced and utilized in a very short time.
Mark Jackson, Director of BI at Piedmont Healthcare: “By placing Exasol at the beating heart of our analytics we have seen significant improvements, not only to the organization’s bottom line, but to the satisfaction and safe delivery of our services to patients. Internally, it is also helping Piedmont staff deliver a better service, increase productivity and this results in a happier working environment. For me personally, it has taken a lot of stress out of the day to day. When we need answers, we can get them in near-real time. We can trust in Exasol to deliver results, at speed and are confident in its ability to scale for our future. Working life has much-improved since we implemented Exasol.”
Conclusion
Exasol’s implementation has permeated into many different areas of Piedmont’s business, making it a truly data-led organization. Hospital care quality, operation outcomes and patient satisfaction have all improved as a result of transforming into a data-driven healthcare provider. Previous data models maxed out at a size of 100 million records. Today, with Exasol, there are data models with over 2 billion records immediately available for analysis. Exasol can scale and grow in step with demand, and it is ready to support Piedmont Healthcare’s ambitious growth strategy for years to come.
He-Man may have been a master of the universe, but we mere mortals may have to set our goals slightly lower. For many network managers, the network is their own mini universe, but unfortunately, they often don’t have the control or visibility over it that they would like.
By Chris Gilmour, Technical Practice Lead, Axians.
Software-defined WAN (SD-WAN) has been identified by some as the solution to this problem, solving businesses’ digital transformation and legacy infrastructure woes. But it is important that this, like any hyped B2B technology, doesn’t attempt to be a silver bullet solution to these complex issues. Prince Adam used the “Power of Grayskull” to transform into He-Man, but sadly the business world is a bit more nuanced.
For me, SD-WAN is really about the agility and flexibility that application level control of the network enables. Today’s IT managers demand control and visibility for good reason. With many businesses embarking on digital transformation strategies, it is crucial to have real-time actionable insight into network performance, and the adaptability to support the future demands that users will place on the network. Without this, poor performance or even outages could arise and you risk losing the customer loyalty that you’ve worked so hard to build up.
The race towards digital transformation
He-Man was known for his great speed and strength, but he would defend with his intellect and strategy. Businesses across all sectors are committing to fast-developing digital strategies to enable a more engaging and positive customer experience, but need to support these broad objectives with the finer details that will enable long term success.
Perhaps the most crucial of all these details is the network. In the case of many network managers, legacy technology is holding back their organisation from their digital transformation goals. Yes, a lack of skills and a resistance to change at board level can also lead to a digital transformation roadblock, but so many businesses are embarking on bold digital strategies with a network that just isn’t fit for the modern day.
Connecting your application “islands”
By simply embarking on a digital strategy without the right network to support it, you’re essentially building intelligent “islands” that communicate with each other in a very rudimentary manner. To work towards real success in digital transformation, the network must be a central component of the whole process, not just an afterthought. Replacing legacy equipment and improving your mastery of the network with next-generation technologies such as SD-WAN can help businesses achieve their digital strategy goals.
At a basic level, digital transformation creates an entirely new set of applications for the network to deal with. If the network cannot differentiate between those services, then your business is essentially being held back by part of your infrastructure. Deploying application-centric networking such as SD-WAN allows you to, in effect, tune your network, allowing the priority traffic to behave the way it needs to.
Master of the universe (or maybe just your network…)
Many IT managers want more control and visibility over their networks. Currently this visibility often solely extends to a report at the end of the month detailing how much bandwidth you’ve used, or what applications have been used the most. However, this is retrospective and doesn’t give enough insight to allow you to tune your network as you go. Next-generation WAN enables reporting that tracks and analyses end-to-end application performance in real time, so the network can quickly react to any changes.
Customer complaints about slow application performance are a major and frequent problem, which means that major network improvements must be made if businesses want to improve customer experience. SD-WAN works in conjunction with next-generation applications to help them operate properly, so that the business receives the full benefit of adopting those applications in the first place.
The power of a software-defined future
It’s also becoming increasingly clear that the early adopters of artificial intelligence (AI) and machine learning (ML) will dominate the business landscape in years to come. Here, there are intriguing use cases for processing the masses of data points produced across the network and made visible through the analytics engines that many SD-WAN solutions have built in.
As a result of the application visibility and analytics capabilities of SD-WAN, some businesses are now collecting millions of pieces of information from every part of the network. When harnessed and analysed correctly, this information is invaluable when it comes to understanding the impact of new services, and monitoring the usage and performance of applications, allowing greater insight and leading the way to future automation.
Galactic Guardians: The importance of a trusted partner
A common catalyst for implementing SD-WAN is a desire to cut costs within the business. However, to view it simply as a money saving exercise is to miss the point slightly. We find that businesses that are becoming increasingly application-centric are successfully embracing SD-WAN because of the control and flexibility it gives their network.
But the transition can’t be done alone. Migrating existing services and networks from a legacy provider to an SD-WAN service requires a great deal of project management and consultancy to make it a success. As part of the process, it is crucial to plan out properly what you want from it as a business, and to track this with sensible metrics. A slightly inferior solution, implemented properly, will serve your business better than a technically superior solution implemented poorly. Selecting the right partner is therefore all-important.
When deployed correctly and for the right reasons, SD-WAN can be your guide towards digital transformation, and in the shorter term, control over your network. However, with great power comes great responsibility; use it wisely! And watch out for Skeletor...
How confident are you that your employees are storing files on company-approved platforms? Are you sure they store each file where they should? If your answer is a resounding, “yes”, it’s time to rethink how realistic your assessment is. Employees might be trained to follow standard policy, but sooner or later natural human behaviour inevitably surfaces.
By Richard Agnew, EMEA North VP at Code42.
With the onset of digital transformation and an increase in flexible working across the globe, cloud collaboration platforms (CCPs) have become a sophisticated, yet common tool for the majority of workplaces. After all, the advantages of standardising on Google Drive, Microsoft OneDrive, Box or Dropbox include higher productivity, better workplace collaboration and improved customer services, to name but a few. In an increasingly connected age, the adoption of CCPs isn’t showing any signs of slowing. According to 451 Research, 60 percent of enterprises plan to shift IT off-premise by 2019, and Research and Markets revealed that the enterprise collaboration market size is estimated to grow from $26.7 billion in 2016 to $49.5 billion by 2021.
In parallel to digitalisation, cyber security is a growing concern among UK businesses — especially since the General Data Protection Regulation (GDPR) has gone into effect. Ironically, the source of companies’ concerns over cyber security is often the very platform that is designed to be a safe harbour for organisations’ data assets and intellectual property: CCPs. Since these platforms are often used to share information between employees and departments, organisations face risks of data leaks. Therefore, companies should monitor the effectiveness of their data security policies—and have a backup plan for the inevitability that humans won’t always follow policies.
Employee habits
Needless to say, employees are the backbone of any organisation. But their human unpredictability also makes them one of an organisation’s biggest vulnerabilities. For example, employees tend to neglect instructions that are contrary to established working habits. This type of negligence often leads to data leaks.
To fully comprehend real-life workforce habits, Code42, in partnership with enterprise customers of more than 500 employees, examined data storage behaviour across a sample size of 1,200 laptops. The study found that only 23 percent of the data employees generate and store on laptops and desktops is saved in tools like Google Drive, Microsoft OneDrive, Box or Dropbox. This leaves 77 percent of data living exclusively on employee endpoints. When measured by file count, a mere 1 percent of all files stored on endpoints is backed up in CCPs.
Code42’s study further broke down employee work habits by common user types:
● Travelers: The 30 percent of employees who store less than 25 percent of their files in CCPs.
● Innovators: The 40 percent of employees who keep 25 to 50 percent of their files in CCPs.
● Collaborators: The 20 percent of employees who have 50 to 75 percent of their files in CCPs.
● Adopters: The 10 percent of employees who have more than 75 percent of their files in CCPs.
Relying on employees to use CCPs as a security strategy clearly isn’t enough. To guarantee the security of any company’s IP, IT and business leaders must have visibility into endpoint devices, such as laptops and desktops. Only with this visibility can organisations effectively protect their data and recover from a data leak event.
In addition, IT leaders mustn’t fall into the trap of believing that nothing of value worth protecting lives on employee laptops. The study found that 36 percent of the files employees keep on their laptops and desktops include IP, such as programming files, images, spreadsheets, zip files, presentations, and audio and video files.
Conclusion: Mind your CCPs and Qs
Remote workplaces are here to stay, thanks in no small part to CCPs. While CCPs offer some data security for the employees who use them, IT departments must find solutions that allow for the realities of human behaviour if they wish to mitigate potential data leaks. IT departments must have visibility over all of their organisations’ data, including what lives on employee endpoints, if they are to have any chance of effectively navigating today’s problematic cyber landscape.
We used to think that communication was the ultimate goal for any information technology. First with phones, then the Internet and social media, and finally unified communications at the turn of this century – every advance in communication was heralded as a huge leap forward in human interaction and business efficiency.
By Rufus Grig, CTO at Maintel.
But just because tech makes it easy to talk to each other, it doesn’t mean that we’re any smarter or more efficient. If social media has shown us anything, it is that instant communication does not necessarily result in greater insight or better results – in fact, it is more often full of “sound and fury, signifying nothing”.
We have seen the benefits of remote meeting tools such as videoconferencing – saving travel time and speeding up decisions – without really changing the business model; all we were doing was holding traditional meetings, albeit on screens rather than face-to-face. Useful, but not really a transformational use of technology.
Strangely enough, it is our newest employee cohort, the millennials, who have shown us the way towards meaningful communication. A generation for whom actually making calls on their mobile phones is a pretty unusual activity is championing messaging and sharing technologies, bringing us a step closer to true collaboration among teams. These tools create a powerful argument that we should set our sights beyond the simple replacement of traditional meetings, and instead start talking about “unified collaboration”.
The limits of traditional communications
Having reigned supreme as the only real-time communications tool in town for more than a century, the telephone has seen the use of voice calls decline in real terms for some years, on both fixed and mobile devices. Etiquette is changing around the use of the phone. In many organisations, it is now the done thing to send somebody an instant message or email before calling – “are you free for a quick call on this?” We are increasingly putting calls in our diaries and shared calendars, whereas a couple of years ago we would simply have picked up the phone. One Gartner analyst recently reported an expectation that “unplanned” calls – i.e. totally unannounced calls – will drop from around 50% of business calls today to a mere 10% by 2021. While that may be a little aggressive, there is no doubt that the pattern is shifting.
Unified communications brought great advances over traditional methods of communication, including some collaboration functions, but has suffered from the huge diversity of different communications and messaging platforms in use among businesses. In an era where every business, every generation and almost every worker has their preferred messaging platform, the challenge for technology providers – and IT departments – was how to bring these together into a true system of unified collaboration.
The Advance of the Application
Technology has revolutionised every form of communication. Spiceworks found that 51 per cent of respondents believe collaborative chat apps are critical to the success of their organisation, while a Frost and Sullivan report found that 51 per cent of employees use mandated apps for business on their phones, up from 27 per cent in 2011.
There is no question that Slack, Skype for Business and Spark are changing the way in which we work. They facilitate an active flow of information between colleagues that one-way communication simply cannot match. Their adoption represents a world far removed from the workplace reliant on voice, email and fax that came before.
The Data Protection Challenge
As 21st-century technology, these tools are delivered from the cloud. This has the obvious advantages of easy deployment and simple access, enables collaboration across organisational and national boundaries, and allows businesses to pilot and experiment with solutions – even to use them on a workgroup-by-workgroup basis.
However, that does throw up a couple of interesting data protection issues.
Firstly, if you are sharing documents and messaging you are transferring data, so where is that data being stored? The issue of data sovereignty can be very important in some markets and with some forms of data, so this needs to be understood and factored in when selecting products.
Secondly, who are you sharing data with? And what data are you sharing? All organisations have confidential information and much of it will be personal data and subject to data protection regulation – including the fast-looming GDPR.
With the increasing availability of freemium services, this is further compounded by some people taking a BYO approach to collaboration services. CIOs are having to get to grips with this new world – providing services that are appropriate, and setting out usage guidelines for employees using third-party tools or tools licensed to partner organisations.
Moving to Unified Collaboration
Collaboration is here to stay – its benefits greatly outweigh any challenges it creates. But we are in a brave new world. The traditional unified communications vendors are taking steps into this world of collaboration – and may have excellent offerings. They enable full control of data, permissions, security and auditing, and so offer the safest way to deploy collaboration tools. However, these monolithic vendors can’t always keep up with the pace of change, and there will always be times when an agile start-up comes up with the tool that can really help an organisation. Thus, the ultimate panacea of a single unified collaboration system that solves every problem may forever elude us.
What we can be absolutely sure of, however, is that the blurring between communications and collaboration, and between meetings and work, is here to stay.
More and more businesses today are looking to extract insight from the data they have access to in as near real time as possible. They are looking to rapidly gain the insight and intelligence they need to drive optimum customer experiences, faster time to value and a competitive edge.
By Jeff Fried, Director, Product Management - Data Platforms, InterSystems.
These are elusive goals for many businesses though. Companies need to be able to blend analytical queries and transactional data. Otherwise, they are likely to be basing decisions on data that is anywhere from ten minutes to two hours or even days old, making it all but impossible to capitalise on many real-time and near real-time business opportunities.
It is hardly surprising, then, that, according to Choosing a DBMS to Address the Challenges of the Third Platform, a May 2017 IDC InfoBrief sponsored by InterSystems, 76 per cent of respondents reported that the inability to analyse current data inhibits their ability to take advantage of business opportunities.
Bringing Together the Transactional and the Analytic
In assessing how a combination of transactional and analytic data processing can help here, it makes sense to first consider their respective roles within the enterprise. One of their distinguishing features today is simply that they are separate. Indeed, transactions and analytics typically form two distinct data processing arms within the enterprise.
Transactional systems process records relating to regular operations conducted across the entire business and are designed for write speed, not query speed. Analytical systems process data drawn from multiple transactional databases and are designed for query speed, providing organisations with insights based on specific questions.
Data often needs to move from transactional systems to analytics, increasing complexity and latency that slows the business down and can lead to missed opportunities. Transactional data processing is often limited in its ability to quickly perform analytic queries, while analytics data processing is often too slow to deliver valuable real-time insights. A transactional approach drives business operations. Analytics make the data actionable and bring out its value, empowering organisations to identify connections across multiple transactional databases.
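The distinction can be made concrete with a minimal sketch: transactional writes and an analytical aggregate running against the same live store, with no ETL step and therefore no staleness. An in-memory SQLite database is used here purely for illustration; it is not a production HTAP engine, and the table and data are invented.

```python
# Minimal sketch of combining transactional writes with an
# analytical query on the same store. SQLite is used only to
# illustrate the idea; the schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

# Transactional side: individual writes as business events occur.
with conn:
    conn.execute("INSERT INTO orders VALUES ('alice', 120.0)")
    conn.execute("INSERT INTO orders VALUES ('bob', 80.0)")
    conn.execute("INSERT INTO orders VALUES ('alice', 50.0)")

# Analytical side: an aggregate query over the live data -
# no ETL pipeline, so the answer reflects the current state.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 170.0), ('bob', 80.0)]
```

When the two sides live in separate systems, the analytical answer can only ever be as fresh as the last ETL run, which is exactly the latency problem described above.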
According to the IDC study, 86.5 percent of organisations use ETL to move at least 25 percent of all enterprise data between transactional and analytical systems, and nearly two-thirds (63.9 percent) of data moved via ETL is at least five days old by the time it reaches an analytics database. This is a critical obstacle for most organisations that want to deliver the right customer experience at the right moment.
Businesses do, however, also face other hurdles on top of this. Typically, they will need to support more data types (structured, unstructured, etc.), larger data sets and an accelerated path from analysis to action introduced by mobile users, IoT/sensor data, and fickle / constantly emerging trends.
This situation is not helped either by the disparate range of data management tools they typically use today. Companies often utilise several different database systems, for example, which means the data is saved and stored in a wide range of different places and formats. Each database is unique to the types of data and types of workload it specifically manages.
More than 60 percent of respondents to the recent IDC survey reported having more than five analytical databases, and more than 30 percent have more than ten. The majority of respondents have more than five production transactional databases, while 25 percent have more than ten.
Companies now have the challenge of harnessing that data and determining how to extract value from it by applying it to business operations – making sense of all the data by tying all the sources to an individual customer, patient, citizen, investor, etc.
Finding a Solution: What is a Modern Data Platform?
By combining analytic and transactional data processing, including a range of data types in support of digital transformation, a modern data platform lies at the heart of a data-driven business. A data management platform is a centralised computing system for collecting, integrating, managing and analysing large sets of structured and unstructured data from disparate sources at massive scale (distributed as well as single server) and can support multiple use case scenarios and workloads (transaction processing and analytics) with native data and application interoperability.
There are three central pillars to a modern data platform.
Why do companies need a consolidated data platform? Because disparate systems create a disconnect between insight and action, resulting in a delay in the feedback loop that drives the ultimate customer experience. A consolidated data platform helps companies achieve their core (IT-related) business objectives, such as simplifying architecture, reducing cost, speeding innovation and streamlining operations.
Connecting Insight and Action
Managing multiple databases is complex and expensive, and introduces latency issues. As data itself grows more complex, deploying a unique database and data integration system for each business need creates unnecessary complexity, and tactical decisions cannot be supported while data remains segregated into transactional and analytical databases. Most users need support for a broad variety of data types that goes well beyond what native relational database management systems (RDBMS) provide. Lastly, the rising cost of database management is prohibitive for many organisations: maintaining many databases leads to excessive cost and complexity in the data centre.
Delivering Ultimate Data-driven Experiences
How do you define the ultimate experience for your customers, partners and stakeholders? What data do you need to ensure all the information can be accessed for both guiding a decision and supporting and executing a decision (transaction)?
To answer these questions, you must first identify your organisation’s data infrastructure needs. This includes understanding where your data resides, how often and by what means it is accessed, and how it is being analysed.
Companies no longer need to choose between having real-time access to data and their preferred method of analysing data. They can access data how and when they want and have the ability to make the data actionable in real-time. Modern data platforms help make your data infrastructure work for your business.
Timely access to data can have a major impact on company operations and customer experience. As more and more data is generated by companies and their customers, it is important to be able to easily access and analyse this information and use it to inform business decisions. A modern data platform simultaneously supports analytical and transactional decisions and streamlines data infrastructure costs, driving more intelligent insights across the entire organisation.
Within organisations it seems as though the DevOps culture receives the majority of attention and the handoff from development to operations, which is arguably the most important cog in the machine, has been largely ignored. To ensure both sides receive the correct amount of attention, organisations should rely on the right release management to bridge the gap between the two. Along with the correct approach, the tools, teams and processes are at the core of what needs to be strengthened to help build it.
By Bob Davis, CMO at Plutora.
Development and Operations: what’s the difference?
Development and Operations teams have been divided for decades. This chasm exists for many reasons, such as the differing focuses and missions of each. The development team’s mission is to create value by producing change in response to both customer and market requirements, while the operations team’s focus is creating value by keeping applications and services running reliably so that customers can depend on them.
Unfortunately, with traditional software development and delivery methods, a higher change frequency often directly correlates to a higher risk to application reliability.
Both development and operations teams often vary widely in their process and approach to software releases. Development teams tend to work proactively and are trying to become more agile, moving to a task-oriented process with automation in the testing and building of applications. Operations groups, by comparison, are reactive: they focus on service management and incident response. Teams supporting operations tend to use organisational models closely aligned with IT Service Management (ITSM) and the Information Technology Infrastructure Library (ITIL). A site operations team at a high-profile website will use the term “customer” to refer to internal end users creating incidents and change requests in a system like BMC Remedy or ServiceNow.
Visibility into the performance of each build is not a main priority. Developers create software systems, and they often have an entirely different model of business from the operations teams. Some development groups working on slow-moving, back-office systems may be very amenable to the service management model of software delivery.
Other groups, who are focused on fast-moving, highly technical systems, are not often aligned with the ITSM and ITIL models. When it comes to tools, developers are more at home managing development processes with issue tracking tools like Atlassian’s JIRA and configuration management tools like Puppet or Chef.
In short, developers focus their efforts on ‘pushing out code.’ Changes and enhancements to production systems and attendant failures in providing service are seen as part of this rush to get the latest and greatest code out to consumers. Developers thus tend to be agile, proactive, and reliant on self-service for accomplishing their goals.
Aiming for balance between business needs and accountability makes operations teams more cost conscious, process-driven, reactive, and wary of sudden and rapid change.
Why DevOps needs effective release management
For work to become less siloed, organisations can incrementally embrace DevOps practices around release management to improve the quality and frequency of their releases. Even better, as these practices are gradually adopted, the resulting improvements will help build the business case for wider change across the organisation.
Release management is the basis for a smooth transition of applications from code completion to deployment into live production environments. In other words, release management is central to DevOps. Each release involves various touchpoints, software tools, players and workflows, and the complexity of modern-day application release to production makes it a fragile process that needs to be repeatable, error-free and agile. That’s where release management comes in, acting as the glue between Dev and Ops. With organisations adopting multiple methodologies, such as waterfall and agile, release management ceases to be an option and becomes critical.
Effective release management directs the flow of change throughout various pre-production environments such as planning, designing and testing, to eventually culminate in a successful deployment into the production IT environment in the least disruptive manner.
Successful release management enables a release to smoothly transition from DevOps to the more manageable ITSM approach that is expected by larger companies. Release management is vital, as is creating a culture of good practice within the organisation. The DevOps gap can be crossed without issue when good release management tools are implemented to help organisations balance development and operations.
In today’s turbulent retail market, the ability to accurately predict and forecast has never been more valuable, allowing retailers to staff and stock efficiently and effectively. However, with traditional forecasting methods often displaying significant error rates, many retailers are left with too few staff and insufficient stock to meet spikes in demand. This inevitably results in unfulfilled sales opportunities, increased costs and abandoned shopping baskets.
Too many retailers have previously based their planning efforts on inaccurate forecast data. That’s why we are seeing a growing trend for retailers to start looking at artificial intelligence (AI) and machine learning in order to improve their ability to accurately forecast demand. With computing and data processing power ramping up, the smarter retailers are already scoping out the benefits that more accurate forecasting could have on their sales and profitability.
Key to the success of any forecasting system is the data fed into it. At a base level, this will be historical data, such as footfall coming into the store on a quarter-hourly, half-hourly or hourly basis, as a baseline indicator of demand.
The latest forecasting solutions in this area look to categorise this kind of activity against the day of the year, so businesses can start to look for patterns using machine learning. Is it a weekday, a weekend, a bank holiday, or a Friday or Saturday in pay week, for example? Time slots can also be overlaid, as can historical weather patterns.
All these existing metrics will be used to predict likely future demand, but future events can also be brought in to influence retail plans. These could be long-range weather forecasts, or future events on a national or global scale, like the World Cup, or at a local level, like an annual music festival. The model, supplemented by consultancy from expert data scientists, can help retailers expect what would otherwise be the unexpected and plan much better for likely future demand.
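A toy baseline makes the calendar-feature approach concrete: bucket historical footfall by features such as weekday and pay week, then predict from the matching bucket. The data and feature set here are invented; a real system would add weather, holidays and local events, and would use a learned model rather than simple bucket averages.

```python
# Toy baseline for calendar-feature demand forecasting: average
# historical footfall per (weekday, payday-week) bucket, then
# predict future demand from the same features. All numbers are
# invented for illustration.
from collections import defaultdict

history = [
    # (weekday, payday_week, footfall)
    ("fri", True, 980), ("fri", True, 1020),
    ("fri", False, 700), ("sat", False, 1100),
]

buckets = defaultdict(list)
for weekday, payday, footfall in history:
    buckets[(weekday, payday)].append(footfall)

def forecast(weekday, payday_week):
    """Predict footfall as the mean of the matching feature bucket."""
    observations = buckets[(weekday, payday_week)]
    return sum(observations) / len(observations)

print(forecast("fri", True))   # 1000.0
```

Even this crude baseline captures the point made above: a Friday in pay week is predicted very differently from an ordinary Friday.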
The benefit is that the retailer can save money by reducing staff levels when demand is likely to be low but, more importantly, capitalise on opportunities to drive sales by bringing in more staff where demand is predicted to be high.
Benefits extend beyond the shop floor into logistics and the supply chain of course. On the product side, forecasting looks at different categories, highlighting how demand for each may change, how it can be seasonal, for example, or what product lines are working well in that category.
All this information should be rapidly taken up the supply chain to allow retailers to quickly engage with their suppliers, manufacturers, or in the case of groceries, even the farmers. After all, the more awareness retailers can offer the supply chain as to what demand is likely to be, the more likely they are to have product ready for those retailers and increase profits all-round.
Labour planning in warehouses and distribution centres is also linked in this chain and is another factor that is considered. Planning and forecasting solutions, coupled with latest workforce management technology, are enabling smarter planning of labour and stock levels in distribution centres.
Machine learning will also take warehouse management solutions to the next level. Currently, such systems are set up so that items are not simply stored alphabetically or by product type. Going forward, they will be expected to look for more complex patterns in how items are ordered together, storing items according to how often they are needed and whether they are likely to be picked together. By placing these items together within the warehouse, the end goal is to speed up the process by reducing the average distance each picker needs to walk to pick each item.
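The co-pick pattern mining described above can be sketched simply: count how often pairs of items appear in the same order, and treat the strongest pairs as candidates for adjacent storage slots. The order data below is invented for illustration; a real system would mine millions of orders and feed the results into slotting optimisation.

```python
# Sketch of co-pick affinity mining for warehouse slotting:
# count item pairs that occur in the same order, then co-locate
# the most frequent pairs. Orders are invented example data.
from collections import Counter
from itertools import combinations

orders = [
    {"batteries", "torch", "tent"},
    {"batteries", "torch"},
    {"tent", "sleeping-bag"},
    {"batteries", "torch", "sleeping-bag"},
]

pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

# The most frequently co-picked pair is a candidate for adjacent
# slots, shortening the pickers' average walk.
print(pair_counts.most_common(1))  # [(('batteries', 'torch'), 3)]
```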
The technology used is key here but so too is consultancy. Every retailer and supply chain business is different and has specific needs and requirements. They all need expert advice about which systems are likely to work best for them and which approach is most suitable both on the shop floor, and on the warehouse and logistics side.
Bright Future
Times are tough in the retail market today, and retailers need to look at ways to achieve an edge over the competition. The latest artificial intelligence and machine learning offer them that opportunity – and with AI and robotic process automation (RPA) completing repetitive and complex tasks, employees can spend more time serving and responding to customers, enhancing crucial customer-brand relationships.
The most forward-thinking are investing in this kind of technology to drive operational efficiencies, reduce costs and maximise sales opportunities – all of which will positively impact the bottom line.
Accurate demand forecasting is key to supporting the retail industry, keeping customers happy with products and stock and ensuring staff morale remains high. The retailers that capitalise on this the fastest and start implementing the technology are more likely to find themselves one step ahead of the game.
Artificial intelligence (AI) and the Internet of Things (IoT) are two of tech’s most popular buzzwords. Put them together, and you have a potent combination for handling the mind-boggling amounts of data flooding enterprises from all directions.
By Stephan Fabel, Director of Product Management at Canonical.
Worldwide spending on IoT is expected to reach $1.4 trillion by 2021, according to IDC, as organisations invest in IoT-enabling hardware, software, services and connectivity. IoT is seen as the future of just about everything, from smart-city advances like traffic congestion relief and intelligent street lighting, to better energy management, to industrial robotics and asset tracking, to monitoring of medical equipment and patient condition (not to mention the array of home consumer applications).
All of these devices and sensors – an oft-quoted Gartner prediction places the number of connected things at 20.4 billion by 2020 – produce nearly unimaginable volumes of data. Companies, governments and other organisations need to be able to collect, parse and analyse all that data to detect patterns that can drive business decisions.
The more data sources you have, of course, the tougher it is to derive meaningful insights from them. The only possible path to a coherent picture is automated intelligence using machines – i.e. AI.
Simple, right? Problem solved? Far from it.
When we talk about managing IoT devices at the edge, we really mean “control planes” – not the devices themselves but the things that control the things. An example of this in your home might be the Philips Hue Bridge unit that lets you direct your smart light bulbs from your phone or tablet. In a large enterprise setting, it means the cloud infrastructure elements that manage the compute, networking and storage associated with the devices or sensors. There can be thousands of them.
As IoT pieces proliferate, it becomes exceedingly difficult, inefficient and expensive to manage them remotely from a centralised cloud or data centre without a system to gather and analyse the data closer to the source. IoT devices need to be able to collect and process huge amounts of data in near real time with minimal latency. And companies can rack up huge connectivity costs if they send entire raw streams of sensor data back to the main data centre instead of only the most useful information.
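The edge-side reduction described here can be illustrated in a few lines: instead of streaming every raw reading to the core, the edge node sends a compact summary of each window plus only the anomalous raw values. The readings and alert threshold are invented for the example.

```python
# Sketch of edge-side data reduction: summarise a window of raw
# sensor readings locally and forward only the summary and any
# anomalous values. Threshold and sample data are invented.
def summarise(readings, alert_threshold=90.0):
    """Reduce a window of raw readings to a summary plus alerts."""
    alerts = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "alerts": alerts,  # only the interesting raw values travel
    }

window = [71.2, 70.8, 95.5, 69.9]   # e.g. temperature samples
payload = summarise(window)
print(payload["alerts"])  # [95.5]
```

Four raw readings collapse into one small payload; at the scale of thousands of sensors sampling continuously, this is the difference between a manageable backhaul bill and an unmanageable one.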
Thus, a fundamental priority for enterprises today must be developing AI and machine learning (ML) frameworks that can be deployed at the edge – managing and sifting through data there and then returning the most relevant data to the core, where it can be combined with other information across the organisation for global insights.
And there is no way to do this without some form of AI. The complexity is simply too high.
The tightening bond between AI and IoT is driven by a sea change in corporate computing: in the past, it was all about how well machines could compute something and, even in the recent cloud era, about how well clouds could handle elastic compute. Now, it’s all about the data: where does it make sense to store it, how should it be moved around, and what actions should be taken based on what it tells us?
All of this is causing a re-think of the architectures and tools required to make AI models easily accessible and reusable for the intelligent edge.
As enterprises extend more and more to the edge, containers are supplanting virtual machines as the go-to technology. The latter are just too heavyweight.
Kubernetes has emerged as the clear winner for container orchestration, with AI as one of its fastest-growing use cases, because it provides a great deal of operational efficiency and agility for this edge-to-core dance.
Kubeflow, an ML stack built for Kubernetes, reduces the challenges of building production-ready AI systems, such as the manual coding needed to combine components from different vendors, reliance on hand-rolled solutions, and the difficulty of moving ML models around without major re-architecture.
Acumos is a new Linux Foundation project to foster a federated platform for managing AI and ML applications and sharing AI models. With its visual workflow to design AI and ML applications, as well as a marketplace for freely sharing AI solutions and data models, Acumos holds strong promise for enabling the release of AI applications as containers.
Thanks to these AI-supporting technologies, organisations will be able to focus on data science in an increasingly IoT-dependent enterprise without hitting infrastructure walls. It’s fair to say that success at this endeavor will become a key competitive differentiator for companies across industries.
AI and IoT is a crucial combination that will shape corporate data strategies for years to come.
Availability, efficiency, growing customer communities of interest and operational excellence are the drivers for Interxion and their new DUB3 data centre.
Interxion is a leading supplier of data-centre, colocation and connectivity services to some of the world's leading businesses. It serves a wide range of customers, operating 49 data centres in 13 cities throughout Europe, three of which are located in Ireland.
Located at Grange Castle in west Dublin, its DUB3 data centre, which was opened in December 2016, is a 2,400 square metre single-storey, fully concurrently maintainable facility with various fault-tolerant infrastructure features.
Interxion has been operating in Ireland since 2001 and has continued to scale up. As client demand for its colocation services grew, it announced the new facility in February 2016. DUB3 provides direct access to a connected community, allowing clients to interconnect with other organisations to cut costs, improve the quality of their service and create value.
To ensure maximum energy efficiency, DUB3 was designed around an energy-saving, modular architecture, incorporating cooling and maximum-efficiency components. Interxion chose a greenfield site for its new data centre, which would run on 100% renewable energy. Following the completion of a rigorous process to assess environmental impact and security risks, construction started in mid-February 2016.
As a long-term partner working with Interxion on many global data centre projects, Schneider Electric contributed various components of physical infrastructure from its power, cooling and software solutions portfolio, ensuring rapid construction, delivery, and seamless integration between all critical components throughout DUB3’s design and deployment stages.
Ireland holds a prominent market position
“The Irish data centre market is unique,” explains Tanya Duncan, MD of Interxion Ireland. “We are seen as a gateway country for large international companies who need a local presence for their European operations. As such, the local market is very large for the size of the country and the service providers are very knowledgeable in the way that they build and operate their data centres.”
The key for Interxion to differentiate its service offering from competitors is operational excellence, according to Duncan. “We have to be able to guarantee service delivery to the highest standards. We have to remain flexible, with the capacity to scale up quickly as our customers' requirements expand. We cannot afford to be a constraint on their growth plans,” she says. “Dublin is the interconnection hub and we are committed to helping our customers connect to their partners, suppliers and end users. Innovative businesses need a connectivity provider who has the knowledge to help them switch and scale as their business needs evolve.”
The DUB3 facility provides premium data centre services to Interxion's clients, who range from local Irish companies to larger international corporations and cloud platform providers, across a wide range of business sectors. According to Karl Mulhall, Operations Manager of Interxion Ireland, the new facility is designed to support a total IT power load of 5MW when fully populated. This, he says, is driven by the demands of their customers.
“Cloud providers are asking for higher densities at rack level,” he said. “Our first data centre in Ireland operated at 1kW/m2, our second at 1.5kW/m2, and more recently, DUB3 now operates at 2kW/m2. This allows us to put racks rated at between 10 and 15kW throughout the raised floor area.”
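As a quick back-of-envelope check, the figures quoted in the article hang together: the 2,400 square metre floor area at the DUB3 design density of 2kW/m2 implies roughly the 5MW total IT load mentioned above. The sketch below is purely illustrative arithmetic (and assumes, for simplicity, that the whole floor area is available at that density, which a real raised-floor layout would not be).

```python
# Back-of-envelope check of the figures quoted in the article.
floor_area_m2 = 2400       # DUB3 single-storey floor area
density_kw_per_m2 = 2.0    # design power density quoted for DUB3
design_load_mw = 5.0       # total IT load when fully populated

implied_load_mw = floor_area_m2 * density_kw_per_m2 / 1000
print(implied_load_mw)     # 4.8 MW, broadly consistent with the 5 MW design load

# A 12 kW rack (mid-range of the 10-15 kW quoted) consumes about
# 6 m2 worth of that density budget:
rack_kw = 12.0
print(rack_kw / density_kw_per_m2)  # 6.0 m2 per rack at 2 kW/m2
```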
Uninterruptible, resilient power
Schneider Electric has traditionally worked with Interxion as one of its leading suppliers of critical infrastructure components, a relationship that is consistent across many of the company's operations in Europe and helps to guarantee efficient and reliable operation of its data centres.
Maintaining full service to the customer operation in the event of an outage is a vital requirement; the choice and deployment of uninterruptible power supply (UPS) systems is therefore of paramount importance.
At DUB3, Interxion are utilising Schneider Electric modular UPS systems to provide continuous power to the IT racks within the new data centre. These are 1.6MW units arranged in a hexa-load design, which was developed by Interxion's in-house engineering team and has in recent years been deployed across multiple sites.
The hexa-load design allows four modular UPS systems to offer continuous 2N power redundancy to an entire rack by sharing the load in such a way that, if any one system fails, the load is shared by the other three with each operating at 75% capacity.
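The redundancy arithmetic behind that arrangement can be worked through explicitly. This is an illustrative calculation based only on the numbers stated (four 1.6MW units, survivors capped at 75% after a single failure), not an Interxion design document.

```python
# Redundancy arithmetic for four load-sharing UPS units, as described above.
unit_capacity_kw = 1600   # rating of each modular UPS system (1.6 MW)
n_units = 4               # UPS systems sharing the load
post_failure_pct = 75     # each survivor runs at 75% capacity after a failure

# Largest load the group can protect: three survivors at 75% each.
max_protected_kw = (n_units - 1) * unit_capacity_kw * post_failure_pct // 100
print(max_protected_kw)   # 3600 kW, i.e. 3.6 MW

# In normal operation that load is spread across all four units:
normal_utilisation = max_protected_kw / (n_units * unit_capacity_kw)
print(normal_utilisation) # 0.5625, i.e. each unit runs at 56.25% capacity
```

In other words, the price of tolerating any single UPS failure is that each unit idles well below its rating in normal operation.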
“Interxion’s ability to scale quickly and the operational excellence we provide are key selling points for our customers,” says Mulhall. “DUB3 is designed with fault-tolerant infrastructure in critical areas to ensure we can support our customers' stringent service level agreements.
Our primary concerns have always been, and always will be, the needs of our customers and maintaining our reputation as a reliable colocation service partner. The loss of reputation that would follow from any serious downtime would be far worse than a financial penalty.”
The modular nature of the Schneider Electric UPS products is also an advantage, according to Mulhall. “Customers are becoming more demanding with regard to speed of deployment,” he says. “We have to roll out new capacity and have it up and running within very tight timeframes. Modular systems like the Symmetra UPS allow us to grow in step with our customers' requirements.”
Flexibility of response also influences the choice of cooling architecture at DUB3. The facility has a raised floor with a cold aisle containment configuration because it provides greater flexibility when populating the IT halls, in accordance with the changing requirements of the company’s diverse customers.
Energy efficiency is a cool focus
When the outside ambient temperature is too high for free cooling at DUB3, adiabatic coolers work in conjunction with external chillers. The cooling infrastructure, provided by Schneider Electric, includes computer room air conditioners (CRACs), containment systems and Data Centre Infrastructure Management (DCIM) software.
Cooling efficiency is a major challenge for all data centres, and each new Interxion facility in Dublin has been designed to be progressively more efficient than the last. The Power Usage Effectiveness (PUE) metric, the ratio of a facility's overall electrical energy usage to the energy utilised by its IT equipment, has decreased with each facility's design and with the evolution of the technologies available in the market.
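The PUE calculation itself is simple, and a small worked example makes the ratio concrete. The figures below are purely illustrative, not Interxion's actual consumption data.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures: a facility drawing 6,000 kWh overall while its IT
# load consumes 4,800 kWh has a PUE of 1.25. The closer to 1.0, the less
# energy is spent on cooling and other overheads relative to the IT load.
print(pue(6000, 4800))  # 1.25
```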
Security is a critical consideration
In addition to delivering detailed customer reporting on the availability of power, climate control and various other aspects of data centre visibility required by clients, the integrated DCIM solution provides an exceptionally high level of security.
One of DUB3’s key features is that its StruxureWare for Data Centers™ DCIM system is hardened against cyber-attacks and external threats, an area in which the company has worked extensively with Schneider Electric.
“DCIM software these days is inherently complex,” continued Karl Mulhall. “Over the last eight years we’ve worked closely with Schneider Electric to create a strong, secure and user-friendly system.”
100% Renewable Energy
“Renewable energy is becoming more and more important to our customers,” said Tanya Duncan, MD of Interxion Ireland. “Energy is such a big part of the operating expense that we have to ensure we’re always running as efficiently as possible, and we therefore have contracts in place with utility providers for energy from 100% renewable sources.”
“As a company, Interxion also needs to deploy the most energy-efficient components in our data centres,” she continued. “Partnering with Schneider Electric enables us to ensure we’re at the forefront of energy-efficient technology, whether that’s in our CRAC units, our UPS systems or our cooling solutions; everything that minimises power usage is of benefit to us all.”