Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
Enhancing the developer lifecycle in 2022 means simplifying it with low-code/no-code. Commentary by Eric Newcomer, CTO, WSO2
Right now, demand for highly skilled developers is increasing at a time when the developer lifecycle is simultaneously changing to include expectations that developers directly implement security into their applications. Businesses look to meet the market halfway by expending large amounts of resources to hire skilled developers, but as more developers add to the complexity of application development with direct approaches, some projects end up non-compliant with security policy, which is a big problem. Simplifying the approach with low-code/no-code solutions is the answer to the problems businesses face as they look to align productivity with demand. Low-code/no-code practices that include automated deployment of code into production and pre-built security workflows, such as SDKs and authentication checks, make business operations much easier because they free up developers to spend their time and expertise on core areas of the business. However, organizations must be careful to get it right, and should also implement additional security-check requirements on the automated code pipeline.
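The additional security checks on an automated code pipeline mentioned above can be as simple as a gate that blocks deployment when an application manifest violates policy. The sketch below is purely illustrative: the manifest shape, check names, and the `vault://` secret convention are assumptions, not any particular low-code vendor’s API.

```python
"""Hypothetical pre-deployment security gate for an automated
low-code pipeline. Manifest structure and checks are illustrative."""

def check_auth_required(manifest: dict) -> bool:
    # Every exposed endpoint must declare a known authentication scheme.
    return all(ep.get("auth") in {"oauth2", "api_key", "mtls"}
               for ep in manifest.get("endpoints", []))

def check_no_plaintext_secrets(manifest: dict) -> bool:
    # Secrets must be vault references, never inlined plaintext values.
    return all(not v.startswith("plaintext:")
               for v in manifest.get("secrets", {}).values())

def security_gate(manifest: dict) -> list:
    """Return the names of failed checks; an empty list means deployable."""
    checks = {
        "auth-required": check_auth_required,
        "no-plaintext-secrets": check_no_plaintext_secrets,
    }
    return [name for name, fn in checks.items() if not fn(manifest)]

manifest = {
    "endpoints": [{"path": "/orders", "auth": "oauth2"}],
    "secrets": {"db": "vault://prod/db"},
}
failures = security_gate(manifest)
print("deploy" if not failures else "blocked: %s" % failures)
```

In a real pipeline, a non-empty failure list would fail the build step, keeping non-compliant applications out of production automatically.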
AWS Outage. Commentary by Ev Kontsevoy, CEO of Teleport
Why do outages take so long to resolve? Teleport’s annual State of Infrastructure Access & Security Report might shed some light: folks can’t access the problem. In fact, the survey found that 61% of organizations have experienced an instance where an expert engineer has been unable to contribute to the resolution of an issue due to infrastructure access challenges.
AWS Outage. Commentary by Jared Cheney, Regional Vice President at SoftwareONE, an AWS Advanced Migration Consulting Partner
As companies continue to drive their digital transformations and modernize their applications, the impact and exposure of situations such as the ongoing AWS regional outages will continue to be felt across our personal and business lives. However, these scenarios continue to emphasize the importance of a cloud shared responsibility model with bulletproof architecture that leverages standard business continuity and disaster recovery best practices – such as multi-AZ, multi-region and multi-cloud motions. Whatever hyperscaler(s) the customer chooses, my recommendation is to maintain strategies that fulfill business needs while ensuring the correct governance processes are in place across every service-level agreement.
AWS Outage. Commentary by Dan Johnson, Director Global Compliance and Continuity at Ensono
Organizations rely heavily on cloud providers for easy application management and flexibility across their systems, so when inevitable outages occur, it can be detrimental to hundreds of thousands of users. To prepare for these outages, it’s necessary to have a disaster recovery plan in place to run alongside your systems or take over when a provider is down. To have an efficient plan, organizations must perform frequent assessments and analyses and maintain distinct prevention, preparedness, response, and recovery measures. These disaster plans can minimize the impact on business performance by moving workloads and data over to the recovery site — ultimately managing all data until systems are restored. My recommendation, which has become widely popular over the last few years, is to also implement a multi-cloud strategy, which allows disaster recovery plans to work across providers when one is experiencing issues or a complete outage. While many organizations express concern around cost and security in multi-cloud plans, the benefits of these strategies are worth it for workloads in the long run.
AWS Outage. Commentary by Jason Barr, Senior Director of Innovation at Core BTS
While implementing preventative outage tools is extremely important, they cannot prevent all incidents from occurring – a major lesson learned from the recent AWS outage. The best measure for organizations to combat extensive damage from an outage is a highly detailed disaster recovery plan. These plans are developed prior to an incident and help business leaders mitigate the fallout of an outage by offering a variety of back-up options that lessen the blow. Organizations should have a fully developed cloud failover system in place, so downtime can be minimized and significant damage avoided. Through continuous care and monitoring of IT systems and a disaster recovery plan in place, organizations can better set themselves up for a faster, more efficient outage recovery process.
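A cloud failover system like the one described above ultimately comes down to a health probe plus a routing decision. The minimal sketch below assumes hypothetical `primary.example.com`/`standby.example.com` health endpoints; a production disaster recovery plan would instead drive DNS or traffic-manager updates off provider status APIs.

```python
"""Minimal sketch of an automated failover decision. Endpoint URLs
are hypothetical placeholders, not real infrastructure."""
import urllib.request

PRIMARY = "https://primary.example.com/health"   # hypothetical
STANDBY = "https://standby.example.com/health"   # hypothetical

def healthy(url, timeout=3.0):
    """Probe a health endpoint; any network error counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def choose_target(is_healthy=healthy):
    # Route traffic to the standby site only when the primary fails.
    return PRIMARY if is_healthy(PRIMARY) else STANDBY
```

Injecting the probe function (`is_healthy`) keeps the routing logic testable without live endpoints, which is also how such a check would be exercised in DR drills.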
A “Lift and Shift” Approach to Cloud Migration Isn’t Feasible. Commentary by Matt Maccaux, Global Field CTO, Ezmeral Software, Hewlett Packard Enterprise
Organizations embarking on a cloud migration often hope to take a “lift and shift” approach to their current applications, but unfortunately, that is not feasible for most application portfolios. While the cloud promises speed, agility and cost savings, recent analysis by Gartner showed that only 15% of existing application portfolios were in good enough architectural and functional shape to be directly rehosted, while a further 22% needed to be revised to run on cloud platforms. Given that most organizations do not meet the technical criteria required to migrate to the cloud successfully, decision makers are realizing that finding the right mix of hybrid cloud and workload placement is the way of the future. The flexibility and control that customers have come to expect from any modern service in today’s hybrid environment comes from finding a balance between cloud and on-prem. Reevaluating how organizations provision their workloads is challenging, and there are several considerations companies should keep in mind when determining the right venue for each: cost, data governance, data security, and especially data gravity. However, the measurable business impact will derive from an organization’s willingness to invest in partners who can help them optimize their hybrid cloud experience. Enterprises of tomorrow will be looking at more edge-to-cloud platforms that allow their businesses to have the best of both worlds: the security and cost management of on-prem, plus the significant advantages of the cloud.
Move to A Multi-Cloud Strategy to Mitigate Devastating Outages. Commentary by David Drai, Co-Founder and CEO of Anodot
Amazon Web Services (AWS) had three outages in December 2021 alone, causing huge multi-million-dollar revenue losses during the critical holiday season, and more outages are expected to occur in 2022. These outages especially hurt retailers and other organizations that rely on a single cloud platform provider, because they cannot switch quickly between different clouds when downtime occurs, leaving them non-operational for up to 8-12 hours. In this uncertain era, organizations must adopt multi-cloud strategies combined with vendor-agnostic, AI-based business monitoring to provide more resilient, productive web experiences and mitigate the outages expected to occur later this year. Moving to a multi-cloud strategy is not an arduous process; it only takes four simple steps. First, organizations must adopt agnostic APIs and DNS protocols so they can switch seamlessly between cloud services without coding against each provider’s unique API. Next, they should leverage multi-cloud cost management services to give them visibility into how and where they are spending their cloud resources, allowing them to forecast and plan different scenarios that yield greater cost efficiencies, such as shifting resources to less expensive clouds. The third step involves the adoption of CDN services that provide full redundancy for the cloud while avoiding cloud stickiness – cloud resources offloaded into a CDN service can be served from multiple clouds or data centers. Lastly, they should invest in AI-based business monitoring to detect outages several hours before they occur, allowing them to move to another cloud without experiencing any downtime. In this era of ongoing dramatic outages, organizations that move to a multi-cloud strategy buttressed by AI-based business monitoring will prevent revenue losses, degraded customer experience and irreparable damage to brand reputation and consumer trust.
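The first step above, adopting agnostic APIs, usually means putting a thin provider-neutral interface in front of each cloud service so workloads can shift between clouds. The sketch below illustrates the idea with an object-store abstraction; the classes are stand-ins invented for this example, not real cloud SDK calls.

```python
"""Illustrative provider-agnostic storage interface for a
multi-cloud setup. Store classes are hypothetical stand-ins."""
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The one interface application code is allowed to depend on."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class InMemoryStore(ObjectStore):
    # Stand-in for any single cloud provider's object storage.
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

class FailoverStore(ObjectStore):
    """Write to every provider; read from the first one that answers."""
    def __init__(self, *stores):
        self.stores = stores
    def put(self, key, data):
        for store in self.stores:
            store.put(key, data)
    def get(self, key):
        for store in self.stores:
            try:
                return store.get(key)
            except KeyError:
                continue  # provider down or missing key: try the next
        raise KeyError(key)
```

Because application code only sees `ObjectStore`, swapping a degraded provider out of the `FailoverStore` requires no application changes, which is the point of the agnostic-API step.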
How AI/ML at the edge will power 5G and IoT in 2022. Commentary by Kaladhar Voruganti, Senior Fellow, CTO Office at Equinix
The ubiquity of 5G and IoT in technologies like autonomous vehicles, smart home and wearable devices is fueling an explosion of innovation at the edge. In the coming year, we’ll begin to see two trends emerge: the rise of AI marketplaces for data sharing and federated AI that empowers data processing directly at the edge. Organizations are increasingly looking to build AI and machine learning models to solve the complex problems that come with this growth. However, in many cases organizations need to access data from external sources like public clouds, data brokers, and IoT devices in order to improve the accuracy of their AI models. But there are challenges when it comes to data sharing. First, data providers typically resist sharing raw data because it could be used for unauthorized purposes by consumers. At the same time, data consumers are often concerned about the security, bias and quality of these data and models. To solve this, enterprises will leverage blockchain-enabled AI marketplaces to trade data and algorithms between multiple parties in a safe and privacy-preserving manner that maintains the chain of custody. AI marketplaces will provide secure enclaves at neutral locations where data and algorithms can be brought to create AI models. In this approach, the raw data being shared never leaves the neutral secure enclave. The massive increase in data generated at the edge will also demand a shift from processing data at a centralized location to processing data where it’s created at the edge. Both model training and inference will move from a central location to the edge. Federated AI will enable this next generation of AI scalability, allowing AI processing to happen in a decentralized manner where algorithms get moved to the data located at the edge rather than sending large datasets to the algorithm at a centralized location, thereby reducing cost and latency, and also providing better privacy.
Embracing 2022 digital transformations through data training programs. Commentary by Dmitri Adler, Chief Solution Architect, Data Society
As companies continue their digital transformation journeys in 2022, it’s essential to take a retrospective look and assess an organization’s triumphs and pitfalls while considering priorities for the new year. Some companies that spent the last two years pivoting to a hybrid working model by investing in platforms and technologies for their employees have struggled to yield tangible results with these new tools and are left unsure as to why. Once the tools and training platforms are put in place, measuring success and business impact becomes key. Training platforms can provide teams with the data skills and technological expertise to set them up for lasting success. Leaders must have quantifiable metrics to support these investments and measure efficacy in a way that ties training investment to actual business successes. Most executives understand the necessity of embracing technology and equipping their employees with data literacy skills to enhance their efficiency and potential. Still, endeavors will fall flat without proper instruction, oversight of these initiatives, and directly related projects that give employees the opportunity to show off their skills and managers a way to demonstrate progress. At the outset of 2022, leaders should take a step back and determine measurable business objectives and baselines that can be achieved through reskilling, producing long-term benefits, and streamlining approaches to everyday projects.
Sign up for the free insideBIGDATA newsletter.
Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1