IBM Operational Decision Manager (ODM) is a comprehensive platform that enables businesses to automate and optimize their decision-making processes. ODM leverages the power of business rules and advanced analytics to drive intelligent decision-making across various applications and systems. One of the key components of IBM ODM is the Decision Runner, which allows users to test and execute decision services. In this blog post, we will explore the concept of custom data providers in IBM ODM Decision Runner and understand how they can enhance decision-making capabilities.
Understanding IBM ODM Decision Runner
Decision Runner is a part of IBM ODM that allows users to test and execute decision services in various scenarios. It provides a user-friendly interface for creating test cases, running tests, and analyzing the results. Decision Runner can simulate real-time decision scenarios by providing inputs and evaluating the outcomes based on predefined rules and decision models.
Custom Data Providers in Decision Runner
Custom Data Providers in IBM ODM Decision Runner offer the flexibility to incorporate external data sources during testing and execution. While Decision Runner provides built-in data providers for basic data types, such as strings and numbers, custom data providers allow users to integrate complex and diverse data structures from external systems.
Benefits of Custom Data Providers
1. Integration with External Systems: Custom data providers enable seamless integration with external systems, such as databases, web services, or APIs, to fetch real-time data for decision-making. This integration ensures that decision services operate with up-to-date information, leading to more accurate and reliable results.
2. Realistic Testing Scenarios: By utilizing custom data providers, users can mimic real-world scenarios by populating decision inputs with data from live systems. This capability allows for more comprehensive testing, considering a wide range of potential inputs and their corresponding outcomes.
3. Dynamic Data Generation: Custom data providers can generate dynamic data during test execution, which is particularly useful when testing decision services that require constantly changing or time-sensitive inputs. This feature helps in validating the responsiveness and adaptability of the decision services.
4. Increased Flexibility: Custom data providers can supply data from a variety of sources, giving you more flexibility in how you build your applications.
5. Improved Performance: Custom data providers can optimize the way data is accessed, which can improve the performance of your applications.
Implementing Custom Data Providers
The implementation of custom data providers involves the following steps:
1. Define the Data Structure: Determine the structure and format of the external data source you want to integrate with Decision Runner. This could be a database schema, a web service response format, or any other data structure that fits your requirements.
2. Implement the Custom Data Provider: Create a custom data provider class in Java that implements the interfaces provided by IBM ODM. This class defines the logic for fetching data from the external system and mapping it to the decision inputs in Decision Runner (see the sketch under Implementation Design below).
3. Register the Data Provider: Once the custom data provider class is implemented, it needs to be registered with IBM ODM Decision Runner. This registration step ensures that the data provider is available for selection when configuring test cases.
4. Configure Test Cases: In Decision Runner, select the custom data provider for the desired input fields in the test case configuration. This allows Decision Runner to retrieve the data from the external system using the custom data provider during test execution.
Implementation Design
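To make the pattern concrete, here is a minimal sketch of what a JDBC-backed data provider can look like. The interface and type names used here (ScenarioDataProvider, Scenario, Customer) are illustrative placeholders rather than the actual IBM ODM DVS API, which varies by version; consult the ODM documentation for the exact interfaces to implement and for how to register the class.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins for the ODM DVS interfaces; the real interface and
// method names come from the IBM ODM API documentation for your version.
interface ScenarioDataProvider {
    List<Scenario> loadScenarios() throws Exception;
}

// A scenario pairs a test-case name with the decision-service input it should receive.
record Scenario(String name, Customer input) {}

// Hypothetical decision-service input type from the execution object model (XOM).
record Customer(String id, int age, double annualIncome) {}

/**
 * Fetches decision inputs from an external database so Decision Runner
 * tests run against live data instead of hand-entered values.
 */
public class JdbcCustomerDataProvider implements ScenarioDataProvider {

    private final String jdbcUrl; // e.g. a DB2 or other JDBC URL (placeholder)

    public JdbcCustomerDataProvider(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    @Override
    public List<Scenario> loadScenarios() throws Exception {
        List<Scenario> scenarios = new ArrayList<>();
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT ID, AGE, ANNUAL_INCOME FROM CUSTOMERS")) {
            while (rs.next()) {
                // Map each external record to a decision input object.
                Customer c = new Customer(
                        rs.getString("ID"),
                        rs.getInt("AGE"),
                        rs.getDouble("ANNUAL_INCOME"));
                scenarios.add(new Scenario("customer-" + c.id(), c));
            }
        }
        return scenarios;
    }
}
```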
Conclusion
Custom data providers in IBM ODM Decision Runner enable businesses to unlock the full potential of their decision-making processes by integrating with external systems and incorporating diverse and complex data structures. The ability to leverage real-time data during testing and execution improves the accuracy and reliability of decision services. By following the steps outlined above, users can seamlessly integrate custom data providers into Decision Runner and achieve enhanced decision-making capabilities.
Harness the power of IBM ODM Decision Runner’s custom data providers and revolutionize your decision-making processes to drive better business outcomes.
About the Author
Pallavi brings over 7.7 years of IT industry experience, specializing in BRMS (Business Rules Management System) development across domains like Finance, Insurance, Healthcare, and Telecom. As an IBM ODM (Operational Decision Manager) lead, she holds expertise in BPM (Business Process Management) and has a deep understanding of ODM installation, Java execution object models, and developing rule components such as Action Rules, Decision Tables, and Decision Trees. With proficiency in DVS testing, rule monitoring, and tracing, as well as experience in designing decision services and migrating projects, Pallavi brings valuable insights and knowledge in the field of IT and BRMS.
Discover how we helped a client overcome scalability limitations, streamline partner onboarding, and eliminate maintenance issues through our cutting-edge integration expertise.
Our Client
Our client is a prominent consumer packaged goods (CPG) company.
The Challenge – Modernization and Streamlining for Business Excellence
Our client embarked on a comprehensive three-year technology modernization roadmap to enhance process efficiency, improve business performance, and foster better collaboration. The primary goals were to modernize legacy SAP applications, integration processes, and infrastructure to ensure improved agility and real-time communications with vendors.
The existing landscape involved hundreds of vendors and relied on thousands of point-to-point interfaces using EDI (Electronic Data Interchange).
This complex network led to errors, cost inefficiencies, sluggish performance, and potential issues with invoicing, production, and shipments.
Specific challenges included scalability limitations, lack of reuse, extended partner onboarding, inflexibility, poor lifecycle management, and maintenance and support issues.
Action – Translating Vision into State-of-the-Art Architecture
The project employed an architecture-agile approach, establishing a reliable and scalable framework for future interface transformations. Prolifics relied on its expertise in implementing cutting-edge Canonical Design patterns to oversee the design, implementation, and testing phases.
Hani Alhaddad, Prolifics’ Director of Client Success, states:
“Canonical Design promotes seamless integration by establishing a standard and consistent representation of Data. It simplifies the integration process and interoperability leading to more efficient and scalable integration solutions.”
Prolifics gained a deep understanding of the client’s modernization vision and identified pain points through comprehensive landscape analysis.
A new architecture was proposed to streamline interface management and simplify the environment, leveraging reusable components, decoupled integration, and an API-first strategy.
Customization capabilities were implemented to meet specific customer requirements without the need for separate code versions.
Non-EDI interfaces, including app-to-app connections for communication between SAP and the payroll system, as well as internal application integration, will also be analyzed.
The successful translation of our client’s vision into a tangible framework positioned Prolifics as a trusted partner, leading to an expanded scope of work and an additional project for detailed design of EDI conversions and API interfaces.
Technology
What began as a project to modernize legacy SAP PI EDI-based integration quickly transformed into a landmark endeavor within our client’s project portfolio. The mission is to modernize their entire integration landscape using a component of SAP Business Technology Platform (BTP) called CPI (Cloud Platform Integration), a cloud-based integration platform that allows organizations to connect different systems and applications. It supports both cloud-to-cloud and cloud-to-on-premises integrations, making it easier for organizations to connect and synchronize data between different applications and platforms.
More about Prolifics Integration Modernization
Integrations that connect, scale and flex – in months.
As your Integration Modernization partner, Prolifics offers comprehensive coverage of the full systems integration lifecycle, empowering your organization with enhanced efficiency, agility, and innovation. Our services span implementation, modernization, and management, backed by a proven methodology and a range of reusable IP accelerators.
About Prolifics
Prolifics is a digital engineering and consulting firm helping clients navigate and accelerate their digital transformation journeys. We deliver relevant outcomes using our systematic approach to rapid, enterprise-grade continuous innovation. We treat our digital deliverables like a customized product – using agile practices to deliver immediate and ongoing increases in value.
We provide consulting, engineering and managed services for all our practice areas – Data & AI, Integration & Applications, Business Automation, DevXOps, Test Automation, and Cybersecurity – at any point our clients need them.
Our client, a state’s Department of Safety and Homeland Security (DSHS), held a vital responsibility in documenting and processing crucial aspects of public safety such as criminal activity, traffic regulations, court appearances, and the individuals responsible. To effectively carry out their role, they sought a reliable enterprise content management (ECM) system that could accurately capture and organize these records.
However, the outdated technology and expensive licensing costs of their existing system hindered their operations, with difficulties in customization, user experience, and document search capabilities.
The objective was to establish a centralized and user-friendly system, alleviating financial burdens and enhancing document management efficiency.
Action
We guided our client through a cost-effective transition to a modern solution, leveraging automation in a streamlined five-step migration process: discovery, extraction, loading, testing, and document rollout.
We conducted a comprehensive Intelligent Automation Discovery Workshop, led by Salim Hadim, our VP of Digital Automation and Cloud Solutions, to identify and understand the document population, develop a strategic roadmap for extraction, loading, and testing while preserving contextual information.
We seamlessly extracted documents without disrupting system operations, loaded the documents into the new system, conducted extensive functionality tests, and prioritized user acceptance for a smooth transition to production with the client’s desired functionalities.
Result
Our client achieved a cohesive, low-risk solution that alleviated financial burdens and enhanced operational efficiency by providing clear visibility into document storage. The implementation of our solution delivered several key benefits to our client:
Modern User-Interface (UI): Streamlined document search process with a user-friendly search application, enhancing the user experience and making it easier to find specific documents.
Improved Efficiency: Eliminated the need for copy-pasting, screen switching, and complex search mappings, leading to increased employee productivity and time savings.
Cost Savings: Eliminated annual license costs associated with the old system, resulting in a significant reduction in total cost of ownership (TCO) and improved financial efficiency and resource allocation.
Enhanced Risk Management: Eliminated the need for specialized skills to understand document storage, resulting in reduced risk of information loss, empowering better decision-making and minimizing potential challenges.
More on Intelligent Process Innovation
Hadim states, “Intelligent process innovation is all about improving efficiency in how work is done, from start to finish. Whether it’s a five-step or ten-step process, process innovation involves finding new ways to do the work or optimizing existing methods to save time, reduce costs, and enhance the overall experience. Innovation encompasses multiple dimensions of how work is approached.”
About Prolifics
Prolifics is a digital engineering and consulting firm helping clients navigate and accelerate their digital transformation journeys. We deliver relevant outcomes using our systematic approach to rapid, enterprise-grade continuous innovation. We treat our digital deliverables like a customized product – using agile practices to deliver immediate and ongoing increases in value.
We provide consulting, engineering and managed services for all our practice areas – Data & AI, Integration & Applications, Business Automation, DevXOps, Test Automation, and Cybersecurity – at any point our clients need them.
Author: Gregory Hodgkinson | Chief Technology Officer and Worldwide Head of Engineering, Prolifics
11 min. read
A new AI platform stepped out onto the stage at IBM’s technology showcase Think conference back in May: IBM watsonx, the “next-generation AI and data platform to scale and accelerate AI.” Well maybe it didn’t quite step out, it was more of a preview performance (more on that in a bit), but we got to hear some pretty compelling things about a product that IBM has been preparing for the big stage for some time — months of design, strategy, rehearsals, feedback, fine-tuning, because next month (July 2023) this unknown performer is making its world debut! So, what do we have to look forward to? Does it have an x-factor, an exceptional quality that sets it apart and contributes to its success and appeal? IBM certainly believes it does, so let’s dive in.
The preview
OK, so what did we learn about this budding new superstar from the “preview performance” at Think?
Arvind Krishna introduces it as follows [1]: “We’re excited to announce the launch of watsonx, a groundbreaking data & AI platform that offers foundation models and generative AI technology. Clients will have access to a toolset, technology, infrastructure, and consulting expertise to build their own — or fine-tune and adapt available AI models — on their data and deploy them at scale in a more trustworthy and open environment to drive business success.”
Let’s put a pin in “generative AI” and “scale in a more trustworthy and open environment” as we’ll come back to those in a moment.
Importantly, we learned that watsonx is not a solo act. It is, in fact, a trio of products, each with its own distinctive role to play in the band [2].
watsonx.ai — Without doubt, the lead singer/lead guitarist in the band, is watsonx.ai. It struts its stuff with the self-belief of knowing that AI is no longer the emerging hit, it is mainstream! This is its time to shine! IBM describes it as “a next generation enterprise studio…for AI builders to train, test, tune, and deploy both traditional machine learning and new generative AI capabilities.” It’s tooling for both creating and remixing AI models to implement your business’ AI use cases. So, expect all the AutoML capabilities that streamline and automate model training, testing and deployment, allowing your small data science team to be super productive, and maybe even allowing your data science-savvy “business people” to step over the IT boundary line and produce some models of their own. But note the “generative AI” reference again. This is not last year’s AI platform before ChatGPT took the world by storm. This is an AI platform of its time, including foundational models [3] that power generative AI, and allowing them to be remixed as part of your own AI use cases. This is significant. Training AI models requires significant amounts of data. And creating these large and seemingly all-powerful models also requires a significant amount of raw processing power – not to mention storage. This is well beyond the ROI-reach of your average or even high-end business use cases. Foundational models are pre-trained, ready to “remix.” This is why everyone is so excited about the realistic potential for generative AI to positively transform mainstream business. IBM is very smart to cover this in their platform.
watsonx.data — This is the drummer, the bass guitarist, the beat powering the performance. As we all know, data is essential for AI, and this is where that data is brought together to create your AI virtuoso compositions. IBM describes watsonx.data as “a fit-for-purpose data store built on open lakehouse architecture that is optimized for governed data and AI workloads, supported by querying, governance, and open data formats to access and share data.” OK, so lakehouses [4] are not a new thing, and some may say that IBM is late to the party here. But maybe they got their timing just right. Let’s look at two big players in this space: Snowflake (a cloud data warehouse rather than a lakehouse), and Databricks (a true lakehouse). Both have seen fairly meteoric rises in popularity due to the fundamental need for a solid data platform that forms the core of any data-driven business, bringing in raw data and turning it into data assets that are able to create and power AI models. So, the need is certainly there. But Snowflake does not claim to provide an AI platform, it just focuses on the data. Databricks can’t claim to have invented the lakehouse concept, although it has certainly massively popularized it over the last few years. Databricks also focuses on “traditional machine learning” rather than foundational models. So possibly the world is ready for a new entrant in the lakehouse space.
watsonx.governance — Less of a performer role, more of a producer/mixer role. The role of watsonx.governance is to keep the band’s performance in key, on tempo, but also in tune with the target audience. When it comes to AI, audiences need things like ethical decision making, bias and fairness, transparency and explainability, robustness and safety, oversight and control. IBM describes watsonx.governance as “an AI governance toolkit to enable trusted AI workflows.” Hmmm. Let’s let IBM expand on that: “Operationalizes governance to help mitigate the risk, time and cost associated with manual processes and provides the documentation necessary to drive transparent and explainable outcomes. Provides the mechanisms to protect customer privacy, proactively detect model bias and drift, and help organizations meet their ethics standards.” This role is about removing unwanted imperfections and giving direction, while taking care of some of the deep technical complexities. More of a behind-the-scenes Brian Eno [5] role rather than a front-and-center Prince. Less is known about this member of the band, and it’s due to be released two months after initial release, but this could really be the critical piece that elevates the whole platform. How important? Think U2 after Brian Eno versus U2 before Brian Eno [6]. Let’s keep in mind how quickly the first optimistic media blushes of ChatGPT were followed by the scary headlines of “AI taking over your job”. And then how quickly did that escalate into “AI resulting in the end of humanity”? AI governance is clearly a need that is coming mainstream at a rapid rate and will surely be an essential component of any AI platform.
IBM — a strong label pedigree
OK, let us put the band/music analogy to one side. What makes IBM a good bet for a new data & AI platform? Well, they’ve certainly had some good hits (sorry) in this space before.
Let’s rewind all the way to 2011 and remind ourselves of the story of Watson, the AI that beat the best human Jeopardy player and “showcased the power of artificial intelligence… the beginning of a technological revolution about to sweep through society” [7]. Watson may have been a false dawn for IBM, but it did showcase the value generated from the significant investments made at IBM Research. Fast forward to the new, more confident IBM under Arvind Krishna, and maybe IBM has learned its lessons on how to take its technology pearls to market in a more business-focused way.
Further back, there was a notable period of increased investment in data products by IBM in the early 2000s. During this time, IBM recognized the growing importance of data and analytics in the business world. They saw the potential for businesses to gain insights and make better decisions by leveraging data effectively, investing heavily in developing data-focused products and services. In 2005, IBM launched its “Information on Demand” initiative, aimed at helping businesses harness the power of their data and turn it into valuable insights: data management, data integration, and analytics capabilities. They further acquired companies like Cognos, SPSS, and Netezza, which enhanced their capabilities in data analytics, business intelligence, and data warehousing. These are all strong products that have made a significant impact on the market.
More recently, IBM has shown it’s willing to create rather than acquire with their Cloud Pak for Data platform [8] – a bold step in creating a born-on-the-cloud platform that tied together much of its heritage IP along with new components that are cloud-native. Significantly, it’s also portable across on-prem or any cloud with its OpenShift foundation. Also significantly, this has allowed IBM to offer SaaS/PaaS incarnations of their products, a move that is long overdue and more in step with their competitors. A quick note — I understand from IBM that watsonx is not a replacement for CP4D.
So in summary, IBM has had some successes in the data & AI space, and seems to be trending in the right direction in terms of how to package capability and take it to market in a way that suits their functional as well as non-functional needs. So, do they have their timing right?
In tune with the current hits?
What’s the mood music in the room?
Increasing scale.
Reducing cost.
ChatGPT-style human capabilities.
But avoiding redundancy/extinction-by-AI.
And on the face of it, IBM has read the top 10 current hits very well.
Scale is important when it comes to AI, and more specifically, data. We all know that training AI requires oodles of data. And this is why you need a lakehouse — bringing together all the data you need for data science. But this is typically the same sort of data you would put in a data warehouse with its more traditional BI/reporting use cases. A lakehouse means you don’t need two copies of all this data — warehouse for BI, lake for AI — just put them in a lakehouse which is good enough for meeting both needs. Running both a warehouse and a datalake can result in significant costs. Simple math tells us that replacing with a lakehouse is good for the bank balance! Also, without naming any names, organizations are realizing that their warehouse/datalake/lakehouse costs keep going up as more data moves in. One more interesting point — watsonx.data promises that it won’t require you to make a copy of the data in order to have it available on the platform. This brings the 2-in-1 cost benefits of the lakehouse with the further cost reduction of leaving certain data where it is.
There is a lot of architectural choice and flexibility packed into the IBM platform. Let’s dip our toe into one technical detail — IBM’s watsonx.data is based on Apache Iceberg lakehouse technology as opposed to the leading alternative which is Delta Lake. I’ll lift a quote: “While Delta Lake is mostly backed by Databricks, Iceberg is backed by many companies, including Netflix, Adobe, Alibaba, and many others. This means that Iceberg is becoming a standard in the industry. Wider open source commitment and adoption are huge by the industry. Many vendors are already baking Iceberg support” [9]. Time will tell whether Iceberg becomes the Betamax or the VHS of lakehouse technologies, but IBM is clearly putting themselves on a path of differentiation.
So watsonx.data is architected to scale to AI levels. Increasing scale…tick! Reducing cost…tick!
Yesterday’s machine learning excitement has been quickly eclipsed by generative AI [10].
I previously mentioned that foundational models are a smart inclusion in this platform. Models that will generate language, models that will generate code, models that will understand the world around us. The bar for AI jumped up considerably over the last 12 months, and so have expectations. What was sci-fi just last year is a minimum expectation today.
Of watsonx.ai, IBM tells us:
“An initial set of foundation models will be made available in beta tech preview to select clients. Examples of model categories include:
fm.code: Models built to automatically generate code for developers through a natural-language interface to boost developer productivity and enable the automation of many IT tasks.
fm.NLP: A collection of large language models (LLMs) for specific or industry-specific domains that utilize curated data where bias can be mitigated more easily and can be quickly customized using client data.
fm.geospatial: Models built on climate and remote sensing data to help organizations understand and plan for changes in natural disaster patterns, biodiversity, land use, and other geophysical processes that could impact their businesses.”
With watsonx.ai you get generative AI-powering foundational models included with the platform. Human capabilities in a box…tick!
The other smart inclusion is AI governance. This is probably the most topical, although at levels approaching hysteria. Once all the hysteria dies down (pun not intended), there will be a very real set of AI governance requirements — ethical decision making, bias and fairness, transparency and explainability, robustness and safety, oversight and control — which makes governance an essential part of any AI platform. IBM already has a strong pedigree of data governance products, and AI governance is a logical inclusion. So, built-in AI failsafe mechanism (aka AI governance)…tick!
With watsonx, IBM looks to have read the mood music, leant on the best of its back catalog, internalized the current greatest hits, and possibly produced a new hit record that is on-trend, and seemingly of its time.
So when will the record be on the shelves and ready to purchase? (yes, yes, I’m dating myself — make that “When will it be available for streaming/download?”)
Release week? First live appearance!
If all of this has got you excited to be in the crowd for the first live performance, the good news is you don’t have to wait! IBM has recently dropped the first two components (watsonx.ai and watsonx.data), with watsonx.governance due “later this year.”
Greg Hodgkinson is Prolifics’ Chief Technology Officer and Worldwide Head of Engineering, and an IBM Lifetime Champion. As a technology leader, he’s responsible for innovative cross-practice solutions for our customers, creating a foundation for innovation in the company, and driving improvements in the art of software development and delivery throughout Prolifics.
This large freight company provides domestic freight and import/export services in six U.S. states. They became our client based on the successful solutions and services we provided to their sister company throughout a continuing long-term relationship.
This is the fourth installment in a series showing the progressive steps of taking this client from legacy, desktop computers to a modern cloud environment. The three previous success stories are:
1) A Simple First Solution Clears the Road for this Freight Company – We moved six regionally siloed legacy databases to Azure Cloud, giving the client – for the first time – cloud back-up, central accessibility, and report running with a consistent view across the company.
2) Freight Company Makes Return Trip for Modernization – Prolifics performed a low-cost analysis over six weeks of daily meetings with the client, producing a fixed-price, cloud-based modernization plan. Our extensive report, documenting every single one of the business functions in the old application, showed the client we had a real understanding of their business and how to modernize it.
3) Freight Company Modernization Project Rolls On – While keeping their regional identities, we are consolidating the six locations into a single database in Azure using open-source code and related technologies. One focus here is on the user interface (UI) experience for employee productivity and satisfaction.
Challenge
Coming out of our client’s legacy systems are six different regional databases. Every time any employee needed to access a database, they had to enter a username and password, and could only get into one system at a time. Throughout the day employees would constantly “log in, log out, log in, log out again” to go from region to region. This made for inefficiencies, wasted time and frustrated employees.
Action
To address these access issues, we started with a user-creation section, where an admin enters employee information and authorizes employee access based upon the employee’s role and location (referred to as an “instance”). This becomes their “default instance.” An employee may then be given access to other locations, with roles that can differ by location (or no access at all to a particular location). Now, upon signing in just once, the employee is able to toggle among the locations and related roles they are authorized for – called “switch instance.”
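As a rough sketch of the model described above (the type names here are illustrative, not the client’s actual schema), the authorization data boils down to a default instance plus a map of roles per authorized instance, with “switch instance” simply validating the target against that map:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical names (Instance, Role, UserAccount) sketching the single sign-on
// and switch-instance model described above; the production design will differ.
enum Role { DISPATCHER, BILLING, ADMIN }

record Instance(String code) {}          // a regional location, e.g. "SOUTHEAST"

class UserAccount {
    private final String username;
    private final Instance defaultInstance;
    // Which roles the employee holds at each location they are authorized for.
    private final Map<Instance, Set<Role>> rolesByInstance;
    private Instance currentInstance;

    UserAccount(String username, Instance defaultInstance,
                Map<Instance, Set<Role>> rolesByInstance) {
        this.username = username;
        this.defaultInstance = defaultInstance;
        this.rolesByInstance = rolesByInstance;
        this.currentInstance = defaultInstance; // a single sign-on lands here
    }

    /** "Switch instance": move to another authorized location without re-authenticating. */
    void switchInstance(Instance target) {
        if (!rolesByInstance.containsKey(target)) {
            throw new IllegalArgumentException(
                    username + " is not authorized for " + target.code());
        }
        currentInstance = target;
    }

    Set<Role> currentRoles() {
        return rolesByInstance.get(currentInstance);
    }
}
```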
Admin user creation
User’s / employee’s switch instance
Result
The freight company is very happy with the solution – an employee will log in once and switch among the locations as authorized. Employees will save time and be more productive.
About Prolifics
At Prolifics, the work we do with our clients matters. Whether it’s literally keeping the lights on for thousands of families, improving access to medical care, helping prevent worldwide fraud or protecting the integrity and speed of supply chains, innovation and automation are significant parts of our culture. While our competitors are throwing more bodies at a project, we are applying automation to manage costs, reduce errors and deliver your results faster.
Let’s accelerate your transformation journeys throughout the digital environment – Data & AI, Integration & Applications, Business Automation, DevXOps, Test Automation, and Cybersecurity. We treat our digital deliverables like a customized product – using agile practices to deliver immediate and ongoing increases in value. Visit prolifics.com to learn more.
Innovation is not solely reserved for grand ideas; small ideas can often be the seeds of transformative solutions. In this blog, we will embark on a journey, Gartner-style, to explore how small ideas can be nurtured into impactful solutions. By incorporating elements such as idea management, proof of concept, technology selection, vendor lookup, implementation, and evaluation, organizations can navigate the path to success. Let’s dive into the Gartner-inspired process of turning small ideas into remarkable solutions.
Simple Framework
Idea Management: The first step on this journey is to establish a robust idea management system. Encourage employees from all levels and departments to contribute their small ideas. Implement a centralized platform, like an idea management software or intranet portal, to capture and evaluate these ideas effectively. Foster a culture that values and rewards innovation, embracing Gartner’s principles of inclusivity and open communication. By doing so, you can harness the collective intelligence of your workforce and unlock the potential of small ideas.
Proof of Concept or Technology: To validate the feasibility and potential of small ideas, consider developing a proof of concept (POC) or leveraging existing technologies. A POC involves building a prototype or conducting small-scale experiments to test the viability of the idea. Embrace Gartner’s recommended practices, such as setting clear objectives, defining a controlled scope, and establishing meaningful metrics for evaluation. The POC serves as a crucial steppingstone, helping you identify technical challenges, assess scalability, and gather feedback for further improvements.
Technology Selection: While small ideas may not require groundbreaking technologies, the right technology can amplify their impact. During the technology selection phase, consider Gartner’s research-based insights and recommendations. Evaluate technologies that align with your small idea and organizational goals. Assess factors such as scalability, integration capabilities, security, and long-term viability. Leverage Gartner’s Magic Quadrant reports or vendor evaluations to gain deeper insights and make informed decisions.
Vendor Lookup: In some cases, implementing small ideas may require external expertise or technologies not readily available within your organization. Conduct thorough research and evaluation of vendors who offer relevant solutions. Consider factors such as vendor reputation, expertise, cost-effectiveness, and compatibility with your specific requirements. Gartner’s market research and vendor analysis can serve as invaluable resources during this process, helping you identify potential partners who align with your goals.
Implementation and Evaluation: With the technology and vendor selected, it’s time to implement your small idea and turn it into a tangible solution. Assign dedicated resources to oversee the implementation process, ensuring a seamless integration and addressing any technical challenges that may arise. Once implemented, closely monitor the solution’s performance against predefined metrics and objectives. Gartner’s approach of continuous evaluation and feedback loops can aid in assessing the effectiveness and impact of the solution, facilitating further improvements and optimizations.
Summary
This blog can guide organizations on a transformative journey, turning small ideas into remarkable solutions. By fostering a culture of innovation through idea management, conducting proof of concepts, or leveraging existing technologies, carefully selecting appropriate technologies and vendors, and implementing and evaluating solutions, organizations can unlock the full potential of their small ideas. Remember, innovation is an ongoing process, so continuously iterate, learn, and refine your solutions. If you need assistance in navigating this journey and unlocking the potential of your small ideas, our team is here to help. Contact us today to embark on your Gartner-style innovation journey and transform your small ideas into remarkable solutions.
Contact our Prolifics team at Solutions@Prolifics.com to learn more about our innovation services.
“Remember, greatness can be achieved from even the smallest of ideas. Embrace the journey and let your innovation shine.”
Soundar Mannathan
About the Author: Soundar Mannathan is a senior blockchain architect with Prolifics, where he designs and develops solutions using open source technologies, enterprise products and cloud-based solutions. He has more than 16 years of experience in Cloud technologies, Blockchain/NFT, AI/ML, Java/J2EE, Spring/Spring boot, PL/SQL, Oracle, DB2, Mongo, Angular, React, Node and NPM. He holds an MBA from the University of Dayton and BA in Electronics and Communications Engineering from Anna University.
A secure, robust, and effective identity and access management system is one of the most important investments an organization can make. A common question often asked is, “Should these systems be deployed on-premises, or in the cloud?” In this article, we will explore the pros and cons of each option.
Introduction
Identity and access management (IAM) is a cybersecurity discipline involving a set of policies, business processes, and technologies for managing identities. It ensures that the right users and devices have appropriate access to necessary resources. It is a crucial component of any organization’s business and operational strategy.
Depending on your specific needs, an IAM solution can be deployed either on-premises, or in the cloud. Each of these scenarios comes with certain advantages or tradeoffs that need to be considered. “On-premises” systems are those hosted within the organization’s own infrastructure, whereas a cloud deployment means that the resources (i.e., software, servers, data, etc.) are accessed as services delivered over the internet. Cloud IAM would be provided by third-party vendors, typically in a subscription model, with offerings often referred to as Identity-as-a-service (IDaaS).
A business may be faced with this choice of on-premises vs the cloud whether they’re starting without any defined identity and access management, or have a rudimentary or incomplete system that needs to be expanded, or they may even be a mature large enterprise with a well-established legacy setup that requires modernization. In all cases, their existing makeup is a significant factor when formulating requirements for an IAM investment.
If keeping or developing an on-premises identity and access management system, an organization will own and be responsible for all aspects of the solution, from design to deployment, maintenance, and improvements, hosted and usually accessed within their network, in their own datacenters. Pursuing a cloud-hosted solution on the other hand, means entrusting these functions entirely to a third party as a paying consumer. So, which is ideal: the power and privacy associated with on-premises systems, or the agile accessibility of the cloud?
We’ll explore this question by looking at what I’ll call the “Core Central Concerns Considered by Companies Choosing Cloud Cybersecurity” or simply, “the 8 Cs” (then no more alliteration after this). We’ll take the concerns in pairs, starting with control.
Control & Constraints
One of the main advantages of keeping your IAM on-premises is the complete control it gives you over your security. You have full authority, visibility, and flexibility over your infrastructure, software, processes, and data. Though you may purchase third-party products and support for your on-premises system rather than developing it all on your own, you are free to choose the software, hardware, vendors, and licensing best suited to your needs, and you can opt for configurations that might not be allowed by a cloud provider. As a cloud consumer, by contrast, you are restricted to whatever permissions the provider offers and limited by constraints inherent to shared hosted solutions.
Customization vs Consistency
This of course severely limits your ability to customize the offering and its functionality as you see fit. Because a cloud provider offers the same product to multiple customers, who even share the same computing resources, it isn’t very feasible to offer a significantly different experience for each customer. Their ability to scale the solution for mass use requires a fair amount of uniformity and consistency in the product. This does mean though that by using these services you can be assured that your IAM solution is consistent with standards in use by others.
Cloud IAM companies often do offer methods of customization for your tenant such as in branding, though this will be relatively limited. You will also have the option to select from a menu of packages and features per your budget and requirements, so it is not purely one size fits all. However, the nature of shared hosted software-as-a-service requires a somewhat standardized offering for all consumers. In contrast, with control and ownership of your own on-premises systems, you have the ability to fine tune all aspects to your company’s specific requirements, whether to fit your unique business processes and workflows or to integrate your IAM with existing custom applications, systems, data sources, or data formats.
Compliance & Confidentiality
Compliance with varied standards, policies, and regulations is a critical and often non-negotiable requirement for any business. This is especially true in the case of IAM, where many dominant standards place heavy importance on topics such as data access control, privacy, and protecting identities. Examples include SOX and GLBA for financial institutions, PCI DSS for payments, HIPAA for healthcare, and FERPA for education. They all share goals for reducing risk by regulating access to records, especially of identifying information.
One good thing about using a vendor who specializes in identity and access management is that it’s a key part of their business to stay abreast of many of these regulations and to make sure that their products are compliant. As new standards arrive, they are able to update their product, which in the case of the cloud would then be effective for all customers. At the same time, an IAM vendor can only make technical features available. It is still the responsibility of the customer, i.e., the actual financial, healthcare or educational institution, to enact and execute the appropriate businesses policies, processes and practices to remain compliant with these regulations. In some cases, adherence to a particular standard may not even be possible under the constraints of a cloud model, and the organization may require a customized on-premises solution.
With data access restrictions being a core component of most standards, there are also some strict cases where storing data on third-party servers is just not an option and it must be kept on-premises. Even when not trying to comply with a specific external standard, for many companies it may be of utmost importance that their sensitive data resides within and is accessible only from their own network, and no third party has the possibility of gaining access. On-premises IAM guarantees that data is confidential to the organization and its employees.
There is also the matter of data residency, which refers to where data can geographically be stored. Data is subject to the laws of the country or state in which it either originates or is stored, and companies often face restrictions on data residency in order to do business in a given location. When using a cloud provider, you have far less control over where your data is stored and usually no way to verify its physical location. This is an important consideration if you need to follow rules related to where your data is or comes from.
That said, companies doing business in multiple locales will find themselves having to stay compliant with many different regulations, and it can be hard to keep track of them all. For instance, data privacy laws like CCPA (California) and GDPR (Europe) include rules and protections that follow the people they cover, applying to businesses well beyond those home jurisdictions. A good cloud solution can assist by offering tools that cover scenarios for users from every location.
Competency & Competition
For most companies, IAM is not central to their business; it is not a direct source of revenue. Vendors for whom IAM is the core business, however, are incentivized to keep up with all the standards, rules, and innovations that arise within the field. They can make large investments into research and development of these features and expect them to pay dividends. With a core competency in IAM, they can even attract and pool top talent to achieve best-in-class implementations. In contrast, other companies may find it more difficult to dedicate comparable resources toward a fully realized and robust IAM system. Even when deploying packaged software on-premises from established IAM vendors, it often requires a skillset not found in-house, and external services are needed. When using the cloud, experts in the chosen IAM product are automatically the ones handling the deployment for you.
A core competency refers to those skills, resources and capabilities that make up your defining strength, giving a competitive advantage and allowing you to stand out from others. To remain competitive and attract or retain customers, cloud vendors are motivated to remain on the cutting edge and constantly improve their offering. Trends show that they’ve found it easier and faster to roll out these improvements in cloud-hosted applications rather than traditional packaged on-premises software, which is becoming more and more seen as “legacy” compared to the newer fast-moving entrants.
That said, it is very possible to find that deploying your own custom IAM solution, with a private implementation over which you have full control, can provide a competitive advantage for your own company. A third-party offering that’s the same for all customers may not be superior to something crafted in-house, specific to your own business and workflows. Identity and access management is so key to so many business activities that creating features or processes that uniquely complement your organization can result in a standout defining strength.
Complexity & Convenience
Proper identity and access management is not a trivial undertaking. Achieving an effective setup in something as crucial as cybersecurity, while covering everything needed to address compliance issues, can be incredibly complex. This is why the use of a cloud vendor that is dedicated to IAM as their core business can be very attractive, allowing your organization to be a simple consumer of these services. This can be very convenient for companies with standard uncomplicated IAM needs. As a matter of fact, many of the capabilities required to respond to modern security trends are highly complicated or cumbersome for an individual organization to maintain, such as multifactor authentication (MFA) requiring SMS or push notification systems, or biometrics and FIDO (Fast Identity Online) for passwordless authentication. It is usually far easier to let a cloud vendor provide these.
In other cases though, a customer may have specific requirements for their organization that due to the limited customization of the cloud, may prove even more complex to integrate with their existing applications or systems. Deploying an on-premises solution could be the simpler, more convenient option in such instances.
Another vastly common scenario is that most companies have some existing traditional or legacy IAM system that they are considering moving to the cloud. Depending on how expansive these systems are, or how much automation is in use, or how embedded it is with on-premises applications and workflows, it can be overwhelmingly complex to migrate these to the cloud or to even adapt a cloud solution to integrate with your various on-premises applications. In such situations, it may be more convenient to upgrade these systems on-premises.
Costs & Capital
Arguably the foremost concern for most organizations is the question of what upfront capital and ongoing costs are required to deploy and maintain an effective IAM solution. An on-premises solution built or deployed from scratch will require at least a much larger immediate cost, and then will require on-going resources, skilled IT talent, and possible licensing fees to maintain. As discussed, the inherent complexity of robust IAM and the on-going vigilance required to maintain compliance with various regulations only amplifies the amount needed to do it right.
Cloud solutions, rather than a huge initial investment, will charge ongoing monthly or annual subscription fees. This provides an advantage by allowing a certain predictability in budgeting or accounting. At the same time, customer organizations have no control over those fees (past the duration of a contract) and are subject to any future increases at the whim of the provider. With vulnerability to vendor lock-in due to the inconvenience of switching cloud providers, this can be a concern. For some, it is even possible for the total cost of ownership (TCO) of the cloud (with its perpetual subscription fees) to end up higher than an upfront cost of on-premises if spread over the system’s entire lifecycle.
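To make that concrete with purely illustrative numbers (not drawn from any real engagement): a cloud subscription of $60,000 per year comes to $600,000 over a ten-year lifecycle, while an on-premises build costing $250,000 upfront plus $30,000 per year in maintenance and licensing comes to $550,000 over the same period. Spread over a long enough lifecycle, the apparently cheaper option can flip.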
Even so, it should be simpler to quantify or estimate funding for IAM when subscribing to a cloud service, when compared to calculating the costs to hire talent, develop applications and fulfill various on-premises software licensing models. Overall, for most typical organizations it is normal for cloud IAM to cost less than maintaining your own on-premises solution. This is especially true since IAM-focused cloud vendors are able to defray or spread the costs of their investments over their entire customer base.
Connectivity & Collaboration
A large benefit of deploying your IAM systems on-premises within your network is the ability to be sure all devices can communicate with minimal lag or latency, and to limit external bandwidth costs. Keeping your data within the network perimeter, instead of accessing it over the internet, also enhances security by reducing your attack surface. However, there are new realities within the modern workplace that make a strict network perimeter far less feasible than it once was. For one, the modern workforce is distributed; remote work is commonplace, and not only are employees spread geographically, but they move from place to place and expect to be able to continue working at all times. This trend also includes increasing use of personal BYOD (bring your own device) and mobile devices (vs. PCs and work-issued devices) that may not be connected to your network at all.
Additionally, the applications used by the average modern employee are more and more comprised of cloud-hosted SaaS and other publicly available applications, no longer limited to those developed in-house and only accessible on the network. Everything from office software to project management, ticketing systems, ERP and a myriad of other software examples may already be in use as cloud-delivered services at your organization. When so many applications are in the cloud, even an on-premises IAM solution has to contend with integrating all of these tools in a secure manner with on-premises enterprise systems. Cloud-hosted IAM is usually a very effective way of integrating these varied SaaS applications, due to the focus on industry federation standards for interoperation such as SAML and OIDC, and often with pre-built templates and integrations for known applications.
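As a small illustration of why those federation standards matter: any OIDC-compliant provider, cloud or on-premises, publishes its endpoints at a standard discovery URL, which is what lets applications and pre-built integrations wire themselves up with minimal custom work. This is a minimal sketch; the issuer URL below is a placeholder, not a real tenant.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Fetches the OIDC discovery document, which lists the authorization, token,
 * and JWKS endpoints that applications use to federate with a provider.
 */
public class OidcDiscovery {
    public static void main(String[] args) throws Exception {
        String issuer = "https://idp.example.com"; // hypothetical identity provider
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(issuer + "/.well-known/openid-configuration")).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON body describes the provider's endpoints and supported capabilities.
        System.out.println(response.body());
    }
}
```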
Most importantly, they simplify access for a remote workforce, meeting them wherever they are, at any location on any device, while still implementing your organization’s rules and policies for access control. They can even be a solution to lag and latency, not only because they may have multiple data centers in different regions, putting them close to where your users are, but also because they often have content delivery networks (CDNs) for serving cached assets from the edge.
This all doesn’t just apply to your workforce; identity and access management is also important for your customers, which amplifies the stated challenges around geographic distribution, personal devices, and public application integration. Connecting all these external customers, employees and partner applications with your own on-premises systems requires additional investment and complexity around network infrastructure such as load balancers and switches. Relying on a cloud IAM system can make this aspect much easier.
Confidence & Contingencies
Due to the criticality of cybersecurity, another frequent concern is what are the worst-case scenarios that can occur, and what are the contingencies that can be included in your IAM solution to deal with these? For example: can users still access applications in the case of a network interruption from your ISP? When servers go down or are lost, is data lost with them or is there disaster recovery available? How devastating is a security breach by a bad actor?
Dependence on the cloud means that when something goes wrong, you lose functionality with no direct means to address it, which can be quite frustrating as you’re forced to open a support ticket and wait for a resolution. You do not have the ability to design your own disaster recovery (DR) and will have limited visibility into what’s even gone wrong.
On the other hand, for some companies it is a relief to entrust issues like reliability and DR to a cloud provider, opting for the peace of mind of not being responsible for these. For one, many cloud systems are distributed geographically, with failover and disaster recovery built in. They often guarantee a high percentage of uptime, more than the organization might be able to deliver themselves. However, it’s never 100 percent, and it is not uncommon to hear of public clouds going down and taking businesses down with them for extended periods. So bear this in mind as you weigh your comfort with and confidence in each option.
As for breaches, this is the largest concern of any IAM team that wants to stay out of the news. An on-premises solution reduces your attack surface: data stays behind your firewalls, and you avoid the risk that a shared cloud product is breached at a level unrelated to your organization, possibly by persons with superuser access who never come into contact with any of your employees. For example, a highly publicized data breach of a cloud IAM provider in 2022 was attributed to the compromise of a third-party support engineer working for a subcontractor. With a well-designed cloud integration though, a breach in the IAM product shouldn’t grant access to the company’s most sensitive systems, especially if privileged vaults and secrets are kept on-premises.
What is important for an organization in the aftermath of such crises is full visibility into logs, records, and systems which would allow an administrator to perform the necessary audits or troubleshooting required to find and rectify the issue, enabled by comprehensive monitoring. This kind of access may only be possible on-premises and can be quite frustrating when dependent on a cloud provider. That said, cloud vendors often do offer impressive reporting capabilities, even if limited to the activities that happen inside their product.
Comparison
| On-Premises | Cloud |
| --- | --- |
| Complete control over configuration | Constrained to limited permissions |
| Customize uniquely as needed | Consistent for all customers |
| Confidentiality, privacy, residency of data | Compliance with changing regional standards |
| Can add competitive edge in some cases | Core competency, central to their business |
| Complex, but maybe less than migrating | Conveniently hidden complexity |
| More upfront capital, but choice/control | Consistent predictable subscription costs |
| Communication over a closed network; harder to enable external collaboration | Connecting workers, consumers regardless of device or location |
| Confidently prepare for, investigate and respond to crises | Contingencies built in for resistance to failure |
Conclusion
I hope this has shed some light on the tradeoffs involved in the consideration of cloud vs on-premises identity and access management. As you have probably been able to tell, there is no one true answer to which is the better choice for an organization, as it completely depends on individual requirements and needs as well as the company’s resources, existing applications, and more. However, trends do indicate that for the average business, the cloud affords a robust and effective IAM solution for a more predictable and possibly lower cost than trying to implement a full IAM solution on-premises.
That said, our recommendation for most companies would actually be to go with a hybrid solution. Combining on-premises and cloud components offers the maximum flexibility and diversity of features, while allowing you to balance what concerns are most important for your own compliance requirements and access needs. For the typical organization that would have a legacy on-premises solution in place and is looking to modernize, hybrid IAM offers a phased approach where individual components can be moved to the cloud when and as appropriate. For example, an on-premises user directory can be used as an identity source for a cloud hosted authorization server and catalog of federated applications.
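As a minimal sketch of that hybrid pattern (the host, base DN, and credentials below are placeholders, and a real deployment would use your vendor’s provisioning or federation tooling rather than raw JNDI), an integration job can read identities from the on-premises directory that the cloud-hosted authorization server then treats as authoritative:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

/** Looks a user up in an on-premises LDAP directory used as the identity source. */
public class OnPremDirectoryLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.corp.example.com:389"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=svc-iam,ou=service,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, System.getenv("LDAP_BIND_PASSWORD"));

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Find the employee record the cloud IAM tenant will treat as authoritative.
        NamingEnumeration<SearchResult> results = ctx.search(
                "ou=people,dc=example,dc=com", "(uid=jdoe)", controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}
```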
In my role as a Security Engineer at Prolifics, I’ve been able to support organizations with cloud, on-premises and hybrid deployments, as well as migrations in-between them all. If you have any questions on the topic or would like to share your own experience, we’d be glad to hear from you.
Craig is a senior Security/IAM engineer and consultant with more than 15 years of experience in IT, including backgrounds in programming, DevOps, and system administration, and over seven years of extensive experience in identity and access management. He has worked with industry-leading IAM solutions such as Okta, RSA, and IBM Security Verify, as well as middleware such as IBM WebSphere and MobileFirst. He is also experienced in scripting and application development with a wide variety of languages and tools.
For most of us today, the metaverse means people wearing virtual reality (VR) headsets, representing themselves as an icon or animated figure (an avatar) in an online game. Others may think of augmented reality (AR), like generated overlays of information appearing on whatever you’re looking at through your smartphone. Others may know about cryptocurrencies like Bitcoin, or non-fungible tokens (NFTs), which are digital assets with special identification codes and rights that make each one unique in the otherwise easy-to-replicate digital world. What all of the above have in common, and what really is the basis of the metaverse, is the concept of immersing yourself from the real world into a digital one, where you interact and share experiences with others in real time. Like the early days of the internet, the potential of the metaverse appears unknown but seems unlimited.
How businesses are using the metaverse today
The metaverse is moving from early adopters to the early-majority, mainstream stage. Businesses today are using it to enhance customer engagement and extend market reach, while also improving efficiency and scalability. Examples happening right now include:
Setting up virtual storefronts to sell virtual and real-world merchandise
Placing virtual ads, sponsoring virtual events or creating digitally branded experiences to promote products and services
Hosting virtual events and charging virtual ticket fees or selling virtual merchandise
Partnering with game developers to create branded content and promote their products
Purchasing virtual land in a popular metaverse platform and charging rent or selling it for a profit
As we said in a prior blog post, “Although the metaverse is in its infancy, its potential is undeniable. Radio, television, and the internet all came before with their world-changing effects. The metaverse is next in that line of technological advancements that savvy business organizations will adopt for their own specific goals.”
What’s Next for Your Business? Step into the Future with Metaverse as a Service (MaaS)
With Prolifics and our MaaS, you don’t have to do it alone. We handle the technical complexities and provide the necessary tools for you to explore the virtual world. Enhance customer engagement, extend market reach, improve efficiency, and scale your business with our expertise in IoT, digital twin, virtual, and augmented reality.
Post Author – Mike Hester, Senior Data Architect, Prolifics
Businesses are expanding and collecting more data, and that data is rapidly becoming more complex. The data being collected is commonly used for marketing purposes, to improve the customer experience, and ultimately to drive business decisions. It comes from many disparate sources and needs to be stored in a consistent manner that can be used by everyone within the organization.
“According to the results of a survey on customer experience (CX) among businesses conducted in the United States in 2021, the main challenge affecting data analysis capability for CX is the lack of reliability and integrity of available data. Data security followed, being chosen by almost 46 percent of the respondents.”
To address these challenges, many companies are turning to data repositories. A data repository, also known as a data library or data archive, is an isolated entity used for the long-term storage of data for analytic and reporting purposes. A data repository is typically large and generally made up of many databases.
Some examples of data repositories –
Data warehouses store large amounts of aggregated data, which is not always necessarily related.
Data lakes are large repositories that generally store unstructured raw data, classified and tagged with metadata. Raw data here means data that has not yet been filtered or structured and does not have a predetermined use case (see the sketch after this list).
Data marts are a subset of a data repository and are more targeted for a particular type of business need or user. Data marts are more secure since the users can only access what they need and not the entire data repository.
Metadata repositories store data about data and databases. Metadata can generally explain the lineage of the data, where it was sourced and any additional information that may be important.
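To make the data lake and metadata examples above a little more concrete, here is a minimal sketch of landing raw records in a lake folder and recording lineage metadata next to them. The directory layout, field names, and source system name are illustrative assumptions, not the layout of any specific product.

```python
# Minimal data-lake sketch: land raw, unfiltered records as-is and record
# lineage metadata ("data about data") alongside them. Paths and field
# names are illustrative placeholders only.
import json
from datetime import datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("datalake/raw/crm_events")       # raw zone of the lake
META_ROOT = Path("datalake/metadata/crm_events")  # metadata repository location

def land_raw_batch(records: list, source_system: str) -> None:
    LAKE_ROOT.mkdir(parents=True, exist_ok=True)
    META_ROOT.mkdir(parents=True, exist_ok=True)

    batch_id = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    data_file = LAKE_ROOT / f"{batch_id}.json"
    data_file.write_text(json.dumps(records))     # store the data unmodified

    # Lineage metadata: where the data came from, when, and how much of it.
    metadata = {
        "batch_id": batch_id,
        "source_system": source_system,
        "ingested_at_utc": batch_id,
        "record_count": len(records),
        "schema_applied": None,  # raw zone: no predetermined structure yet
    }
    (META_ROOT / f"{batch_id}.meta.json").write_text(json.dumps(metadata, indent=2))

land_raw_batch([{"event": "page_view", "user": "u123"}], source_system="web_clickstream")
```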
Benefits of a data repository
Storing large volumes of data in an isolated manner allows the business to make informed, data-driven decisions. Although data repositories require large investments of money, resources, and time, they offer many advantages:
Storing multiple data sources in a single place makes it easier to manage, analyze and report
Isolation allows for faster and less complex reporting or analysis since the data is clustered
Workload for administrators is reduced due to isolation and compartmentalization of the data
Data is preserved and archived
Disadvantages of a data repository
There are also several vulnerabilities in data repositories that corporations must manage effectively to mitigate potential risks, including:
Growing databases and data sets may slow down corporate systems. Ensuring that database systems can scale with data growth is mandatory.
When systems are isolated, a system crash can affect all of the data. This can be mitigated by a solid backup strategy and by limiting and isolating access.
In some cases, unauthorized users may be able to access all or large volumes of sensitive data more easily than if it was distributed across several locations.
Data repository vs a data warehouse
A data repository consolidates data sets from various sources and isolates them in order to make them easier to access and mine for business insights, reporting needs, or machine learning. It is a general term, whereas a data warehouse is a specific subtype of a data repository designed for collecting and storing structured data from multiple source systems across an enterprise.
A data warehouse is best suited for providing a broad, historical view of large data sets integrated from multiple sources to drive strategic decisions that affect the entire enterprise. Other types of data repositories are better suited for handling unstructured or complex data formats, analyzing data for different subsets of business operations, and other use cases.
Data repository best practices
Selecting the right extract, transform, load (ETL) tools or applications to load the data repository is key to ensuring data quality throughout the data lifecycle.
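As a simplified illustration of the kind of quality enforcement an ETL step can apply (the field names and rules below are made up for the example, not a recommendation of any specific tool), a transform stage might gate records before they reach the repository:

```python
# Simple data-quality gate for a transform step: reject records that are
# missing required fields or carry malformed values. Field names and rules
# are illustrative; a real ETL tool would externalize them.
from datetime import datetime

REQUIRED_FIELDS = ("customer_id", "order_date", "amount")

def validate(record: dict) -> list:
    """Return a list of quality problems found in one record."""
    problems = [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    if record.get("amount") is not None:
        try:
            if float(record["amount"]) < 0:
                problems.append("negative amount")
        except (TypeError, ValueError):
            problems.append("amount is not numeric")
    if record.get("order_date"):
        try:
            datetime.strptime(record["order_date"], "%Y-%m-%d")
        except ValueError:
            problems.append("order_date is not YYYY-MM-DD")
    return problems

def transform(batch: list) -> tuple:
    """Split a batch into clean rows (to load) and rejects (to quarantine)."""
    clean, rejects = [], []
    for record in batch:
        issues = validate(record)
        if issues:
            rejects.append({**record, "quality_issues": issues})
        else:
            clean.append(record)
    return clean, rejects
```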
Initially, it is best to limit the scope and breadth of a data repository: storing and maintaining smaller data sets and limiting the number of subject areas helps maintain quality. Over time, growth will happen and more complexity and subject areas can be added.
Automating the loading of a data repository should be a priority. Manually running processes to load and maintain a repository becomes too difficult as the system grows in both volume and complexity. Automated processes help with the management of schedules for things like source file receipt, ensuring proper data hierarchy (i.e., parent-child relationships), and process recovery.
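As a rough sketch of what such automation might look like (the table names, file pattern, and database choice are assumptions, not a reference design), the load below respects parent-child order and archives each source file once processed, so a failed run can be recovered simply by re-running it:

```python
# Automated repository load sketch: enforce parent-child load order, process
# each arriving source file once, and move processed files aside so a rerun
# only picks up what has not yet loaded. Assumes the target tables exist.
import csv
import shutil
import sqlite3
from pathlib import Path

INBOX = Path("inbox")                  # where source files arrive
ARCHIVE = Path("archive")              # processed files move here (simple recovery)
LOAD_ORDER = ["customers", "orders"]   # parents before children

def load_file(conn: sqlite3.Connection, table: str, path: Path) -> None:
    """Load one CSV file into its target table."""
    with path.open(newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    placeholders = ",".join("?" for _ in header)
    conn.executemany(
        f"INSERT INTO {table} ({','.join(header)}) VALUES ({placeholders})", data
    )
    conn.commit()

def run_load(conn: sqlite3.Connection) -> None:
    """Process all pending files in hierarchy order, archiving each on success."""
    ARCHIVE.mkdir(exist_ok=True)
    for table in LOAD_ORDER:                           # enforce data hierarchy
        for path in sorted(INBOX.glob(f"{table}_*.csv")):
            load_file(conn, table, path)
            shutil.move(str(path), ARCHIVE / path.name)  # mark as processed

# Example usage: conn = sqlite3.connect("repository.db"); run_load(conn)
```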
Prioritize flexibility. A data repository should scale as new sources, targets, and types of data, such as unstructured data, are introduced. A design that allows for growth without rework should be the goal when architecting a data repository.
If you’d like more information on data repositories or would like to discuss your data needs, click here.
About the Author:
Mike Hester is a Senior Data Architect at Prolifics with 36 years of experience in information technology, specializing in DSS/Data Warehousing. He has worked in various roles such as project manager, system analyst, technical architect, and developer, delivering information management solutions in industries like government, engineering, and ERP. Mike is familiar with operating systems like Mainframe, UNIX, VMS, and Windows, and has worked with databases including Teradata, Oracle, SQL Server, DB2 UDB, DB2 DPF, and Netezza.