
Building LLM Platforms for Your Organisation – Step 2 Platforming.


January 29, 2024
Abiodun Ekundayo

Resuming from my initial article on the 31st of October, which stated that a clear understanding of the evaluation pipeline simplifies the subsequent stages of deploying LLMs to production.

I had planned to cover the remaining steps, but Quantum Black’s comprehensive article has since addressed these topics fairly effectively (if you haven’t read it, I recommend giving it a look). Therefore, I’ll focus on designing generative AI solutions: translating the Quantum Black reference architecture into AWS services and explaining the architectural choices. I welcome discussion to refine this design.

The key takeaways from the article, for me, were the following points:

  1. Service-Based LLMs
  2. LLM-Agnosticism
  3. Separated Unified Reporting Layer

With my background in deploying various solutions across enterprise environments, certain unique enterprise considerations have become second nature.

In this article, I will highlight several of these that are typically essential for any enterprise-level deployment. These aspects are crucial for teams aiming to integrate Knowledge Assistants (KAs) into most enterprise settings.

To demonstrate this, let’s envision a hypothetical situation involving a fictitious company named CryoDyne, a global pharmaceutical powerhouse taking a bold step to incorporate AI, including but not limited to Large Language Models (LLMs), into its enterprise strategy while still bound by various compliance rules, including GDPR and HIPAA.

This scenario, likely to be echoed globally this year in compliance-heavy environments (finance, health, public sector), places specific immutable requirements on all solutions and services developed for CryoDyne, including:

  1. CryoDyne operates a centralised or at least partly centralised API gateway, which serves various internal teams, each governed by unique Role-Based Access Control (RBAC).
  2. The integration of the knowledge assistant into this framework is a key initiative for the company, aimed at providing department-specific KAs, each adept in their particular domain language.
  3. CryoDyne upholds strict compliance standards, leading to processes such as separation of duties. This means the company’s reporting structures are intricately segmented, often necessitating coordination with a specialised data team or manoeuvring through several firewalls and accounts for data exchange.
  4. As a pharmaceutical company, CryoDyne must adhere to stringent regulations such as HIPAA and GDPR in all software implementations. As an AWS client, the company already employs a robust enterprise cloud deployment strategy, incorporating services like AWS Config, AWS Control Tower, AWS Organizations, and Service Catalog. This is complemented by a multi-layered setup of Service Control Policies (SCPs) across their cloud accounts to ensure compliance and security (a sketch of one such guardrail follows this list).
  5. The company is committed to robust change management processes for its production deployments and expects similar automated processes to be in place on the part of their vendors, in line with industry best practices.
  6. CryoDyne’s enterprise architecture department requires a detailed peer review of any proposed solution designs.
  7. Security is paramount, with a stipulation that all services must be secure and only accessible to those within the company who have the necessary permissions.
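To make requirement 4 a little more concrete, here is a minimal sketch (not CryoDyne’s actual policy) of how one compliance guardrail could be expressed as a Service Control Policy and attached via AWS Organizations. The policy content, policy name, and OU identifier are illustrative assumptions.

```python
import json
import boto3

# Hypothetical guardrail: deny actions outside approved EU regions, a common
# pattern for GDPR-constrained workloads. All names and IDs are illustrative.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["eu-west-1", "eu-central-1"]}
            },
        }
    ],
}

org = boto3.client("organizations")

policy = org.create_policy(
    Name="cryodyne-approved-regions",                      # illustrative name
    Description="Restrict workloads to approved EU regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an organisational unit (the OU id is a placeholder).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)
```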

Bearing these critical elements in mind, let’s delve into the “Quantum Black” article and examine how AWS services can be effectively utilised to platform this initiative.

Data Layer

The foundational article outlines a sophisticated data management strategy for modern enterprises, focusing on a data lake and Python-based data processing libraries like Kedro. This system facilitates parsing, chunking, metadata enrichment, and vectorisation, leading to an organised vector database.

  1. Separation of Concerns: A standalone account for data management activities reinforces security and adherence to compliance norms.
  2. Data Access Control: This, as with everything security related, needs to be layered. Vector databases typically interact with users through HTTP APIs, so the natural fit was the API Gateway with RBAC.
  3. Data Staging: We’ve selected Amazon S3 as our dependable staging (landing) platform for its capacity to handle a multitude of data formats efficiently and its “infinite” scalability.
  4. Data Processing: For data engineering tasks such as parsing, chunking, and vectorisation, we leverage AWS services like Step Functions or potentially AWS Managed Airflow. The article references Kedro, a Python framework by Quantum Black, which might indicate a bias; however, various libraries exist (LangChain, LlamaIndex) for chunking and vectorisation. These tasks are time-consuming, involving data partitioning and iterative processing for each partition, making Step Functions or Airflow suitable choices (see the sketch that follows this list).
  5. Vector Store: Here you have plenty to consider. Amazon offers a managed service called OpenSearch; however, many vector databases like Qdrant and Chroma offer container versions that can be deployed with mounted volumes. This might need prototyping work to arrive at the correct final decision.
  6. Knowledge APIs: Knowledge APIs can be used alongside vector DBs to enhance RAG applications, so AWS Neptune could potentially play a pivotal role in managing graph-based query operations.
  7. Scalability Concerns: In pursuit of handling vast data processing demands, we are investigating solutions that are compatible with S3’s scalable storage capabilities.
  8. Dynamic Data Access for LLMs: Function calls are designed to facilitate a dynamic and interactive engagement with LLMs within the datastore.
  9. Data Access Control Module: Central to our architecture is the API Gateway with RBAC, which ensures data layers are only accessible to authorised personnel.
  10. Vector Caching: There are many scenarios where semantically similar queries are not lexically identical, for instance: “How much are the Adidas trainers?” and “How expensive are those Adidas?” semantically require the same response. There is no point making a round trip for the latter if you have the cached response for the former. DynamoDB seems a good NoSQL option to implement this (a minimal sketch appears at the end of this section).
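To illustrate the data processing step (item 4), here is a minimal sketch of parsing, chunking, and vectorising a staged document. The bucket name, the SageMaker embedding endpoint, and the payload shape are assumptions; in practice this logic would sit inside a Step Functions task or an Airflow operator.

```python
import json
import boto3
from langchain.text_splitter import RecursiveCharacterTextSplitter

s3 = boto3.client("s3")
smr = boto3.client("sagemaker-runtime")

BUCKET = "cryodyne-staging"                    # illustrative staging bucket
EMBEDDING_ENDPOINT = "ka-embedding-endpoint"   # assumed SageMaker endpoint name

def parse_and_chunk(key: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Pull a raw document from the staging bucket and split it into overlapping chunks."""
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read().decode("utf-8")
    splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=overlap)
    return splitter.split_text(body)

def embed(chunks: list[str]) -> list[list[float]]:
    """Vectorise chunks via a SageMaker embedding endpoint (payload shape is model-specific)."""
    response = smr.invoke_endpoint(
        EndpointName=EMBEDDING_ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"inputs": chunks}),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    chunks = parse_and_chunk("reports/molecule-trial-42.txt")  # illustrative key
    vectors = embed(chunks)
    # Next step: upsert (chunk, vector, metadata) into the chosen vector store.
```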

The Data Layer serves a dual role:

  1. It consumes embeddings from the LLM Gateway.
  2. It provides data and retrieval-augmented generation (RAG) capabilities to the Application Layer.

In addition to the immutable constraints, the skills required to maintain its vast amounts of data lie within the company’s centralised data team, which will lean on already tried-and-tested ETL/ELT techniques and tools.
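As a rough illustration of the vector caching idea (item 10 above), the sketch below stores query embeddings and responses in a DynamoDB table and returns a cached answer when a new query is semantically close enough. The table name, attribute layout, and similarity threshold are assumptions, and the full-table scan is only workable for small caches.

```python
import boto3
import numpy as np

# Assumed table layout: partition key "query_hash", plus "embedding" (list of
# numbers stored as strings) and "response" (string). Names are illustrative.
cache = boto3.resource("dynamodb").Table("ka-vector-cache")
SIMILARITY_THRESHOLD = 0.92  # illustrative cut-off for "same question"

def cosine(a, b) -> float:
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def lookup(query_embedding):
    """Return a cached response if a semantically similar query was already answered."""
    # A scan is fine for a small cache; at scale you would use a proper vector
    # index or a partitioned lookup instead.
    for item in cache.scan()["Items"]:
        if cosine(query_embedding, item["embedding"]) >= SIMILARITY_THRESHOLD:
            return item["response"]
    return None

def store(query_hash: str, query_embedding, response: str) -> None:
    cache.put_item(Item={
        "query_hash": query_hash,
        # DynamoDB numbers must be Decimals; strings keep this sketch simple.
        "embedding": [str(x) for x in query_embedding],
        "response": response,
    })
```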

LLM Layer

The LLM Layer, central to our architecture as detailed in the Quantum Black paper, serves as the primary hub for processing language model requests. It comprises critical elements like the LLM API Gateway for scalable integrations and facilitating LLM Agnosticism, alongside a logging platform for data analytics and enhancement insights.

Embracing an ‘LLM Gateway’ reflects a readiness to separate LLM APIs from applications, enabling swift replacement of LLMs, a crucial factor given the rapid evolution and diversity of models. This adaptability is vital, especially when integrating with the complex AWS environment.
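A minimal sketch of that separation, under my own assumptions rather than the Quantum Black design, is a Lambda handler behind the LLM API Gateway that routes a single request shape to either a SageMaker endpoint or an Amazon Bedrock model based on configuration. The endpoint name, model ID, and request payloads are illustrative, and payload shapes are model-specific.

```python
import json
import os
import boto3

# Backend selection is configuration, not application code, so models can be
# swapped without touching callers. All names here are illustrative.
BACKEND = os.environ.get("LLM_BACKEND", "sagemaker")            # "sagemaker" or "bedrock"
SAGEMAKER_ENDPOINT = os.environ.get("LLM_ENDPOINT", "ka-llm-endpoint")
BEDROCK_MODEL_ID = os.environ.get("BEDROCK_MODEL_ID", "anthropic.claude-v2")

smr = boto3.client("sagemaker-runtime")
bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    """Lambda behind the LLM API Gateway: one request shape in, one backend out."""
    prompt = json.loads(event["body"])["prompt"]

    if BACKEND == "bedrock":
        resp = bedrock.invoke_model(
            modelId=BEDROCK_MODEL_ID,
            contentType="application/json",
            body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
        )
        completion = json.loads(resp["body"].read())
    else:
        resp = smr.invoke_endpoint(
            EndpointName=SAGEMAKER_ENDPOINT,
            ContentType="application/json",
            Body=json.dumps({"inputs": prompt}),
        )
        completion = json.loads(resp["Body"].read())

    return {"statusCode": 200, "body": json.dumps(completion)}
```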

The LLM Layer leverages a comprehensive suite of AWS services to augment the lifecycle of machine learning models:

  1. Amazon SageMaker for building, training, and deploying models, offering a pipeline that can and should include evaluation steps, resulting in a production-ready SageMaker endpoint.
  2. Amazon DynamoDB, ensuring quick, consistent NoSQL database performance with easy scalability.
  3. Amazon CloudWatch for monitoring cloud resources and applications on AWS.
  4. Amazon API Gateway to efficiently create, manage, and secure APIs at scale.
  5. AWS CodePipeline for automated continuous integration and delivery.
  6. AWS CodeCommit for secure cloud-based code storage (more likely to be GitHub in most companies).
  7. AWS CodeBuild for continuous integration tasks like compiling, testing, and packaging software (GitHub Actions is also a likely candidate).
  8. AWS CodeDeploy for automated code deployments across environments.
  9. Amazon Elastic Container Registry (ECR) for managing Docker container images.

Data Science Account

In our enhanced model framework, I’ve implemented the ‘AI Factory’, a specialised environment dedicated to refining Large Language Models (LLMs) for a domain-specific lexicon; in the case of CryoDyne, possibly molecular structure analysis in drug discovery. We utilise AWS SageMaker Studio and SageMaker JumpStart for the deployment of both standard machine learning models (which are often more appropriate for many use cases initially thought to be suitable for LLMs) and Hugging Face open-source LLMs. This layer acts as a central hub for LLM development, offering a dedicated testing space for different business sectors. It supports experimentation with both proprietary LLMs from companies like Anthropic and Cohere and open-source models, subject to legal approvals.

The AI Factory is designed to perpetuate a cycle of continuous improvement through an LLM Evaluation Pipeline. This could include A/B testing frameworks leveraging the model variants feature in SageMaker, allowing for comparative analysis and optimisation of different models. This setup ensures a dynamic and adaptable deployment of LLMs throughout various corporate functions, aligning with evolving enterprise requirements.
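As a rough sketch of that A/B testing idea, the snippet below creates a SageMaker endpoint configuration with two weighted production variants so that a champion and a challenger model share traffic behind one endpoint. Model names, instance types, and weights are illustrative assumptions.

```python
import boto3

sm = boto3.client("sagemaker")

# Two candidate models (already registered in SageMaker) are served behind one
# endpoint, with traffic split 80/20 for comparison. All names are illustrative.
sm.create_endpoint_config(
    EndpointConfigName="ka-llm-ab-test",
    ProductionVariants=[
        {
            "VariantName": "champion",
            "ModelName": "ka-llm-v1",
            "InstanceType": "ml.g5.2xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.8,
        },
        {
            "VariantName": "challenger",
            "ModelName": "ka-llm-v2",
            "InstanceType": "ml.g5.2xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.2,
        },
    ],
)

sm.create_endpoint(EndpointName="ka-llm", EndpointConfigName="ka-llm-ab-test")

# Per-variant invocation metrics land in CloudWatch, and weights can later be
# shifted with update_endpoint_weights_and_capacities once a winner emerges.
```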

Reporting Layer

I have extended the interpretation of this account to be the service account, acting as an account factory, cost control room, and early warning account.

The reasoning for this lies in immutable requirement 4: all accounts are created using the Control Tower landing zone architecture, which already has centrally managed logs and seems an ideal place for cost reporting and FinOps operations.

The Reporting Layer, integral for transparency in costs, usage, and data analytics, is implemented using AWS services like CloudWatch and Cost Explorer. This layer is designed to provide comprehensive insights into the KA’s operational dynamics, crucial for both management and continuous improvement, as noted in the foundational article.

The following services were chosen:

  1. LLM API Gateway: This is a custom-named service in the architecture using Amazon API Gateway, which is used to create, publish, maintain, monitor, and secure reporting-related APIs.
  2. DynamoDB: A NoSQL database for user metadata.
  3. Cost Explorer: A service that enables you to visualise, understand, and manage your AWS costs and usage over time.
  4. Alarm: This refers to Amazon CloudWatch Alarms, leveraged as an early warning system for specific conditions in your application, sending notifications or taking automated actions (a sketch follows this list).
  5. Budgets: AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecast to exceed) your budgeted amount.
  6. Cost & Usage Report: This refers to the AWS Cost and Usage Reports service, which delivers detailed reports on your AWS costs and usage.
  7. AWS Organizations: Enables the logical grouping of accounts with similar policy-based management needs. This allows CryoDyne to automate compliance requirements across all its AWS accounts, aligning with their immutable requirements.
  8. CloudFormation: Infrastructure as code required for consistent infrastructure deployments across the enterprise (Terraform is also a popular choice).
  9. Config: AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
  10. Control Tower: AWS Control Tower is a service that provides the easiest way to set up and govern new, secure, multi-account AWS environments for large organisations.
  11. Service Catalog: AWS Service Catalog allows organisations to create and manage catalogues of IT services that are approved for use on AWS.
  12. EventBridge: Amazon EventBridge is a serverless event bus service that you can use to connect your applications with data from a variety of sources.
  13. CloudWatch: Amazon CloudWatch acts as a central monitoring and observability service for AWS cloud resources; dashboards related to KA metrics can be collated and rendered here.
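To illustrate the early warning role (item 4 above), here is a minimal sketch of a CloudWatch billing alarm. The threshold and SNS topic are placeholders, and billing metrics are only published in us-east-1 once billing alerts are enabled on the account.

```python
import boto3

# Billing metrics are only published to us-east-1 and require billing alerts
# to be enabled; the SNS topic ARN and threshold are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="ka-monthly-spend-early-warning",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                       # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5000.0,                   # alert once estimated charges pass $5k
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:finops-alerts"],
)
```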

This represents a typical AWS environment setup for monitoring and managing a multi-account structure with AWS governance services, particularly focusing on cost management, resource configuration, and service orchestration.

Application Layer

The Application Layer, where user interactions occur, comprises the frontend, operational stores, configuration stores, and backend. This is where the bulk of software development will be delivered. Ultimately this work will be software based, and as such containers seem the logical choice, leveraging AWS container orchestration services like EKS, ECS, Fargate, and Lambda, along with a React-based frontend.

I decided to just include them all, as the right choice really depends on many factors including cost, scale, and expected average execution time.

  1. User Interface: The article talks about using the React framework. I also always like to include the possibility of a chatbot, seeing as chatbots represent the most natural way to converse, with emojis providing a wealth of evaluation feedback.
  2. EKS (Elastic Kubernetes Service): This is a managed service that makes it easy to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane.
  3. ECS (Elastic Container Service): A highly scalable, high-performance container management service that supports Docker containers and allows you to run applications on a managed cluster of Amazon EC2 instances.
  4. Fargate: A compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.
  5. Lambda: A compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically.
  6. DynamoDB: A fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
  7. API Gateway: As explained in other layers (see the RBAC sketch after this list).
  8. Application Container Registry: Although not an AWS service by this exact name, it likely refers to Amazon Elastic Container Registry (ECR), a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
  9. S3 Bucket for Log Storage: This bucket will be the central logging bucket with some of its content being shared to the reporting layer for cost management and the Data Science layer for feedback.
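To illustrate how the RBAC requirement might surface in the backend, below is a minimal sketch of a Lambda handler that reads a department claim injected by an API Gateway authoriser (for example, Cognito) and only answers queries for the caller’s own department KA. The claim name, KA identifiers, and the RAG call are hypothetical.

```python
import json

# Assumed: an API Gateway authoriser (e.g. Cognito) injects the caller's
# department claim into the request context. Claim and KA names are illustrative.
DEPARTMENT_FOR_KA = {"oncology-ka": "oncology", "finance-ka": "finance"}

def answer_with_rag(ka_id: str, question: str) -> str:
    """Placeholder for the retrieval-augmented generation call into the Data and LLM layers."""
    return f"[{ka_id}] would answer: {question}"

def handler(event, context):
    claims = event["requestContext"]["authorizer"].get("claims", {})
    department = claims.get("custom:department")
    ka_id = event["pathParameters"]["ka_id"]

    # Only users in the matching department may query that department's KA.
    if DEPARTMENT_FOR_KA.get(ka_id) != department:
        return {"statusCode": 403,
                "body": json.dumps({"error": "not authorised for this assistant"})}

    answer = answer_with_rag(ka_id, json.loads(event["body"])["question"])
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```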

Conclusion

Our strategy for incorporating LLMs into AWS services and architecture is inspired by Quantum Black’s principles, focusing on crafting efficient, scalable, and secure solutions for contemporary enterprise issues such as those faced by CryoDyne Pharma. Alongside AWS services, we will be implicitly utilising Docker for containerised application management, React for user interface development, and JavaScript for front-end design. Services like AWS VPC, subnets, AWS PrivateLink, and IAM are assumed to be leveraged in each account but are beyond the scope of this discussion and generally applied in a repetitive nature. Next, I will be adapting these solutions for the Azure and GCP platforms.

The Full Diagram can be found here

I will be in Copenhagen from Jan 29th to Jan 31st, New York from Jan 21st to Jan 28th (Seattle on the 24th), and Orlando from Feb 11th to Feb 14th. If you are about and want to talk about all things Cloud, MLOps, and LLMs, or just talk tech in general, feel free to connect on LinkedIn; I'm always happy to have conversations online.

Acknowledgments

Acknowledging Dr. Sokratis Kartakis and Heiko Hotz from AWS (Heiko is soon to join Google DeepMind, congrats to him), who pioneered architectures for standardising LLM deployment and operations on AWS. This work draws significant inspiration from their insightful Twitch talk. They have worked on some great starter libraries for LLM pipelines at scale on SageMaker (disclaimer: they might not work out of the box).
