September 12, 2024

Introduction

Integrating machine learning models into serverless architectures has become an essential strategy for modern developers, enabling scalable and efficient application deployment. In this tutorial, we delve into the seamless integration of Amazon Bedrock models with AWS Lambda, harnessing the power of serverless computing to invoke complex machine learning algorithms. Amazon Bedrock, known for its robust AI and ML capabilities, combined with the flexible, event-driven nature of AWS Lambda, offers an unparalleled approach to building intelligent applications.

This guide will walk you through the step-by-step process of setting up and invoking Bedrock models using AWS Lambda. From initial setup and configuration to practical implementation and optimization, we cover all the necessary details to ensure a smooth and effective integration. Whether you’re a seasoned developer or new to serverless computing, this comprehensive tutorial provides valuable insights and practical tips to help you leverage the full potential of Bedrock models within your AWS Lambda functions. Get ready to transform your applications with advanced AI capabilities and elevate your serverless architecture to the next level.

Prerequisites

  • An AWS account with administrator access permissions

Step 1: Check the available models in your account

  • Navigate to the Amazon Bedrock service from the AWS services menu.
  • On the welcome page, expand the left navigation panel.
  • Scroll to the bottom of the left panel and click ‘Model access’ to view which models your account can access.
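You can also check model access programmatically. The sketch below (our own helper, not part of the tutorial's code) uses boto3's `bedrock` control-plane client to list foundation models and filter for Amazon text models; running `list_available_text_models` assumes local AWS credentials with Bedrock permissions.

```python
def filter_text_models(model_summaries, provider='Amazon'):
    """Return model IDs from a given provider that produce text output."""
    return [
        m['modelId']
        for m in model_summaries
        if m.get('providerName') == provider
        and 'TEXT' in m.get('outputModalities', [])
    ]

def list_available_text_models(region_name='us-east-1'):
    # Imported lazily so filter_text_models can be used without boto3.
    # Requires AWS credentials with Bedrock permissions.
    import boto3
    bedrock = boto3.client('bedrock', region_name=region_name)
    summaries = bedrock.list_foundation_models()['modelSummaries']
    return filter_text_models(summaries)
```

Calling `list_available_text_models()` in a configured environment should include `amazon.titan-text-express-v1` if you have been granted access to it.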

Step 2: Create the Lambda function and invoke the model

  • Navigate to the Lambda service from the AWS services menu and click ‘Create Function’.
  • Provide a name for your function and select the runtime, such as Python 3.12 or another Python version.
  • Click ‘Create Function’ to be redirected to the code editor for the Lambda function.
  • Note: The code below uses the request format for the “Amazon Titan Text models”. If you opt for a different model, the request body format may differ.
  • More information about model IDs is available at the following link: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
  • In the code editor, write the following code:
				
import json
import boto3

# Replace the placeholders below with your values. Inside Lambda it is
# safer to grant Bedrock permissions to the function's execution role
# and omit the explicit keys entirely.
AWS_ACCESS_KEY = '<Your_Access_Key>'
AWS_SECRET_KEY = '<Your_Secret_Key>'
REGION_NAME = '<Your_Region>'

def call_bedrock_model(user_prompt):
    brt = boto3.client(
        service_name='bedrock-runtime',
        region_name=REGION_NAME,
        aws_access_key_id=AWS_ACCESS_KEY,
        aws_secret_access_key=AWS_SECRET_KEY
    )
    country = user_prompt
    prompt = f"Tell me about the capital of {country}"

    # Define the request body (Amazon Titan Text format)
    body = json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": 512,  # raise or lower depending on how long a reply you need
            "stopSequences": [],
            "temperature": 0.7,
            "topP": 0.9
        }
    })

    modelId = 'amazon.titan-text-express-v1'  # Model ID
    accept = 'application/json'
    contentType = 'application/json'
    try:
        response = brt.invoke_model(body=body, modelId=modelId,
                                    accept=accept, contentType=contentType)
        response_body = json.loads(response.get('body').read())

        # Extract the generated text
        response_text = response_body.get('results')[0]['outputText']
        print(response_text)
    except Exception as e:
        print("Unexpected error:", e)
        return "Internal Server Error."

    return response_text

def lambda_handler(event, context):
    body = event.get('body', '{}')
    body_json = json.loads(body)
    user_input = body_json.get('inputText')
    response = call_bedrock_model(user_input)
    print(response)
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }
  • Click ‘Deploy’ to publish your changes to the function before testing.
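Before exposing the function publicly, you can sanity-check the handler's request parsing locally. The sketch below is a simplified copy of the handler with the Bedrock call made injectable so it can run offline; the stub reply and the `invoke_model` parameter are ours, not part of the tutorial's code. The test event mirrors what a Function URL delivers: the POST body arrives as a JSON string under the `'body'` key.

```python
import json

def lambda_handler(event, context, invoke_model=None):
    """Simplified handler with the Bedrock call injectable for offline tests."""
    body_json = json.loads(event.get('body', '{}'))
    user_input = body_json.get('inputText')
    # Fall back to a stub so the handler can run without AWS access.
    model = invoke_model or (lambda p: f"stubbed reply about {p}")
    response = model(user_input)
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }

# A Function URL delivers the POST body as a JSON string under 'body'.
test_event = {'body': json.dumps({'inputText': 'India'})}
result = lambda_handler(test_event, None)
print(result['statusCode'])  # 200
```

In the deployed function, `call_bedrock_model` plays the role of `invoke_model` here.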

Step 3: Create the Function URL and test it

  • Navigate to the created Lambda function, click on ‘Configuration’, and go to the ‘Function URL’ tab.
  • Click on the “Create Function URL” button, select “None” for the authentication type (note: this makes the endpoint publicly accessible), and click “Save”.
  • Copy the generated Function URL.
  • Go to the “General Configuration” tab to increase the request timeout.
  • Click on the “Edit” button, increase the timeout (Lambda allows up to 15 minutes) so that longer model invocations don’t time out, and click “Save”.
  • We can use cURL or Postman to test the function.

1. Using cURL request

				
curl --location 'URL' \
--header 'Content-Type: application/json' \
--data '{
    "inputText": "India"
}'
				
			

2. Using Postman request

  • Open Postman and create a new request.
  • Select the POST method and paste the Function URL in the URL section.
  • Go to the “Body” section, select “raw,” and choose “JSON” format.
				
{
    "inputText": "input_value"
}

				
			
  • Click “Send” to test the Lambda function and invoke the Bedrock model.
  • Be sure to replace “input_value” and the URL with the input your function expects and your own Function URL.
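You can also invoke the Function URL from Python. The sketch below uses only the standard library's `urllib.request`; `FUNCTION_URL` is a placeholder you must replace, and `build_request`/`invoke` are our own helper names.

```python
import json
import urllib.request

FUNCTION_URL = 'URL'  # replace with your Function URL

def build_request(url, input_text):
    """Build the JSON POST request the Lambda Function URL expects."""
    payload = json.dumps({'inputText': input_text}).encode('utf-8')
    return urllib.request.Request(
        url,
        data=payload,
        headers={'Content-Type': 'application/json'},
        method='POST',
    )

def invoke(url, input_text):
    # Network call: run only after replacing FUNCTION_URL above.
    with urllib.request.urlopen(build_request(url, input_text)) as resp:
        return json.loads(resp.read())
```

Calling `invoke(FUNCTION_URL, 'India')` sends the same request as the cURL and Postman examples above.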

Conclusion

Integrating machine learning models into serverless architectures has become indispensable for developers seeking scalable and efficient application deployment. Throughout this tutorial, we explored the seamless integration of Amazon Bedrock models with AWS Lambda, leveraging serverless computing to invoke sophisticated machine learning algorithms. Amazon Bedrock, renowned for its robust AI and ML capabilities, when combined with AWS Lambda’s flexible, event-driven architecture, offers a powerful framework for building intelligent applications.
