Ram Vegiraju

Pinned

About Me — Ram Vegiraju

My Top Medium Stories — Subscribe Here & Join My Newsletter — Hey everyone, thank you for taking the time to visit my page! I just wanted to give a quick introduction and some background about myself in this article, as well as share some of my top-performing articles. I recently graduated from the University of Virginia in 2021 and moved out…

Personal

2 min read


Published in Towards Data Science · 1 day ago

Deploying SageMaker Endpoints With CloudFormation

Infrastructure As Code With SageMaker — In the past I’ve worked with SageMaker deployment through Jupyter Notebooks and Python scripts. This is completely fine, but oftentimes, in the scope of a larger application, you need to be able to define your SageMaker resources with the rest of your infrastructure in a central template. This brings…

SageMaker

5 min read

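The idea of defining SageMaker resources as code can be sketched as a CloudFormation template with the three resources an endpoint needs: a Model, an Endpoint Config, and an Endpoint. This is a minimal sketch built as a Python dict; every name, ARN, image URI, and S3 path below is a placeholder, not a value from the article.

```python
import json

# Minimal CloudFormation template for a SageMaker endpoint, built in Python.
# All ARNs, image URIs, and bucket names are hypothetical placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoModel": {
            "Type": "AWS::SageMaker::Model",
            "Properties": {
                "ExecutionRoleArn": "arn:aws:iam::123456789012:role/demo-sagemaker-role",
                "PrimaryContainer": {
                    "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-image:latest",
                    "ModelDataUrl": "s3://demo-bucket/model.tar.gz",
                },
            },
        },
        "DemoEndpointConfig": {
            "Type": "AWS::SageMaker::EndpointConfig",
            "Properties": {
                "ProductionVariants": [{
                    # Reference the model created above by its generated name.
                    "ModelName": {"Fn::GetAtt": ["DemoModel", "ModelName"]},
                    "VariantName": "AllTraffic",
                    "InitialInstanceCount": 1,
                    "InstanceType": "ml.c5.large",
                    "InitialVariantWeight": 1.0,
                }],
            },
        },
        "DemoEndpoint": {
            "Type": "AWS::SageMaker::Endpoint",
            "Properties": {
                "EndpointConfigName": {"Fn::GetAtt": ["DemoEndpointConfig", "EndpointConfigName"]},
            },
        },
    },
}

print(json.dumps(template)[:60])
```

Deploying the JSON above with `aws cloudformation deploy` would create all three resources in dependency order, and deleting the stack tears them down together.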

Published in Towards Data Science · Aug 8

Debugging SageMaker Endpoints Quickly With Local Mode

Stop Waiting For Your Endpoints To Create — For frequent users of SageMaker Inference, a common frustration is how hard it is to debug endpoints quickly. Oftentimes with SageMaker Endpoints you end up with a custom inference script that helps you control the pre- and post-processing of your model. When I first started with SageMaker I would…

SageMaker

4 min read

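The custom inference script mentioned above follows the handler convention SageMaker's framework containers look for (`model_fn`, `input_fn`, `predict_fn`, `output_fn`), and Local Mode lets you exercise that script on your own machine by passing `instance_type="local"` to the SageMaker Python SDK. Here is a toy skeleton of that handler shape; the "model" is a stand-in, not a real artifact.

```python
import json

# Toy sketch of a SageMaker custom inference script. The model below is a
# hypothetical stand-in so the pre/post-processing flow can run anywhere.

def model_fn(model_dir):
    # Normally deserializes the trained model found under model_dir.
    return {"weights": [2.0]}  # placeholder "model"

def input_fn(request_body, content_type="application/json"):
    # Pre-processing: parse the raw request payload into model inputs.
    return json.loads(request_body)["inputs"]

def predict_fn(inputs, model):
    # Toy "inference": scale each input by the model weight.
    w = model["weights"][0]
    return [x * w for x in inputs]

def output_fn(prediction, accept="application/json"):
    # Post-processing: serialize the prediction back to the caller.
    return json.dumps({"predictions": prediction})

# Simulate one request end to end, the way the container would chain them.
model = model_fn("/opt/ml/model")
payload = input_fn('{"inputs": [1, 2, 3]}')
print(output_fn(predict_fn(payload, model)))
```

Because each stage is a plain function, you can unit-test the script locally before an endpoint ever exists, which is exactly the debugging loop Local Mode speeds up.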

Published in Towards Data Science · Jul 19

Dockerizing Flask ML Applications

Guide On Deploying ML Models With Flask and Containerizing Your Work — Deploying ML models is an essential step in the ML lifecycle that’s often overlooked by Data Scientists. Without model deployment/hosting, Machine Learning models cannot be used in real-world applications. …

DevOps

7 min read

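Containerizing a Flask app typically comes down to a short Dockerfile like the sketch below; `app.py` and `requirements.txt` are assumed file names, and the port is a placeholder.

```dockerfile
# Minimal sketch for containerizing a Flask ML app (file names assumed).
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Building with `docker build -t flask-ml .` and running with `docker run -p 8080:8080 flask-ml` gives you the same server locally that you would ship to any container host.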

Published in Towards Data Science · Jul 15

Pushing Docker Images to Amazon Elastic Container Registry

Step by Step Guide — Amazon Elastic Container Registry (ECR) is a container image registry that we can push Docker images to on AWS. Why use a container registry? It makes it easy to manage your various images and separate your projects. For example, when I first started working with Docker locally I didn’t…

AWS

4 min read

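An ECR image URI always has the same shape: registry host, repository, tag. This small sketch composes that URI (account ID, region, and repo name are placeholders) and shows the usual login/tag/push flow as comments.

```python
# Compose an ECR image URI; account id, region, and repo are placeholders.
def ecr_image_uri(account_id, region, repo, tag="latest"):
    registry = f"{account_id}.dkr.ecr.{region}.amazonaws.com"
    return f"{registry}/{repo}:{tag}"

uri = ecr_image_uri("123456789012", "us-east-1", "demo-repo")
print(uri)

# Typical push flow against that URI (run in a shell, not Python):
#   aws ecr get-login-password --region us-east-1 \
#     | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
#   docker tag demo-repo:latest <uri>
#   docker push <uri>
```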

Published in Towards Data Science · Jul 14

AWS Lambda Function URLs

Serverless made easier than ever — Traditionally in AWS applications you had an API Gateway fronting your Lambda function. The REST API created by API Gateway would serve as the endpoint to invoke to access your backend Lambda function. While there’s nothing wrong with this pattern, it added some extra grunt work. For starters, there were…

AWS

5 min read

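A Function URL replaces that API Gateway layer with a single boto3 call. This sketch builds the parameters for `create_function_url_config` (the function name is a placeholder); the call itself is commented out so the snippet runs without AWS credentials.

```python
# Parameters for attaching a Function URL to an existing Lambda.
# "demo-function" is a hypothetical function name.
params = {
    "FunctionName": "demo-function",
    "AuthType": "NONE",  # or "AWS_IAM" to require SigV4-signed requests
}

# import boto3
# lambda_client = boto3.client("lambda")
# response = lambda_client.create_function_url_config(**params)
# response["FunctionUrl"] is the public HTTPS endpoint for the function.

print(params["FunctionName"])
```

With `AuthType="NONE"` the returned URL is publicly invocable, which is exactly the grunt work (routes, stages, deployments) API Gateway used to require.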

Published in Towards AWS · Jul 12

Amazon Rekognition Custom Labels

End-to-End Example Utilizing Rekognition Custom Labels for Image Classification — Outside of Amazon SageMaker, AWS offers a suite of AI/ML services tailored for AutoML. In cases where you need to inject ML into your applications and don’t have the theoretical experience or time to build your own models, this set of services proves extremely handy. Under…

Machine Learning

6 min read


Published in Towards Data Science · Jun 8

Hosting Models with TF Serving on Docker

Deploy TensorFlow models as REST endpoints — Training a Machine Learning (ML) model is only one step in the ML lifecycle. There’s no purpose to ML if you cannot get a response from your model; you must be able to host your trained model for inference. There’s a variety of hosting/deployment options that can be used for…

TensorFlow

5 min read

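Once the TF Serving container is up, predictions go through a fixed REST path: `/v1/models/<name>:predict` on port 8501 (the TF Serving default). This sketch builds that request; the model name is a placeholder, and the docker command is shown as a comment.

```python
import json

# Start the server first, e.g. (model directory and name are assumptions):
#   docker run -p 8501:8501 \
#     -v "$PWD/my_model:/models/my_model" \
#     -e MODEL_NAME=my_model tensorflow/serving

def predict_request(model_name, instances, host="localhost", port=8501):
    """Build the URL and JSON body for a TF Serving REST predict call."""
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = predict_request("my_model", [[1.0, 2.0]])
print(url)
# The body would then be POSTed to the URL (e.g. with requests.post),
# and TF Serving responds with {"predictions": [...]}.
```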

Published in Towards Data Science · Apr 25

SageMaker Batch Transform

Generate large offline predictions with a Sklearn example — In my last article I talked about the latest SageMaker Inference option, Serverless Inference. An older, yet equally important option is SageMaker Batch Transform. Sometimes our Machine Learning models don’t necessarily need a persistent endpoint: we just have a large set of data, and we want inference…

SageMaker

5 min read

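A Batch Transform job reads a dataset from S3, runs it through an existing SageMaker model, and writes predictions back to S3 with no endpoint left running. This sketch builds the parameters for boto3's `create_transform_job` (job, model, and bucket names are placeholders); the call is commented out so it runs without AWS credentials.

```python
# Parameters for an offline Batch Transform job; all names are hypothetical.
params = {
    "TransformJobName": "demo-batch-job",
    "ModelName": "demo-sklearn-model",  # an already-registered SageMaker model
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://demo-bucket/input/",  # dataset to score
            }
        },
        "ContentType": "text/csv",
    },
    "TransformOutput": {"S3OutputPath": "s3://demo-bucket/output/"},
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
}

# import boto3
# sm = boto3.client("sagemaker")
# sm.create_transform_job(**params)  # instances spin up, score, then shut down

print(params["TransformJobName"])
```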

Published in Towards Data Science · Apr 21

SageMaker Serverless Inference Is Now Generally Available

Exploring The Latest SageMaker Inference Option — I’ve been super excited to write this article. ML inference is super interesting in itself; add serverless to it and it becomes that much more interesting! When we talked about Serverless Inference before, we had to look at potentially using services such as AWS Lambda. The problem with services such as…

SageMaker

5 min read

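What makes an endpoint serverless is the `ServerlessConfig` block in its endpoint config: instead of an instance type and count, you choose a memory size and a concurrency cap. This sketch builds that production variant for boto3's `create_endpoint_config` (model and config names are placeholders); the call is commented out.

```python
# A serverless production variant: memory and concurrency replace instances.
variant = {
    "ModelName": "demo-model",  # hypothetical registered model
    "VariantName": "AllTraffic",
    "ServerlessConfig": {
        "MemorySizeInMB": 2048,  # 1024-6144, in 1 GB increments
        "MaxConcurrency": 5,     # cap on concurrent invocations
    },
}

# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(
#     EndpointConfigName="demo-serverless-config",
#     ProductionVariants=[variant],
# )
# sm.create_endpoint(EndpointName="demo-serverless",
#                    EndpointConfigName="demo-serverless-config")

print(variant["VariantName"])
```

With this config the endpoint scales to zero when idle, which is what distinguishes it from the persistent real-time endpoints in the earlier articles.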
Ram Vegiraju

Passionate about AWS & ML

