
Highly motivated Data Analyst with expertise in Data Science, Machine Learning, and Python. Experienced in data analysis, data processing, and data modeling. Skilled in using Python for ML engineering and data visualization. Seeking an opportunity to utilize my skills and contribute to the growth of a forward-thinking company.
Machine Learning Engineer
Siril Technologies Pvt. Ltd
Python

Git

JavaScript

AWS Cloud
C++

Java

HTML, CSS and JavaScript

MySQL

GCP
Docker

Kubernetes

Spark

Hadoop

Hive
Azure

AWS
Working with Anvitha Mallela has been an absolute pleasure. Their expertise in machine learning and dedication to delivering high-quality solutions have consistently impressed me. They have a knack for tackling complex problems with innovative approaches and have played a crucial role in driving our machine learning initiatives forward. I highly recommend Anvitha Mallela for any project requiring a skilled and experienced machine learning engineer.
During my tenure as a Machine Learning Engineer at Siril Technologies Pvt Ltd, I spearheaded a project focused on developing a predictive maintenance solution for industrial equipment. The goal was to leverage machine learning algorithms to anticipate equipment failures and minimize costly downtime for our manufacturing clients.
This project automatically recognizes human actions by analyzing body landmarks obtained through pose estimation.
This project builds a machine learning model to predict whether a policyholder will initiate an auto insurance claim in the next year.
This project builds an intelligent conversational chatbot, Riki, that can understand complex queries from the user and respond intelligently.
Yes. My name is Ram Wabu, and I'm currently seeking a new job opportunity. I'm based in Hyderabad, and I completed my Bachelor of Technology in Computer Science and Engineering from CMR Engineering College, affiliated with JNTU Hyderabad. I began my professional journey as an ML engineer at Siril Technologies, and concurrently I pursued a distance postgraduate program in data science from NIT Warangal, which I successfully completed in April 2023. I have hands-on experience in data structures and algorithms, Python and C++ programming, statistical analysis, machine learning, AWS (Amazon Web Services), data visualization, predictive modeling, data processing, and data mining algorithms for solving challenging business problems. With over three years of experience, I am proficient in AWS and in Python data manipulation using libraries such as NumPy, SciPy, and pandas. I also have a deep understanding of machine learning and deep learning applications, including computer vision, recommendation systems, and natural language processing. My coding skills enable me to produce clean, efficient code and to work seamlessly with structured, semi-structured, and unstructured data using Python, Spark, SQL, AWS, and big data tools. Over the past three years at Siril Technologies, I have contributed to solving client problems; maintaining client relationships and helping businesses overcome challenges have been integral parts of my role. Now I'm eager to advance my career, and I'm confident that my three years of experience, my educational background, and my enthusiasm for the industry would make me a valuable addition to your team.
My strengths are being a quick learner and a team player. In the short term, my goal is to secure a position in a reputable organization where I can enhance my skills and gain valuable experience. Looking ahead, my long-term goal is to excel and reach the best position within my field. Outside of work, I enjoy reading books, listening to music, and staying updated on the latest trends through the Internet. In terms of my family, we are four members, including my parents. I'm eager to prove my capabilities and contribute to the success of your team. That's all about me.
Okay. When designing an API gateway for a Java back end to manage traffic spikes effectively, it is essential to consider several best practices to ensure scalability, reliability, and performance. The key practices are load balancing, caching, rate limiting, the circuit breaker pattern, horizontal scaling, monitoring and logging, and auto-scaling policies. With these practices, I can design an API gateway for a Java back end that manages traffic effectively, stays highly available, and delivers optimal performance for our application.

First is load balancing: implement a load balancing mechanism to distribute incoming traffic evenly across multiple back-end servers. This helps prevent overloading individual servers during traffic spikes. Next is caching: use caching mechanisms within the API gateway to cache frequently requested data and responses. Caching helps reduce the load on the back-end servers by serving cached responses to clients, especially for read-heavy workloads. Rate limiting means implementing rate-limiting policies to control the rate of incoming requests from clients; it helps prevent abuse, ensures fair usage of resources, and protects the back-end servers from being overwhelmed during traffic spikes. The circuit breaker pattern is applied to detect and handle failures in downstream services or back-end servers; it prevents cascading failures and lets the gateway degrade functionality gracefully when necessary. Finally, horizontal scaling: design the API gateway for horizontal scalability by deploying multiple instances across multiple servers or containers, which allows the gateway to handle increasing traffic loads.
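The rate-limiting practice above can be sketched as a token bucket. This is an illustrative Python sketch only; a managed gateway such as Amazon API Gateway configures throttling declaratively, and the class and numbers here are assumptions for the example:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # reject: client should back off

limiter = TokenBucket(rate=5, capacity=2)
results = [limiter.allow() for _ in range(3)]  # a burst of 3 instant requests
```

With a capacity of two, the third request in an instantaneous burst is rejected, while a slower client stays under the limit.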
Okay, integrating AWS RDS. Integrating AWS RDS (Relational Database Service) with a Java application to ensure high availability and performance involves several best practices and considerations. Step by step: choose the right RDS instance type, use Multi-AZ deployment for high availability, use read replicas for scalability, optimize the database configuration, use connection pooling, secure database access, monitor and optimize performance, and plan backup and restore. With these best practices, I can integrate AWS RDS with our Java application to achieve high availability, scalability, and performance, and ensure a reliable, efficient database back end for our application.

Choosing the right RDS instance type means selecting an instance type that meets the performance and scalability requirements of our Java application, considering factors such as CPU, memory, storage, and I/O capacity. Multi-AZ deployment means enabling multiple-Availability-Zone deployment for the RDS instance to get high availability and fault tolerance. Read replicas should be implemented for read-heavy workloads to offload read traffic from the primary database instance. Optimizing the database configuration means tuning database parameters and settings based on the workload characteristics and performance requirements of the Java application. Connection pooling should be implemented in the Java application to efficiently manage database connections and minimize connection overhead. Finally, secure database access by implementing IAM (Identity and Access Management) authentication to securely manage access.
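The connection-pooling point can be sketched minimally. In this Python illustration, sqlite3 stands in for the RDS database; a real Java application would use a pooling library such as HikariCP, and the class here is a hypothetical teaching aid, not production code:

```python
import sqlite3
from queue import Queue

class ConnectionPool:
    """Minimal pool: hand out pre-opened connections and take them back.
    sqlite3 stands in for the RDS database purely for illustration."""

    def __init__(self, db_path: str, size: int = 3):
        self._pool: Queue = Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(
                sqlite3.connect(db_path, check_same_thread=False)
            )

    def acquire(self) -> sqlite3.Connection:
        return self._pool.get()      # blocks when the pool is exhausted

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)

pool = ConnectionPool(":memory:", size=2)
conn = pool.acquire()
row = conn.execute("SELECT 1").fetchone()
pool.release(conn)                   # connection is reused, not closed
```

Reusing open connections this way avoids the per-request cost of the TCP and authentication handshake against the database.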
Okay. To optimize a Java-based microservices architecture for performance and scalability on AWS, I can leverage a combination of AWS services that offer scalability, reliability, and performance-tuning capabilities. The key AWS services to consider are Amazon ECS (Elastic Container Service) or Amazon EKS (Elastic Kubernetes Service), Amazon EC2 (Elastic Compute Cloud), Amazon RDS (Relational Database Service) or Amazon Aurora, Amazon ElastiCache, Amazon SQS (Simple Queue Service) or Amazon SNS (Simple Notification Service), Amazon API Gateway, AWS Lambda, Amazon CloudFront, and Amazon DynamoDB. These services can optimize our Java-based microservices architecture for performance, scalability, and reliability, and ensure smooth operation and efficient resource utilization in dynamic, high-traffic environments.

With Amazon ECS or Amazon EKS, we orchestrate and manage containers for deploying the microservices. Amazon EC2 instances are used to run the Java-based microservices, whether in containers or on virtual machines. Amazon RDS or Amazon Aurora serves as the relational database for storing application data. Amazon ElastiCache is employed to cache frequently accessed data and improve the performance of the microservices, and Amazon SQS provides simple queuing between services.
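The ElastiCache role described above comes down to a read-through cache with expiry. A tiny in-process Python sketch of the idea (the TTLCache class and load_user function are hypothetical stand-ins; ElastiCache itself provides managed Redis or Memcached, not a local dict):

```python
import time

class TTLCache:
    """Tiny in-process time-to-live cache; keys expire after `ttl`
    seconds, and misses fall through to the supplied loader."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[1] > now:
            return entry[0]                      # cache hit
        value = loader(key)                      # miss: hit the backend
        self._store[key] = (value, now + self.ttl)
        return value

calls = []
def load_user(key):
    calls.append(key)        # stands in for an expensive database read
    return {"id": key}

cache = TTLCache(ttl=60)
first = cache.get("u1", load_user)
second = cache.get("u1", load_user)  # served from cache; loader not called
```

The second lookup never touches the backend, which is exactly the load reduction the answer attributes to ElastiCache.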
Okay, the pipeline. To leverage AWS CloudFormation for automating the provisioning of a DevOps pipeline, I can follow these steps: define the infrastructure as code (IaC), create CloudFormation stacks, define the pipeline components, parameterize the templates, define IAM roles and permissions, review and deploy, set up continuous integration and deployment (CI/CD), and handle updates and maintenance. With these steps, I can effectively automate the provisioning of the DevOps pipeline using AWS CloudFormation, enabling seamless infrastructure deployment and continuous integration and delivery of software applications.

Defining the infrastructure as code means describing the DevOps pipeline infrastructure in AWS CloudFormation templates, written in JSON or YAML format. Next, create CloudFormation stacks using the AWS Management Console, the AWS CLI, the SDKs, or the APIs. Then define the components of the DevOps pipeline within the CloudFormation templates, such as source control, the build server, deployment, testing, and monitoring. Parameterize the CloudFormation templates to make them flexible and reusable across different environments and projects. Finally, define the IAM roles and permissions required for each component of the DevOps pipeline to interact with AWS services, then review and deploy.
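As a sketch of the infrastructure-as-code step, a JSON-format template can be modeled in Python and serialized to the stack body CloudFormation accepts. The resource names (PipelineArtifactBucket, BuildRole) are hypothetical, and a real pipeline template would also declare CodeBuild/CodePipeline resources:

```python
import json

# Hypothetical resource names for illustration; a real pipeline template
# would also declare CodeBuild/CodePipeline resources.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal DevOps pipeline scaffolding (sketch)",
    "Parameters": {
        # Parameterized so the same template works across environments.
        "Environment": {"Type": "String", "Default": "dev"}
    },
    "Resources": {
        "PipelineArtifactBucket": {"Type": "AWS::S3::Bucket"},
        "BuildRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "codebuild.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                }
            },
        },
    },
}

body = json.dumps(template, indent=2)  # the stack body you would upload
```

The Environment parameter is what makes the same template reusable across dev, staging, and production stacks, as the answer describes.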
Okay. To securely manage credentials for Lambda functions triggered through API Gateway on AWS, I can use AWS Identity and Access Management (IAM) roles and policies along with other AWS services. The practices are: IAM roles for Lambda execution, API Gateway authorization, environment variables, AWS Systems Manager Parameter Store and AWS Secrets Manager, encryption at rest and in transit, and auditing, logging, and monitoring. With these best practices, I can securely manage credentials for Lambda functions triggered through API Gateway, ensure compliance with security standards, and protect sensitive data from unauthorized access or exposure.

First, for the IAM role for Lambda execution, I create a role with the necessary permissions for the Lambda function to access the AWS resources it requires, such as databases, S3 buckets, or other AWS services. For API Gateway authorization, I configure API Gateway to require authorization before invoking the Lambda function, choosing from the various authorization mechanisms API Gateway supports. Environment variables are used to store and pass sensitive information such as API keys, database passwords, or other credentials required by the Lambda function. Such parameters are stored securely in AWS Systems Manager Parameter Store, which provides a centralized, secure solution for storing configuration data and secrets, with AWS Secrets Manager available for more advanced secret management.
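The environment-variable practice can be sketched like this: the function's configuration carries only the parameter name, never the secret itself. The handler and fetch_secret stub are hypothetical; in real code the fetch would call SSM Parameter Store or Secrets Manager via boto3:

```python
import os

def fetch_secret(name: str) -> str:
    # Stub: real code would call SSM Parameter Store
    # (get_parameter with WithDecryption=True) or Secrets Manager.
    return f"resolved:{name}"

def handler(event, context):
    # Only the parameter *name* lives in configuration; no secret in code.
    secret_name = os.environ["DB_PASSWORD_PARAM"]
    password = fetch_secret(secret_name)
    return {"statusCode": 200, "param": secret_name, "ok": bool(password)}

# Deployment tooling (e.g. CloudFormation) would set this on the function.
os.environ["DB_PASSWORD_PARAM"] = "/prod/db/password"
response = handler({}, None)
```

Keeping only the parameter name in the environment means rotating the secret requires no code change or redeployment.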
Okay. The SQL query snippet is vulnerable to SQL injection attacks and does not follow best practices for security and performance. My analysis of the issues and recommendations for improvement covers: the SQL injection vulnerability, plain-text password storage, the use of SELECT *, the incorrect comparison, performance considerations, secure parameterization, proper password handling, and column-level security. In the improved version, the query uses parameterized placeholders (question marks) instead of directly concatenating variables, which prevents SQL injection attacks, and only the necessary columns (user ID, username, and email) are selected; passwords are never selected, for security.

First, the SQL injection vulnerability: the query concatenates the username and password variables directly into the SQL string without any validation or parameterization. This leaves the query open to SQL injection attacks, where an attacker could manipulate the input parameters to inject arbitrary SQL commands. Second, plain-text password storage: storing passwords in plain text is a significant security risk. Instead, passwords should be securely hashed using a cryptographic hashing algorithm such as bcrypt (or at minimum a salted SHA-256) before being stored in the database. Third, the use of SELECT *: selecting all columns fetches more data than necessary, increasing network overhead and potentially exposing sensitive information. Finally, the incorrect comparison: the password clause appears to compare the submitted password directly against the stored hash, which is incorrect.
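The parameterized-query and password-hashing fixes can be demonstrated concretely with the standard library. The table layout is assumed for illustration, and bcrypt or Argon2 would be preferable to PBKDF2 in production:

```python
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, "
    "email TEXT, pw_hash TEXT, salt TEXT)"
)

def hash_password(password: str, salt: bytes) -> str:
    # Salted PBKDF2 from the stdlib; bcrypt/Argon2 are stronger choices.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

salt = os.urandom(16)
conn.execute(
    "INSERT INTO users (username, email, pw_hash, salt) VALUES (?, ?, ?, ?)",
    ("alice", "alice@example.com", hash_password("s3cret", salt), salt.hex()),
)

# Parameterized lookup: the ? placeholder is bound by the driver, so a
# malicious input like "' OR '1'='1" is treated as data, not SQL.
attacker_input = "' OR '1'='1"
row = conn.execute(
    "SELECT id, username, email FROM users WHERE username = ?",
    (attacker_input,),
).fetchone()                      # no row matches the literal string

legit = conn.execute(
    "SELECT username FROM users WHERE username = ?", ("alice",)
).fetchone()
```

The injection payload returns no rows because the driver binds it as a plain string, and only the needed columns are ever selected.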
Okay. The SOLID principles are a set of design principles that promote maintainable and scalable software architecture. When writing Lambda functions for serverless applications, adhering to these principles helps ensure maintainability and extensibility. I can apply each of the SOLID principles: the Single Responsibility Principle (SRP), the Open/Closed Principle (OCP), the Liskov Substitution Principle (LSP), the Interface Segregation Principle (ISP), and the Dependency Inversion Principle (DIP). By applying the SOLID principles when writing Lambda functions for serverless applications, I can create well-structured, modular, maintainable code that is easier to understand, extend, and maintain over time.

For the Single Responsibility Principle, I ensure that each Lambda function has a single responsibility or purpose, such as handling a specific type of event or performing one distinct task. For the Open/Closed Principle, I design Lambda functions to be open for extension but closed for modification, using event-driven patterns to add new functionality without modifying existing Lambda functions. For the Liskov Substitution Principle, I ensure that Lambda functions adhere to the contract defined by the events or triggers they are designed to handle. For the Interface Segregation Principle, I define clear, concise interfaces for the Lambda functions that expose only the methods and operations relevant to their purpose. And for the Dependency Inversion Principle, I design Lambda functions to depend on abstractions rather than concrete implementations, using dependency injection and inversion of control by injecting dependencies into the Lambda function at runtime, so that the functions stay decoupled from any specific implementation.
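The SRP and DIP points can be sketched together: a handler that only validates and delegates, with its storage dependency injected as an abstraction. All names here (OrderRepository, make_handler) are hypothetical illustrations, not code from an actual project:

```python
from typing import Protocol

class OrderRepository(Protocol):
    """Abstraction the handler depends on (Dependency Inversion)."""
    def save(self, order: dict) -> str: ...

class InMemoryOrderRepository:
    """Concrete implementation; swappable for a DynamoDB-backed one."""
    def __init__(self):
        self.orders = {}

    def save(self, order: dict) -> str:
        order_id = f"order-{len(self.orders) + 1}"
        self.orders[order_id] = order
        return order_id

def make_handler(repo: OrderRepository):
    # Single responsibility: the handler only validates and delegates.
    def handler(event, context):
        if "item" not in event:
            return {"statusCode": 400}
        return {"statusCode": 201, "orderId": repo.save(event)}
    return handler

handler = make_handler(InMemoryOrderRepository())  # inject the dependency
resp = handler({"item": "book"}, None)
```

Because the handler sees only the abstraction, it can be unit-tested with the in-memory repository and deployed with a real database-backed one, with no change to the handler code.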
Right. When designing a CloudFormation template to provision a robust network infrastructure for a Java-based software-as-a-service solution on AWS, I can implement several design patterns to ensure scalability, reliability, and security. The patterns I would consider are: a VPC (Virtual Private Cloud) per environment, multi-AZ deployment, Elastic Load Balancing, Auto Scaling groups, AWS PrivateLink, Route 53 DNS routing, security groups and network ACLs, gateway VPC endpoints for S3, and Transit Gateway. By building these design patterns into my CloudFormation template, I can provision a robust network infrastructure for our Java-based SaaS solution on AWS and ensure scalability, reliability, and security.

First, the VPC: implement a separate VPC for each environment (development, staging, and production) to isolate resources and prevent cross-environment interference. For multi-AZ deployment, deploy resources such as EC2 instances, RDS databases, and NAT gateways across multiple Availability Zones for high availability and fault tolerance. For Elastic Load Balancing, implement an Application Load Balancer (ALB) or Network Load Balancer (NLB) to distribute incoming traffic across multiple EC2 instances in different zones. For Auto Scaling groups, define Auto Scaling groups for the EC2 instances running the Java-based applications and services so the fleet scales automatically based on traffic demand. Use AWS PrivateLink to securely expose the Java-based APIs or services privately within the VPC without exposure to the public Internet, and use Amazon Route 53 to manage DNS routing for the SaaS solution.
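One concrete preparatory step for the multi-AZ pattern is planning the subnet layout before writing the template's subnet resources. A small Python sketch using the standard ipaddress module (the CIDR range and AZ names are assumptions for illustration):

```python
import ipaddress

# Carve the VPC CIDR into one public and one private subnet per AZ
# before writing the CloudFormation subnet resources.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
azs = ["ap-south-1a", "ap-south-1b"]          # example AZ names

subnets = list(vpc_cidr.subnets(new_prefix=24))
layout = {}
for i, az in enumerate(azs):
    layout[az] = {
        "public": str(subnets[2 * i]),        # ALB / NAT gateway side
        "private": str(subnets[2 * i + 1]),   # EC2 / RDS side
    }
```

Each AZ gets a non-overlapping public/private pair, which is the prerequisite for spreading the ALB, EC2, and RDS resources across zones.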
Okay. To ensure the scalability and reliability of a Java web application during peak usage on AWS, I can leverage several AWS services and tools: Amazon EC2 Auto Scaling, Amazon RDS Multi-AZ, Amazon ElastiCache, Amazon CloudFront, Amazon Route 53, AWS Lambda with API Gateway, AWS CloudWatch, and AWS CloudFormation. By leveraging these AWS services and tools, I can ensure the scalability and reliability of my Java web application during peak usage and provide a seamless, responsive experience for my users while maintaining cost efficiency and operational excellence.

With Amazon EC2 Auto Scaling, I set up an Auto Scaling group to automatically adjust the number of EC2 instances hosting my Java application based on traffic demand, and configure scaling policies that add or remove instances dynamically in response to changes in CPU utilization, request counts, or custom metrics. With Amazon RDS Multi-AZ deployment, I deploy my application database using Amazon RDS with Multi-AZ for high availability and fault tolerance. I use Amazon ElastiCache to cache frequently accessed data and improve the performance of my Java web application. Amazon CloudFront distributes my application's content globally and improves latency for end users by acting as a content delivery network (CDN). I use Amazon Route 53 to manage DNS routing and implement intelligent traffic-routing strategies, and AWS Lambda with API Gateway offloads compute-intensive, stateless tasks from my Java application.
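The scaling-policy idea can be illustrated with a back-of-the-envelope sizing function in the spirit of target tracking. The thresholds and formula are simplified assumptions, not the actual CloudWatch/Auto Scaling algorithm:

```python
import math

def desired_capacity(current: int, cpu_percent: float,
                     target: float = 60.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Size the fleet so average CPU moves toward `target` percent.
    Simplified target-tracking sketch with illustrative bounds."""
    if cpu_percent <= 0:
        return minimum
    desired = math.ceil(current * cpu_percent / target)
    return max(minimum, min(maximum, desired))

scale_out = desired_capacity(current=4, cpu_percent=90)  # traffic spike
scale_in = desired_capacity(current=4, cpu_percent=20)   # quiet period
```

A spike at 90% CPU grows a 4-instance fleet to 6, while a quiet period shrinks it to the 2-instance floor, which is the dynamic add/remove behavior the answer describes.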
Okay. To leverage AWS CodeBuild for automatically testing and building a Java-based microservice, I can follow these steps: set up the CodeBuild project, define the build specification, install dependencies, compile and test, package the application, generate artifacts, trigger builds in the pipeline, and view the build results. By following these steps, I can effectively automate testing and building of my Java-based microservices using AWS CodeBuild, enabling fast, reliable delivery of code changes while maintaining code quality and consistency.

First, set up the CodeBuild project: create a CodeBuild project in the AWS Management Console or with the AWS CLI or SDK, and specify the source repository where the Java-based microservice code is stored, for example AWS CodeCommit, GitHub, or Bitbucket. Next, define the build specification: create a buildspec.yml file in the root directory of the Java microservice project to define the build steps and commands for CodeBuild, including commands to install dependencies, compile the Java code, run tests, package the application, and generate artifacts. To install dependencies, use the install phase of the buildspec.yml file to install any required dependencies and build tools for the Java microservice. Next, compile and test: compile the Java source code and run the unit tests to ensure quality and functionality. Then package the application: package the compiled Java microservice into a deployable artifact such as a JAR file or a Docker image. Finally, generate artifacts: in the post-build phase, specify any additional actions to perform after the build process, such as copying the generated artifacts to a specific location.
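The buildspec structure described above can be sketched as data. This models a hypothetical Maven buildspec as a Python dict for illustration; CodeBuild actually reads the equivalent buildspec.yml from the repository root, and the runtime version and commands are assumptions:

```python
# Hypothetical buildspec for a Maven-based Java microservice, modeled
# as a dict; CodeBuild reads the equivalent buildspec.yml from the
# repository root. Runtime version and commands are assumptions.
buildspec = {
    "version": 0.2,
    "phases": {
        "install": {"runtime-versions": {"java": "corretto17"}},
        "build": {"commands": ["mvn -B clean verify"]},   # compile + unit tests
        "post_build": {"commands": ["mvn -B package -DskipTests"]},
    },
    "artifacts": {"files": ["target/*.jar"]},             # deployable JAR
}

phase_order = list(buildspec["phases"])
```

The phases run in order (install, build, post_build), matching the install/compile-and-test/package sequence in the answer, with the artifacts section naming the JAR to hand to deployment.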