Senior Software Design Engineer, Evertz
System Software Engineer, Sys Engg - III, SUSE Software Solutions
Application Engineer, Amazon Development Center

Skills: C++, Python, Kali Linux
I was very satisfied with Varun's contributions to our R&D department. Varun is a very fast learner, and his contribution to the Virtualization squad was highly appreciated. He responds well to feedback, and he expresses himself eloquently in both oral and written form. He understands the importance of deadlines and milestones, and he takes ownership when needed. Easy to manage, easy to talk with, and always available for collaboration.
https://github.com/varunkojha/os-autoinst-distri-opensuse
Go (Golang)

Usage example below:

    go run <filename>.go
    go run 01_dataTypes.go
https://hackweek.opensuse.org/23/projects/study-the-book-of-the-go-programming-language
In practical terms, Avahi enables devices to assign themselves network addresses and announce their services, making it easier for users to discover and connect to these services without requiring them to configure IP addresses or know the network topology. It's widely used in Linux and other Unix-like operating systems to provide zero-configuration networking.
https://avahi.org/
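To make service discovery concrete, here is a minimal browse sketch in Python. It deliberately uses the third-party python-zeroconf package rather than Avahi's C API (a swapped-in library for brevity; the mDNS/DNS-SD traffic it speaks is the same protocol Avahi implements). The service type and the ten-second browse window are arbitrary choices, not from the original:

    # Minimal mDNS/DNS-SD browse, analogous to `avahi-browse`.
    # Uses the third-party `zeroconf` package (pip install zeroconf),
    # not Avahi's C API; the underlying protocol is the same.
    import time
    from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

    class PrintListener(ServiceListener):
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            addrs = info.parsed_addresses() if info else []  # newer python-zeroconf
            print(f"found {name}" + (f" at {addrs}" if addrs else ""))

        def remove_service(self, zc, type_, name):
            print(f"lost {name}")

        def update_service(self, zc, type_, name):
            pass  # required by the ServiceListener interface

    zc = Zeroconf()
    browser = ServiceBrowser(zc, "_http._tcp.local.", PrintListener())
    try:
        time.sleep(10)   # browse for ten seconds, then shut down cleanly
    finally:
        zc.close()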
Run on SUSE Linux (openSUSE Tumbleweed).

The headers were successfully installed on my system:

    vojha@localhost:/usr/include> cd avahi-
    avahi-client/  avahi-common/  avahi-compat-libdns_sd/  avahi-core/  avahi-libevent/

I used:

    sudo zypper install avahi-compat-libdns_sd-devel

then built and ran the example:

    gcc -o avahi_example avahi_example.c -lavahi-client -lavahi-common
    ./avahi_example

Output:

    vojha@localhost:~/varun_codes/learn_C/avahi> avahi-browse -a
Projects: E-Funnel & Smart DG, an IoT device that offers a complete monitoring solution for diesel generators (DG). Taking data from the PCC controller, E-Funnel, and other parameters from the DG set; processing it; and sending it to servers hosted on Azure. Taking care of the device's security compliance, with handshake and communication with the server secured using SSL/TLSv1.2 certificates. Developing the server, managing the database, and creating cron jobs and system service routines. Data analysis: maintaining logs as text files on the device and server, reading and writing data as JSON in the database, and loading these text files to create XLSX, CSV, and PDF reports, fully automated.
Tech: C, Python, Shell Script, Azure, MongoDB. Protocols: TCP/IP, MQTT, SPI, I2C, UART, RS485. Team size: 10.
Okay. Could you please give a brief introduction of yourself, so I can understand more about your background?

Hi, my name is Varun. I have 8 years of experience working in tech. I have worked on multiple Linux distributions, starting with Fedora 16/18, then moving to Ubuntu-based distributions, and then RHEL as well. My current stack is SLES, openSUSE Tumbleweed, and the related tooling. I have working experience mostly in C/C++ and Python, and I have also successfully applied these skills to maintaining and reconciling test cases. In my current org, I have taken full responsibility for the virtualization roadmap, with automation of openQA tests written in Perl; it's the open-source os-autoinst distribution, where we test the operating system and its rolling releases. Before this, during my time at Amazon, I worked with a Java backend for some time, but mostly my work involved Python and the related data-analysis and ETL (extract, transform, load) tasks, working on customer-behavior datasets and pulling data from the data lake to dump into S3 buckets. Before that, I worked on the Ericsson account on a network telecom stack: a reusable tracing software built on a component-based architecture, where you can pull a component in or out depending on the requirements for debugging a live node in a cluster. I also have experience with a high-availability and disaster-recovery product called HP Serviceguard, where I performed the virtualization work. I have exposure to QEMU/KVM, Hyper-V, and VMware (vMotion and vSphere) for testing failover and failback of a system, cluster, or node. That's pretty much it. Thanks.
Given a frequent need to update test cases, how would you leverage Git to manage changes while working in a continuous delivery environment?

Yes, in my current org I do exactly this. We continuously enhance our test coverage daily, based on features added or removed in the rolling releases. It becomes really important to use Git for this, because on an open-source distribution there are a number of people committing daily, and you have to continuously pull in the latest code. Say you are working on a feature or on test coverage for a certain area; in my scenario, I was doing guest installation and virtual network tests. The monolithic libvirtd has been removed; you don't need libvirtd any more, there are modular libvirt daemons now, which need to be restarted when a guest is installed on the host. In that scenario I had to rework my test coverage and make changes. So it was important for me to update my local repository and pull the latest code so that everyone's changes were reflected, then commit my changes wherever they were needed, in this case modifying the test, which I did. Then get the PR approved and merged, test again with the changes in place, and keep monitoring the metrics to see how performance went. That is important for continuous delivery. With Git, I have seen in my org that the basic unit tests and feature tests are part of the continuous integration and deployment pipeline: whenever you push your changes, an automated test script triggers, which checks the integration against the latest software, for example the latest Perl or Python version, and catches syntactical errors and the like, basic unit testing. So yes, a frequent need to update test cases becomes very easy to manage with Git.
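As a sketch, the Git flow from that answer looks roughly like the following (the branch name, commit message, and test file are illustrative placeholders, not from the actual repository):

    git pull --rebase origin master        # sync with upstream before editing
    git checkout -b fix_modular_libvirt    # work on an isolated branch
    # ...edit the guest-installation test to restart the modular daemons...
    git add tests/virt_guest_install.pm    # hypothetical test file
    git commit -m "Restart modular libvirt daemons after guest install"
    git push origin fix_modular_libvirt    # then raise the PR for review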
What strategy would you use to automate regression testing of software changes that modified TCP/IP communication, in a Python-based test setup?

I've seen this when I was working on a client-server architecture. We did socket programming using Python, and there was a handshake between the server and the devices, the clients basically. The tests aggressively checked, first, whether a proper handshake and one-to-one communication was established with the server IP on a particular port; only after a successful handshake were devices allowed to exchange data. And there were n number of devices sending data to the server on that specific IP. So it is important to maintain that regression testing, and it is important to do stress testing as well, putting the server under multiple parallel clients, because we needed to test whether our server handled the synchronization properly, whether resources were available or not, and when fork would give out, since only so many child processes (say 2^32) can be invoked. Those kinds of regression tests should be done. We also built a metric on CloudWatch afterwards to see how many data points were pushed to the cloud successfully and how many failed, with a count kept in CloudWatch. This is how we ensured our software changes over TCP/IP. And we did everything in Python, like I mentioned: PyMongo for the database, `from threading import Thread` for parallel clients, and then we would invoke different input-based scenarios; there were multiple permutations, like what happens if this is on and that is off. That kind of regression testing was done.
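A minimal sketch of such a regression test in Python, under assumptions of my own (a throwaway echo server stands in for the real server, and the host/port are arbitrary): it verifies the handshake, then stresses the server with parallel clients, as described above:

    # Hypothetical regression test: handshake plus parallel-client stress.
    import socket
    import threading
    import unittest

    HOST, PORT = "127.0.0.1", 50007   # arbitrary test endpoint, not from the original

    def run_echo_server(stop_event, ready_event):
        # Tiny stand-in server: accept a connection, echo one message back.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen()
            srv.settimeout(0.5)        # so the loop can notice the stop flag
            ready_event.set()          # signal that the server is accepting
            while not stop_event.is_set():
                try:
                    conn, _ = srv.accept()
                except socket.timeout:
                    continue
                with conn:
                    conn.settimeout(2)
                    conn.sendall(conn.recv(1024))

    class TcpRegression(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            cls.stop, cls.ready = threading.Event(), threading.Event()
            cls.server = threading.Thread(target=run_echo_server,
                                          args=(cls.stop, cls.ready), daemon=True)
            cls.server.start()
            cls.ready.wait(2)          # wait until the server is listening

        @classmethod
        def tearDownClass(cls):
            cls.stop.set()
            cls.server.join()

        def test_parallel_clients(self):
            # Stress test: several clients connect and exchange data in parallel.
            def one_client(results, i):
                with socket.create_connection((HOST, PORT), timeout=2) as c:
                    c.sendall(b"ping")
                    results[i] = c.recv(1024)
            results = [None] * 8
            workers = [threading.Thread(target=one_client, args=(results, i))
                       for i in range(8)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            self.assertTrue(all(r == b"ping" for r in results))

    if __name__ == "__main__":
        unittest.main()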
How might you employ Jenkins for nightly builds and testing of a Python application, and which result metrics are most critical?

I have seen this in my past experience. When I was working on the C++ telecom network stack, our core tracing software was written in C++, and the test automation was done in Python. A Python script would run in the Jenkins CI/CD pipeline. The very first thing it would do is pull the latest artifact image from the artifact repository, then spawn a Docker container on that base image. Then the Jenkins pipeline, with the help of the Python script, would run a build of the tracing software against the middleware stack, producing an RPM package, and that RPM would be installed on the spawned container. After that, the Python application ran multiple tests: for the build scenario, obviously whether the build failed or not; then the GTest integration for the reusable software; and after a successful build, other things like code checkers and static analysis of the code. Finally, after installation, the cluster would go live on the container, and the Python script running as a Jenkins job would test the activation or spawning of an application on the node that had been created, and failovers and failbacks would be tested. After all of that, a metric would be generated: how many tests succeeded, how many feature tests failed, how many static-analysis checks failed, whether there was a core dump and where the core file was sitting. All those results were printed to the console and written to a text file, which was then uploaded to the artifact store using the SCP command. Finally, the container would be destroyed. That was the full implementation of a Jenkins CI/CD job for a nightly build. As for nightly builds generally, teams run n number of Jenkins jobs, and I've also seen them use JUnit to upload the metrics and results.
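A condensed sketch of what that nightly driver script might look like in Python (the image name, package name, helper scripts, and artifact host are all hypothetical placeholders, not the real ones):

    # Hypothetical nightly driver: pull image, build in a container,
    # run tests, ship results, always destroy the container.
    import subprocess
    import sys

    IMAGE = "registry.example.com/base:latest"   # assumed artifact image
    RESULTS = "nightly_results.txt"

    def sh(cmd):
        # Run a shell command, echoing it; check=True aborts the build on failure.
        print(f"+ {cmd}")
        return subprocess.run(cmd, shell=True, check=True)

    def main():
        sh(f"docker pull {IMAGE}")
        cid = subprocess.check_output(
            f"docker run -d {IMAGE} sleep infinity", shell=True).decode().strip()
        try:
            # Build the RPM and install it inside the container.
            sh(f"docker exec {cid} make rpm")                # assumed build target
            sh(f"docker exec {cid} rpm -i tracing-sw.rpm")   # hypothetical package
            # Unit tests (GTest) and static analysis, capturing output per step.
            with open(RESULTS, "w") as out:
                for step in ("run_gtests.sh", "run_static_analysis.sh"):  # assumed
                    r = subprocess.run(f"docker exec {cid} ./{step}",
                                       shell=True, capture_output=True, text=True)
                    out.write(f"== {step}: rc={r.returncode}\n{r.stdout}\n")
            # Ship results to the artifact store (scp, as in the original flow).
            sh(f"scp {RESULTS} artifacts.example.com:/nightly/")  # assumed host
        finally:
            sh(f"docker rm -f {cid}")   # always destroy the container

    if __name__ == "__main__":
        sys.exit(main())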
Explain how you would use Git and Python scripts to automate rolling back a test environment to a stable state after a failed test execution.

Rollback? See, in most scenarios we never have a rollback situation. What we generally practice is: you raise your PR, you test every scenario before merging, and once your PR is merged you obviously make sure you can see the changes in effect and that the stable system state is maintained. But in scenarios where the feature you deployed does not work as it was expected to, we can write a Python script that uses Git commands to automatically roll back the last commit, or you just pass the commit ID to a function defined in Python. Python can use the os or subprocess libraries to execute shell commands directly on the console, so all the Git commands needed to roll back a commit by its ID can be driven from the script: the script expects a commit ID and rolls it back if the test environment failed. In those setups we have a prod and a dev environment; production-level code is deployed to dev first, and only after it works successfully for at least two or three days does it go to prod. Generally we check which commit broke our system, then run that Python script, asking the person who committed to pass their commit ID, and it will certainly roll back. You could also automate this with cron jobs: I could write a script that checks, on the system or in the cloud, which commits failed over the week, schedule a cron job to collect those failed commits, pass them as a list to the Python script, and roll back to the last working state. And if there are no failures and the system is stable after the recently pushed commits, then the Python script does nothing except generate a log message and upload it to the artifact store saying the system is stable and there were no failures after the deployment to prod. That's pretty much it. Thank you.
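A minimal sketch of the rollback helper described here (names are illustrative; git revert is used rather than a history-rewriting reset, so the rollback itself stays auditable):

    # Hypothetical rollback helper.  Usage: python rollback.py <commit-id>
    import subprocess
    import sys

    def rollback(commit_id, repo_path="."):
        # Revert the bad commit in repo_path without rewriting shared history,
        # then push the revert so the test environment picks it up.
        subprocess.run(["git", "-C", repo_path, "revert", "--no-edit", commit_id],
                       check=True)
        subprocess.run(["git", "-C", repo_path, "push", "origin", "HEAD"],
                       check=True)

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: rollback.py <commit-id>")
        rollback(sys.argv[1])
        print(f"Reverted {sys.argv[1]}; test environment restored to a stable state.")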
For scripts under version control in Git, what workflow would you implement to ensure efficient collaboration across a team of test engineers?

Like I mentioned, we raise a PR, a pull request against the latest code version, and we work on our PRs in parallel even when we are working on the same branch. We make sure that when we merge our commits, they have been approved by two or three engineers across the test teams. There will definitely be QA/QE involvement, quality and assurance, and everyone collaborates. Take the nightly build example: there are two or three builds, and the nightly build might be shared with the QA teams and test engineers, so a PR is tested and approved against the test scenarios, and only then merged. And with the test scripts under Git you know exactly which commits went in; you can use git blame to check what is not working and who made the latest changes that are not in line with the current implementation of the features. So yes, I have cross-functional team experience; I have worked with multiple teams on the same open-source repository, and we can collaborate. The workflow is simple: we have PR and MR requirements, and we make sure that we communicate properly. The test scripts get version updates like everything else; for example, there is a weekly build and a kernel of the month. In the rolling releases of Tumbleweed we have the latest build, say 4.x, and then the version updates, whether it is virt-install-based full virtualization or paravirtualization installations, testing those kinds of things on the operating system. I have done this recently.
In the following Robot Framework test-case snippet, identify why the test might produce a false negative, and explain how we would adjust it to accurately reflect the test scenario. There is a "Test User Login" case: Input Text into the login field with ${username}, Input Text into the password field with ${password}, submit the credentials, and then "Page Should Not Contain Login Failed" (reconstructed below).

I think the problem here, the false-negative scenario, is with "Page Should Not Contain Login Failed". When the username or password is incorrect, the page will say the login failed, so the login field and the password field need to be accurately filled. We could adjust it using, what do you say, a default login and a default password: rather than leaving them as empty or null values, the login and password fields would be filled with dummy default values when the test runs. And if those dummy values are not overwritten, then ${username} was not correctly passed, because the $ syntax takes the value of the username from that variable. If we check for that, we can definitely avoid these false negatives.
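For reference, the snippet as read out reconstructs roughly as follows (a hedged reconstruction; the keyword layout and locators are assumptions):

    Test User Login
        Input Text    login_field    ${username}
        Input Text    password_field    ${password}
        Click Button    submit_credentials
        Page Should Not Contain    Login Failed

And an adjusted version along the lines of the answer: give the variables safe defaults and assert on positive evidence of success, so an unset variable or a silently different error message cannot slip through:

    *** Variables ***
    ${username}    default_user    # dummy default, overridden by the runner
    ${password}    default_pass

    *** Test Cases ***
    Test User Login
        Input Text    login_field    ${username}
        Input Text    password_field    ${password}
        Click Button    submit_credentials
        Page Should Contain    Welcome    # positive check instead of a negative one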
Review this Python snippet and explain why the code might fail when retrieving the user object, and how you can debug it to handle potential exceptions. The snippet is roughly: try: user = users.get(user_id); except KeyError: print("User not found"); then print(user["name"]) (reconstructed below).

I can pass a second argument, users.get(user_id, None). Then, instead of the exception block, I can actually check whether the user value is None and, if so, say the user was not found. It's JSON-like, a dictionary, so you can work on that: if there is no value for the key, rather than throwing a KeyError, give a default of None; if None comes back, the user is not found, or the field was empty. An empty value can be depicted as None, and there is also the scenario where the name key is not present in that user JSON or dictionary. So yes, we can add those checks so that our code does not crash; we should handle the errors effectively rather than stopping the code at the error. We can fix this.
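Reconstructed, the snippet and the fix look like this (data and names are illustrative). The root cause is that dict.get never raises KeyError, so the except block is dead code and the real crash is a TypeError later, when user is None:

    users = {"101": {"name": "Varun"}}     # illustrative data, not from the original

    def get_user_name_buggy(user_id):
        try:
            user = users.get(user_id)      # dict.get returns None on a miss...
        except KeyError:                   # ...so this branch is dead code
            print("User not found")
            return None
        print(user["name"])                # TypeError here when user is None

    def get_user_name_fixed(user_id):
        user = users.get(user_id, None)    # explicit default instead of an exception
        if user is None:
            print("User not found")        # handle the miss instead of crashing
            return None
        print(user.get("name", "<no name>"))   # tolerate a missing name key too
        return user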
If you had to implement a Robot Framework library capable of interacting with a Simulink model, what would be your key considerations?

See, I don't know what a Simulink model is, and my resume clearly states that Robot Framework experience is not on there. But generally, for any framework you want to implement, the ideal flow would be: first test whether the input data is valid; only if it is valid, proceed with the testing; get your result; log your errors; build a metric on it; and finally upload the results. So I am not sure about Simulink, but I can learn and read up on this model. Thank you.
How would you use Jenkins to implement continuous testing for an application requiring frequent synchronization of MATLAB models?

I've never worked with MATLAB models. Jenkins I have used for continuous integration and deployment; I already covered this with the Docker container example when I described the implementation of a nightly Jenkins CI/CD job and how I have used it in the past.
What considerations should be made when managing and committing Python scripts to a Git repository for a high-frequency trading application?

It depends on the application. There could be certain rules a team adopts: say, for a payments team or a payment-transaction application, they can have their own set of rules, like we will not deploy our code on Fridays, or we will only deploy on Sunday so that by Monday our system is ready, for an Amazon Pay application, for example. Teams will have their own rules depending on the market and the business. As for general considerations for managing and committing Python scripts: there are definitely conventions for how you write the application and how you commit to Git; for example, two or three people should not be committing on the same day. Those kinds of rules we can build across the team, and that's what Agile and Jira are for: you know whether the system is busy or not, because we cannot have our system down, and failover, disaster management, and recovery should be taken into consideration. So yes, the team can have its own set of rules.