
I am a Senior Consultant, Application Developer at Thoughtworks with more than 7 years of experience building highly efficient and scalable applications for large enterprises using agile methodologies, clean coding, and best practices. I develop applications using microservices and event-driven architecture, and I consult companies on OO design, design patterns, testing techniques, and development methodologies. I am passionate about XP and agile.
Technical Lead, Avalara Technologies
Senior Software Engineer, Thoughtworks
Senior Research Engineer, Hyundai Mobis
PostgreSQL

AWS (Amazon Web Services)

Java

Apache Kafka

Azure Cosmos DB
Docker

Java 11

Java 8

Spring Cloud

Spring Boot

Azure Pipelines

GitHub

IntelliJ IDEA

Azure Active Directory

New Relic
Grafana

Splunk
Cucumber
Jenkins
REST API
Web API

AWS

Azure

MySQL

RESTful API

Kafka

Kubernetes

Gradle

Maven

Postgres

MongoDB

CosmosDB

Nomad

Rio

Amazon EKS

PCF

Azure DevOps

Azure Cloud

AWS Cloud

Consul
Datadog

Hi, all. I'm Akash. I have nine-plus years of professional experience. I'm working as a Senior Software Engineer at Avalara Technologies, and previously I was working with Okta as a Senior Consultant. I'm primarily a back-end engineer who likes to work on Java-related technologies, and I've worked in domains like e-commerce, banking, and healthcare.
Sure. The difference between a primary key and a unique key is that all primary keys are unique keys, but not all unique keys are primary keys. The purpose of a primary key is to uniquely identify each record whenever new records are inserted into a table, whereas a unique key is simply a key whose value is unique across all records. A table can have only one primary key but several unique keys, and a primary key can never be NULL. Also, primary keys can be auto-generated or manually supplied during insertion, whereas unique key values always have to come from the user side.
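As a minimal sketch of that distinction in Java terms, here is a hypothetical JPA entity (the User entity and its fields are made up for illustration; the jakarta.persistence package assumes a recent JPA version):

import jakarta.persistence.*;

@Entity
@Table(name = "users")
public class User {

    @Id                                                 // primary key: uniquely identifies each row, never NULL
    @GeneratedValue(strategy = GenerationType.IDENTITY) // may be auto-generated by the database
    private Long id;

    @Column(unique = true)                              // unique key: no duplicates allowed, but value comes from the caller
    private String email;

    // getters and setters omitted for brevity
}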
Sure. Both arrays and stacks are data structures that store data in a linear fashion, but they differ in how you access it. Arrays let you access elements by index; they are stored in contiguous memory and have O(1) access time, and once you initialize an array, you cannot change its size. A stack, on the other hand, is an abstract data structure implemented on top of other primitive data structures, and its size can grow after initialization. It also stores data linearly, but the way of accessing it is different: in an array you can access any element, whereas in a stack you cannot access a random element, only the top. Stacks follow a LIFO structure, last in, first out, so only the most recently inserted element can be extracted. They serve different purposes, too. Whenever you want to store the state of events, you push onto a stack. For example, when you visit websites, the browser stores the order of pages on a stack, so when you press the back button, you go to the previous page you came from; you cannot jump to random pages using the forward and back buttons. An array, in contrast, is for storing a list of records or objects that you want to access randomly in constant time. And since a stack is an abstract data structure, it can be implemented over arrays, lists, etc.
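A minimal Java sketch of the contrast, reusing the browser-history example from above (names are illustrative):

import java.util.ArrayDeque;
import java.util.Deque;

public class ArrayVsStack {
    public static void main(String[] args) {
        // Array: fixed size, contiguous memory, O(1) random access by index
        String[] pages = {"home", "search", "results"};
        System.out.println(pages[2]);        // "results" -- any index is reachable directly

        // Stack (LIFO): only the top is accessible; ArrayDeque is the idiomatic stack in Java
        Deque<String> history = new ArrayDeque<>();
        history.push("home");
        history.push("search");
        history.push("results");
        System.out.println(history.pop());   // "results" -- last in, first out (the back button)
    }
}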
There are a lot of differences between Java 7 and Java 21/22, Java 7 being one of the oldest stable Java versions. After it, many LTS versions were introduced. In Java 7 there was no functional style of programming: collections were plain collections, with no lambdas and streams. A lot of memory improvements happened after Java 7. The latest versions, like 21 and 22, heavily use a functional style of programming. They also introduced local variable type inference with var, where you don't need to declare the type of the variable. From a memory point of view there are a lot of JVM improvements: AOT and JIT compiler improvements, and memory model improvements. In Java 7 you did not have virtual threads; in Java 21/22 you have virtual threads, which help you efficiently manage memory and resources in multithreaded programming. You also have sealed classes, where you can control inheritance, and records for creating data types in a tightly controlled manner. Switches are more flexible as well: switch cases are no longer tied to a particular data type and can match over different types. And there are many coding-style and syntax improvements. The focus in Java 21/22 is more on a declarative style of programming rather than an imperative one, so you spend less time writing syntax that says how to do it; the focus is on what to do.
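A compact sketch (assuming Java 21) of a few of the features mentioned above; the Shape types are made up for illustration:

// Sealed interface: inheritance restricted to the listed implementations (Java 17+)
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}   // records: concise immutable data carriers (Java 16+)
record Square(double side) implements Shape {}

public class ModernJava {
    public static void main(String[] args) throws InterruptedException {
        var shape = (Shape) new Circle(2.0);       // 'var': local variable type inference (Java 10+)

        // Pattern-matching switch: no longer tied to a single data type (Java 21)
        double area = switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square s -> s.side() * s.side();
        };
        System.out.println(area);

        // Virtual threads: lightweight threads for cheap concurrency (Java 21)
        Thread t = Thread.startVirtualThread(() -> System.out.println("hello from a virtual thread"));
        t.join();
    }
}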
The problems I see in this particular code: there is no error handling in the current design. If the message processor throws an exception, Kafka may repeatedly redeliver the messages in an infinite loop, or commit offsets prematurely. Also, there is no idempotency: if a message is reprocessed due to retries or consumer restarts, side effects like DB writes or API calls may execute multiple times. There is no acknowledgment control either. By default, Spring Kafka commits offsets automatically (AckMode.BATCH or AckMode.RECORD, depending on the config), so if processing fails after the offset is committed, the messages are lost. There is also no dead-letter queue, so failing messages stay in the topic indefinitely or block the partition, preventing other messages from being processed. There is no logging, so failures are silent and you cannot know what has gone wrong. And there is no backpressure handling, so if message rates spike, the consumer may overload or crash. These are some of the issues in the current implementation.
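Since the original snippet is not shown here, this is only a sketch of how some of these issues could be addressed with Spring Kafka's DefaultErrorHandler and a dead-letter topic; the topic names and listener method are assumptions:

import org.apache.kafka.common.TopicPartition;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class OrderConsumerConfig {

    // Retries a failed record 3 times with a 1s pause, then publishes it to "orders.DLT"
    // instead of redelivering forever or blocking the partition
    @Bean
    public DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        var recoverer = new DeadLetterPublishingRecoverer(template,
                (rec, ex) -> new TopicPartition("orders.DLT", rec.partition()));
        return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 3L));
    }

    @KafkaListener(topics = "orders", groupId = "order-service")
    public void listen(String message) {
        // Processing should be idempotent (e.g. keyed by a business ID) so redeliveries are safe
        System.out.println("processing: " + message);
    }
}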
The potential issues with very large input strings for this type of program: there is high memory usage, O(n) space complexity in the worst case, since each opening bracket pushes a new element onto the stack. For extremely large inputs, like millions of characters, this consumes a lot of heap memory. Possible consequences are frequent OutOfMemoryErrors or heavy garbage-collection pressure, because the JVM constantly needs to track and reclaim that data. There is also the performance overhead of the Stack class: java.util.Stack is synchronized, which adds unnecessary locking overhead in a single-threaded method, so for large inputs this slows down processing. There is boxing overhead too, if the stack uses the Character wrapper class: converting from primitive to wrapper and back is extra overhead and more load on the GC. To optimize this, we can use ArrayDeque, which is non-synchronized and memory-efficient, avoiding the synchronization overhead. Further optimizations: the data can be streamed and processed chunk by chunk from a reader or a stream instead of loading the entire input; early-exit optimizations, so that if an imbalance is already visible, for example a closing bracket arrives with no possible match, we can abort early; and a custom primitive stack, like a primitive char array stack, to completely avoid boxing and unboxing.
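A minimal sketch of the optimized version, using a primitive char array as the stack to avoid both the synchronization and the boxing overhead (the class and method names are illustrative):

public class BracketChecker {

    // Returns true if every bracket in the input is properly matched.
    // Uses a primitive char stack: no java.util.Stack locking, no Character boxing.
    public static boolean isBalanced(CharSequence input) {
        char[] stack = new char[input.length()];   // worst case: all opening brackets
        int top = -1;
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '(', '[', '{' -> stack[++top] = c;
                case ')' -> { if (top < 0 || stack[top--] != '(') return false; }  // early exit on visible imbalance
                case ']' -> { if (top < 0 || stack[top--] != '[') return false; }
                case '}' -> { if (top < 0 || stack[top--] != '{') return false; }
                default -> { /* ignore non-bracket characters */ }
            }
        }
        return top == -1;   // balanced only if nothing is left unmatched
    }
}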
The important design considerations here: first, can the caller realistically recover from the error? If yes, use a checked exception like IOException, for example for a missing config file or a temporary network issue; the caller might retry, show an error message, or prompt for a valid path. If no, meaning the program cannot or should not recover, use an unchecked exception, for example for misconfiguration, developer bugs, or logic errors. The second point is responsibility and abstraction boundaries. Lower-level libraries and infrastructure (IO, DB, networking) should throw checked exceptions: they expose real, expected failures, and the caller decides what to do. Higher-level business logic or services can wrap checked exceptions into domain-specific unchecked exceptions; this hides low-level details and simplifies upper-layer handling. Third is the API usability versus robustness trade-off: checked exceptions encourage robust error handling but can make APIs verbose and hard to use, while unchecked exceptions make code cleaner but risk runtime crashes. So the rule of thumb is: in library or infrastructure code, prefer checked exceptions to inform the caller early; in the application or domain layer, prefer unchecked exceptions. The fourth point is consistency: whatever we choose, be consistent across the codebase or library. If some methods throw checked exceptions while others wrap them, it leads to confusion and improper handling. In summary, the key design decision depends on who can handle the error and how recoverable it is. If the caller can recover, by retrying or providing a new file path, throw a checked exception such as IOException. If the error is unrecoverable, or you want to abstract away the low-level details, wrap it in an unchecked exception, ideally a domain-specific one, not something raw like RuntimeException.
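A small sketch of the wrapping pattern described above (ConfigLoadException and the config-loading methods are made up for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Domain-specific unchecked exception: hides low-level IO details from upper layers
class ConfigLoadException extends RuntimeException {
    ConfigLoadException(String message, Throwable cause) {
        super(message, cause);
    }
}

public class ConfigLoader {

    // Lower-level method: exposes the recoverable failure as a checked exception,
    // so the caller can retry or prompt for a valid path
    static String readRaw(Path path) throws IOException {
        return Files.readString(path);
    }

    // Higher-level method: wraps the checked exception into a domain-specific unchecked one
    static String loadConfig(String location) {
        try {
            return readRaw(Path.of(location));
        } catch (IOException e) {
            throw new ConfigLoadException("Could not load config from " + location, e);
        }
    }
}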
Sure. In one of my back-end projects, I applied the factory pattern in combination with a singleton for a notification service that supported multiple channels: email, WhatsApp, and SMS. Each channel had a different integration provider, like SMTP, Twilio, or Meta APIs, and I needed a clean way to instantiate the correct sender without cluttering the business logic with if-else or switch statements. So I created a notification factory class responsible for returning the right implementation of the notification sender interface based on the channel type. The factory pattern helped centralize the object-creation logic, so adding new channels like Slack or push notifications required no change to existing service logic, just a new class implementing the interface. The singleton also ensured that only one factory instance existed across the application, reducing unnecessary object creation and improving performance. Overall, these patterns made the design more maintainable and scalable as the system grew.
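A condensed sketch of the structure described above; the interface and class names are my own reconstruction, not the project's actual code:

// Channel abstraction: each provider implements this interface
interface NotificationSender {
    void send(String recipient, String message);
}

class EmailSender implements NotificationSender {          // e.g. backed by SMTP
    public void send(String recipient, String message) { /* provider call here */ }
}

class SmsSender implements NotificationSender {            // e.g. backed by Twilio
    public void send(String recipient, String message) { /* provider call here */ }
}

// Factory as an enum singleton: exactly one instance, centralized creation logic
enum NotificationFactory {
    INSTANCE;

    NotificationSender forChannel(String channel) {
        return switch (channel) {
            case "EMAIL" -> new EmailSender();
            case "SMS" -> new SmsSender();
            // Adding a new channel (e.g. Slack) means one new case and one new class,
            // with no change to the calling business logic
            default -> throw new IllegalArgumentException("Unknown channel: " + channel);
        };
    }
}

Business code then only asks the factory for a sender, e.g. NotificationFactory.INSTANCE.forChannel("EMAIL").send(to, msg), and never touches the concrete provider classes.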