
Prem Vijaykar

Vetted Talent
Seeking a quality environment that enables me to work with emerging and cutting-edge technologies that play a vital role in the organization's growth, gives me scope to widen the spectrum of my skills and knowledge, and helps me grow into a successful employee in those technologies.
  • Role

    Data Engineer

  • Years of Experience

    3 years

Skillsets

  • Apache Airflow
  • Azure Data Factory
  • Azure Databricks
  • Data Ingestion
  • Data Transformation
  • DWH Concepts
  • Informatica
  • Linux
  • Python
  • Snowflake
  • SnowSQL
  • SQL
  • NiFi SQL

Vetted For

9 Skills
  • Senior Data Engineer With Snowflake (Remote) - AI Screening
  • Result: 50%
  • Skills assessed: Azure Synapse, Communication Skills, DevOps, CI/CD, ELT, Snowflake, Snowflake SQL, Azure Data Factory, Data Modelling
  • Score: 45/90

Professional Summary

3 Years
  • Jan 2023 - Present (3 yr 1 month)

    Data Engineer

    Tech Mahindra

Applications & Tools Known

  • Extract, Transform, Load (ETL)
  • Microsoft Azure
  • Data Lakes
  • Azure Data Factory
  • Data Management
  • Data Architecture
  • Data Modeling
  • DataStage
  • SQL
  • Teradata
  • Python
  • Star Schema
  • Data Ingestion
  • Modeling Tools
  • Batch Processing
  • Query Tuning
  • UDF
  • Stored Procedures
  • Datasets
  • Data Quality Assurance
  • Data Quality

Work History

3 Years

Data Engineer

Tech Mahindra
Jan 2023 - Present (3 yr 1 month)
    Working on a telecom-domain migration project, responsible for implementing a Unified Data Platform: migrating Teradata objects to Azure Databricks and monitoring the jobs using the orchestration tool Airflow.

Major Projects

2 Projects

Optus UDP-Migration

AI-Based Symmetric Answer Evaluation System

Education

  • Bachelor of Engineering

    Prof. Ram Meghe Institute of Technology and Research (2020)

Certifications

  • Microsoft Certified: Azure Data Engineer Associate

    Microsoft

AI-Interview Questions & Answers


Hi, I am Prem Vijaykar. I have 3 years of experience as a data engineer. Currently, I am working in...

Okay, I need more time. Use a version control system like Git to track changes to your SQL scripts and other artifacts related to data transformation. Write data transformations as SQL scripts or stored procedures; by keeping the transformations in scripts, we can easily track changes and roll back to a previous version if needed. Naming conventions: establish clear naming conventions for your SQL scripts, stored procedures, and other objects. Documentation: document your data transformations. Testing: implement automated testing for your data transformations. Continuous integration / continuous deployment: integrate the version control system with a CI/CD pipeline to automate deployment and testing of data transformations.
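
For illustration, a minimal sketch of that "transformation as a version-controlled stored procedure" idea in Snowflake SQL is shown below; the procedure and table names (load_dim_customer, stg_orders, dim_customer) are hypothetical and not taken from the profile.

    -- Minimal sketch: a transformation kept as a stored procedure so the script
    -- itself can live in Git and be deployed through a CI/CD pipeline.
    -- load_dim_customer, stg_orders and dim_customer are illustrative names.
    CREATE OR REPLACE PROCEDURE load_dim_customer()
    RETURNS STRING
    LANGUAGE SQL
    AS
    $$
    BEGIN
        -- Rebuild the dimension idempotently, so re-deploying or rolling back
        -- to an earlier script version is safe to re-run.
        CREATE OR REPLACE TABLE dim_customer AS
        SELECT customer_id,
               INITCAP(customer_name) AS customer_name,
               country,
               CURRENT_TIMESTAMP()    AS loaded_at
        FROM stg_orders
        QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;

        RETURN 'dim_customer rebuilt';
    END;
    $$;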

What do you recommend for... Use of try-catch-style blocks: Snowflake Scripting supports exception-handling blocks, which are used to encapsulate code that might raise an error and to catch specific types of errors so they can be handled appropriately. Logging and alerting: log error messages and related information to a logging table or an external logging system. Transaction management: wrap executable statements within explicit transactions to ensure data consistency. Also graceful error handling, error recovery, monitoring and metrics, and testing and validation. These are the practices I would recommend for error handling in Snowflake.
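
A minimal sketch of that pattern in Snowflake Scripting follows; the error_log table and the failing INSERT are hypothetical, shown only to illustrate catching, logging, and surfacing an error.

    -- Anonymous Snowflake Scripting block: try the statement, log any failure.
    -- error_log, target_table and stg_table are illustrative names.
    EXECUTE IMMEDIATE
    $$
    BEGIN
        INSERT INTO target_table SELECT * FROM stg_table;   -- statement that might fail
        RETURN 'load succeeded';
    EXCEPTION
        WHEN OTHER THEN
            -- Record the error for later alerting, then surface it to the caller.
            INSERT INTO error_log (error_time, error_code, error_message)
            VALUES (CURRENT_TIMESTAMP(), :sqlcode, :sqlerrm);
            RETURN 'load failed: ' || sqlerrm;
    END;
    $$;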

Ways to optimize ELT processes when landing semi-structured data include the right storage: store semi-structured data in Snowflake using the VARIANT data type, which allows flexibility in handling different kinds of semi-structured data like JSON and XML, and consider using separate tables or stages for different types of semi-structured data to optimize query performance. Partitioning and clustering: use Snowflake's clustering feature to optimize query performance; clustering helps organize data physically on disk, reducing the amount of data scanned during queries, and dividing data into smaller, more manageable chunks can further improve query performance. Data ingestion: use Snowflake's COPY INTO command for efficient bulk ingestion from various sources, including semi-structured data files stored in cloud platforms like Amazon S3 or Azure Blob Storage; the command supports parallel loading, compression, and automatic file format detection, optimizing ingestion performance. Also: query optimization, materialized views, auto-scaling, query profiling and optimization, and data compression and storage. These are all ways to optimize the ELT process.
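
A minimal sketch of the storage and ingestion steps, assuming an external stage named raw_stage and a JSON feed (both hypothetical):

    -- Land semi-structured data into a VARIANT column, then query it directly.
    -- raw_events and @raw_stage are illustrative names.
    CREATE OR REPLACE TABLE raw_events (event VARIANT);

    -- Bulk-load JSON files from cloud storage (e.g. S3 or Azure Blob behind the stage).
    COPY INTO raw_events
    FROM @raw_stage/events/
    FILE_FORMAT = (TYPE = 'JSON');

    -- Pull typed fields out of the semi-structured documents.
    SELECT event:customer.id::STRING AS customer_id,
           event:amount::NUMBER      AS amount
    FROM raw_events;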

The Snowflake Information Schema, Snowflake performance dashboards, Snowflake worksheets, and Snowflake query profiling are the techniques used for Snowflake performance tuning and query optimization.
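
For example, query profiling can start from the Information Schema's QUERY_HISTORY table function; the one-day window and row limits below are arbitrary illustration values:

    -- Find the slowest statements from the last day as candidates for tuning.
    SELECT query_id,
           query_text,
           warehouse_name,
           total_elapsed_time / 1000 AS elapsed_seconds
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
             END_TIME_RANGE_START => DATEADD('day', -1, CURRENT_TIMESTAMP()),
             RESULT_LIMIT         => 1000))
    ORDER BY total_elapsed_time DESC
    LIMIT 10;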

The query is attempting to create a materialized view from the raw_data table where the status is active and the created date is within the last 30 days. The issue is with the closing curly brace, which would cause a syntax error. Also, GETDATE is a SQL Server function; if the database system used doesn't support it, the query will fail, and the DATEDIFF function's usage might differ across database systems. In addition, there is a missing closing parenthesis after the table reference raw_data.
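
The reviewed query itself is not reproduced in this profile, so the following is only a hypothetical reconstruction of the intended logic, using Snowflake date functions in place of SQL Server's GETDATE/DATEDIFF; it is written as a regular view because Snowflake materialized views do not allow non-deterministic functions such as CURRENT_DATE:

    -- Hypothetical Snowflake rewrite of the reviewed query.
    -- raw_data, status and created_date are assumed names from the transcript.
    CREATE OR REPLACE VIEW active_recent_raw_data AS
    SELECT *
    FROM raw_data
    WHERE status = 'active'
      AND created_date >= DATEADD('day', -30, CURRENT_DATE());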


Clustering keys, unique constraints, materialized views, ELT process optimizations, Snowflake's deduplication features, and monitoring and optimization. With these techniques and by utilizing Snowflake's built-in features, we can implement data deduplication effectively while minimizing the impact on query performance.
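
One common deduplication pattern (a sketch; orders_raw, order_id, and updated_at are hypothetical) keeps only the latest row per business key when the table is rebuilt, so downstream queries never pay for duplicate filtering:

    -- Deduplicate on write: keep the most recent record per order_id.
    CREATE OR REPLACE TABLE orders_dedup AS
    SELECT *
    FROM orders_raw
    QUALIFY ROW_NUMBER() OVER (
        PARTITION BY order_id       -- assumed business key
        ORDER BY updated_at DESC    -- keep the latest version of each record
    ) = 1;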

To use dbt (data build tool) in conjunction with Snowflake to transform data for complex reporting needs, we can follow these general steps: setup, project initialization, connection configuration, modeling, testing, documentation, running dbt, deployment, scheduled runs, and monitoring and maintenance.
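
As an illustration of the modeling step (the model and source names fct_daily_revenue and stg_orders are hypothetical), a dbt model is just a SELECT statement that dbt materializes in Snowflake according to its config:

    -- models/fct_daily_revenue.sql  (hypothetical dbt model)
    {{ config(materialized='table') }}

    SELECT order_date,
           SUM(amount) AS daily_revenue
    FROM {{ ref('stg_orders') }}    -- upstream staging model managed by dbt
    GROUP BY order_date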

Connectivity: ADF offers various connectors, including HTTP, REST, and Web activities, which can be used to interact with third-party APIs. Authentication: many APIs require authentication, and ADF supports various authentication methods, allowing you to authenticate securely with those APIs. Pagination: some APIs paginate results to limit the number of records returned in each response; ADF lets you handle pagination using looping constructs or custom scripts within pipeline activities to retrieve all the desired data. Also: rate limits, error handling, monitoring and logging, custom activities, and data processing.