uannabi/README.md

👋 Hey there,

I am a Data Scientist and Engineer with over a decade of experience building and optimizing end-to-end data systems. My expertise spans the entire data lifecycle: designing user interfaces, capturing data through APIs, and processing, storing, and moving data into lakes and warehouses for machine learning and analytics. I have extensive experience with Python, R, and SQL for programming; Tableau, Matplotlib, and Seaborn for data visualization; and cloud platforms such as AWS, Snowflake, Redshift, and BigQuery for scalable data solutions.

Over the years, I have designed and implemented scalable ETL pipelines processing over 10 million records daily, ensuring efficient data transformation and actionable insights. My data migration work includes transitioning over 500 TB of data from MongoDB to Snowflake and PostgreSQL with 100% accuracy, while improving scalability by 40% and reducing processing time by 30% through innovations such as integrating Dask with FastAPI. I have also developed dashboards monitoring over 50 KPIs, providing real-time insights that improved forecast accuracy by 20%, reduced operational costs by 10%, and contributed to 25% year-over-year growth in business performance.

My strong background in system design and optimization allows me to deliver a seamless data journey, enabling a 360-degree view of user lifecycles and supporting critical decision-making. I am proficient with version control using Git and GitHub for collaborative development, and I have implemented CI/CD pipelines using GitLab and Jenkins to streamline workflows.

My passion lies in transforming complex datasets into meaningful narratives that empower organizations to make data-driven decisions. Whether it's developing machine learning models, creating interactive visualizations, or designing data pipelines, I am committed to delivering high-impact solutions that drive growth and innovation.
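The extract-transform-load flow described above, streaming records one at a time so memory stays flat no matter the daily volume, can be sketched with plain Python generators. The record schema, the toy in-memory source, and the list standing in for a warehouse sink are hypothetical illustrations, not details of the actual MongoDB-to-Snowflake migration.

```python
from typing import Iterable, Iterator

def extract(rows: Iterable[dict]) -> Iterator[dict]:
    # A real extractor would stream from a source such as MongoDB;
    # here it simply yields the rows it is given.
    yield from rows

def transform(rows: Iterable[dict]) -> Iterator[dict]:
    # Normalize one record at a time, so throughput scales without
    # materializing the whole dataset in memory.
    for row in rows:
        yield {**row, "name": row["name"].strip().title()}

def load(rows: Iterable[dict], sink: list) -> int:
    # A real loader would batch inserts into a warehouse
    # (Snowflake, PostgreSQL, ...); a list stands in here.
    count = 0
    for row in rows:
        sink.append(row)
        count += 1
    return count

source = [{"id": 1, "name": "  ada lovelace "}, {"id": 2, "name": "alan turing"}]
warehouse: list = []
loaded = load(transform(extract(source)), warehouse)
```

Because each stage is a generator, stages compose lazily: a record is extracted, transformed, and loaded before the next one is read.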

LinkedIn Medium Kaggle LeetCode

Snake animation

📞 Let's Connect!

Connect on LinkedIn.

🧰 Tools I have experience with

Python · R · Django · Scala · Pandas · Plotly · Anaconda · Jupyter · NumPy · Databricks · Metabase · Docker · Kubernetes · Git · Linux · Apache Spark · Elasticsearch · Amazon AWS · Google Cloud · Google Colab · Kaggle · Tableau


class DataAnalyticsEngineer:
    def __init__(self, name, degree, languages, tools):
        self.name = name
        self.degree = degree
        self.languages = languages
        self.tools = tools
        self.stakeholders = []

    def introduction(self):
        return f"Here I am, {self.name}, fortified by a robust passion for converting complex data into groundbreaking solutions, all backed by the collaborative and versioning strengths of GitHub."

    def educational_background(self):
        return f"I hold a {self.degree} in Computer Science and Engineering."

    def technical_skills(self):
        languages = ', '.join(self.languages)
        tools = ', '.join(self.tools)
        return f"Further enhanced by a mastery of programming languages like {languages}, and proficient use of cutting-edge analytics tools like {tools}."

    def data_capabilities(self):
        return "My background spans data engineering, predictive analytics, data visualization, and machine learning, proving my ability to turn multifaceted data into compelling narratives and actionable insights."

    def unique_selling_points(self):
        return "What sets me apart is my technical acumen and my deep-rooted understanding of leveraging data as a strategic asset for solving complex business issues and driving informed decisions."

    def collaboration(self):
        stakeholders = ', '.join(self.stakeholders)
        return f"Thriving in collaborative settings, I effortlessly engage with stakeholders at every organizational level: {stakeholders}."

    def ambition(self, organization):
        return f"As a proactive self-starter and an invaluable team player, I am eager to deploy my broad data engineering and analytics skills to elevate {organization} to new heights of excellence."

    def add_stakeholder(self, stakeholder):
        self.stakeholders.append(stakeholder)

    def full_profile(self):
        return f"{self.introduction()}\n{self.educational_background()}\n{self.technical_skills()}\n{self.data_capabilities()}\n{self.unique_selling_points()}\n{self.collaboration()}\n{self.ambition('LSEG')}"

if __name__ == "__main__":
    engineer = DataAnalyticsEngineer(name="Zahid Un Nabi", 
                                     degree="Bachelor's Degree", 
                                     languages=['Python', 'SQL'], 
                                     tools=['Tableau'])

    engineer.add_stakeholder('Business Analysts')
    engineer.add_stakeholder('Data Scientists')
    engineer.add_stakeholder('Product Managers')

    print(engineer.full_profile())

  • πŸ”­ Currently working at Post Trade London Clearing House LSEG
  • 🌱 I’m currently working with a large number of financial data.
  • πŸ‘― No collaboration !=01000101 01001111 01000100
  • πŸ€” I’m looking for 01011111 01011111 01101001 01101110 01101001 01110100 01011111 01011111
  • πŸ’¬ Ask me about 01100100 01100001 01110100 01100001 00100000

Articles on Medium

  • The 5 Vs of Big Data Analysis
  • Build your own data lake: Data Lake on AWS
  • ETL explained using AWS: ETL Techniques
  • First Missing Positive: problem-solving
  • Train your personal AI Model


Pinned

  1. LinearRegrassionPySpark (Public)

     Machine Learning algorithm: Linear Regression using PySpark (Jupyter Notebook)

  2. PySparkExercise (Public)

     Basic PySpark problem-solving using Jupyter Notebook (Jupyter Notebook)

  3. Python-dash-tw-sentiment (Public)

     Sentiment analysis using Python (Python)

  4. RxPY (Public, forked from ReactiveX/RxPY)

     Reactive Extensions for Python (Python)

  5. soul-stone (Public)

     Serverless service using Lambda and a tweet stream (Python)

  6. SparkDataFrame (Public)

     PySpark DataFrame basics (Jupyter Notebook)