
Hello!/Hola!/안녕하세요!/Salut! 👋

My name is Hyun Jun Lee. I was born and raised in Mexico and am currently pursuing a bachelor's degree in Software Engineering/Computer Science at Sung Kyun Kwan University in South Korea, where I am in my 4th year.

🔭 Currently in the works...

  • Finance bots to support trading in markets that are not yet well developed
  • Tuning NLP models for sentiment classification of Reddit posts on topics of interest
  • A recommender system for Korean cosmetics (e-commerce)

📫 How to reach me: ...

  • E-mail: [email protected] 📫
  • Socials: @hyunjunLC 😄

Popular repositories

  1. MNIST-CNN-handwritten-letters-recognition

    Convolutional neural network models

    Python

  2. Recommender-Systems-Collaborative-Filtering--of-yelp-ratings

    Using collaborative filtering to build a recommender system on the Yelp ratings dataset (see the sketch after this list)

    Python

  3. Multi-Label-Sentiment-Analysis-in-Tweets-Using-BERT

    Code to train a BERT model for multi-class sentiment classification. Using a dataset of tweets, it trains the model to classify a fragment of text into the 6 emotions e… (a fine-tuning sketch follows this list)

    Jupyter Notebook

  4. K-Means-clustering-of-articles-related-to-big-data-Wikipedia

    Using Beautiful Soup, extract all articles in the fields related to the Big Data article on Wikipedia (50). Then extract the contents and use TfidfVectorizer to get the TF-IDF vector of each arti… (a sketch of this pipeline follows the list)

    Jupyter Notebook

  5. hleec1

  6. SWE_2021_41_2024_1_week_2

    Repository for Open Source Assignment Week 2
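
The collaborative-filtering repo (item 2) can be illustrated with a minimal user-based sketch. This is not the repo's actual code: the toy ratings, the column names (user_id, business_id, stars, modeled on Yelp's schema), and the cosine-similarity weighting are all assumptions.

```python
# Minimal user-based collaborative filtering sketch; the toy data and
# column names (user_id, business_id, stars) are assumptions, not the repo's code.
import numpy as np
import pandas as pd

ratings = pd.DataFrame({
    "user_id":     ["u1", "u1", "u2", "u2", "u3", "u3"],
    "business_id": ["b1", "b2", "b1", "b3", "b2", "b3"],
    "stars":       [5, 3, 4, 2, 4, 5],
})

# Pivot into a user x business matrix; unrated items become 0.
R = ratings.pivot_table(index="user_id", columns="business_id", values="stars").fillna(0)

# Cosine similarity between users.
norms = np.linalg.norm(R.values, axis=1, keepdims=True)
sim = (R.values @ R.values.T) / (norms @ norms.T + 1e-9)

def predict(user: str) -> pd.Series:
    """Score every business for `user` as a similarity-weighted
    average of the other users' ratings."""
    i = R.index.get_loc(user)
    weights = sim[i].copy()
    weights[i] = 0.0  # exclude the user themselves
    scores = (weights @ R.values) / (weights.sum() + 1e-9)
    return pd.Series(scores, index=R.columns)

print(predict("u1").sort_values(ascending=False))
```

A production version would typically mean-center each user's ratings and keep only the k most similar neighbors; both are skipped here for brevity.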
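For the BERT repo (item 3), here is a minimal fine-tuning sketch using Hugging Face transformers. The bert-base-uncased checkpoint and the six-emotion label set are assumptions, not confirmed details of the repo.

```python
# Minimal BERT fine-tuning sketch for multi-class sentiment classification.
# The checkpoint name and label set are assumptions, not the repo's code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

texts = ["i am feeling wonderful today", "this traffic makes me furious"]
targets = torch.tensor([1, 3])  # joy, anger

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One training step: the model computes cross-entropy loss internally
# when `labels` is passed.
model.train()
out = model(**batch, labels=targets)
out.loss.backward()
optimizer.step()

# Inference: pick the highest-scoring emotion per tweet.
model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print([LABELS[i] for i in logits.argmax(dim=-1)])
```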
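And for the Wikipedia clustering repo (item 4), a sketch of the described pipeline: fetch pages with requests and Beautiful Soup, vectorize with TfidfVectorizer, cluster with KMeans. The article titles, cluster count, and vectorizer settings are illustrative stand-ins, not the repo's 50-article setup.

```python
# Sketch of the scrape -> TF-IDF -> k-means pipeline; the titles and
# parameters are illustrative, not the repo's actual configuration.
import requests
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

titles = ["Big_data", "Data_mining", "Machine_learning", "Apache_Hadoop"]

def fetch_text(title: str) -> str:
    """Download a Wikipedia article and keep only its paragraph text."""
    html = requests.get(f"https://en.wikipedia.org/wiki/{title}", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(p.get_text(" ", strip=True) for p in soup.select("p"))

docs = [fetch_text(t) for t in titles]

# TF-IDF vectors, then k-means on the (sparse) document-term matrix.
X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for title, label in zip(titles, km.labels_):
    print(label, title)
```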