Submission Team -> Suraksha Nivesh #10

Open: wants to merge 27 commits into main
78 changes: 71 additions & 7 deletions README.md

## README.md must consist of the following information:

#### Team Name - SurakshaNivesh
#### Problem Statement - Identifying Misleading Claims
#### Team Leader Email - [email protected]

## A Brief of the Prototype:
This section must include UML Diagrams and prototype description

The developed prototype of SurakshaNivesh offers a tangible glimpse into the platform's envisioned functionality and user experience. It is a dynamic visualization of the innovative solutions that SurakshaNivesh brings to the realm of investment. The frontend is built in React, with the backend and AI algorithms in Python.

Through the prototype, users can interact with a simulated version of the platform's interface, exploring key features such as influencer credibility analysis, real-time sentiment monitoring, personalized recommendations, and collaboration with regulatory authorities. Each feature is showcased in an intuitive manner, allowing users to understand how they function and contribute to safeguarding investments.

The prototype's interface mirrors the actual user journey, from navigating the dashboard to accessing educational resources and receiving real-time alerts. It encapsulates the seamless integration of advanced technologies, including machine learning algorithms, real-time data analysis, and sentiment analysis, all orchestrated to provide users with accurate information and proactive protection.
<img width="1114" alt="Screenshot 2023-08-24 at 4 03 16 PM" src="https://github.com/yashramani7/Empowering-Investors-Hackathon/assets/76622287/3b59f243-0f2b-4cdf-991d-8371f192ccbb">
<img width="1114" alt="Screenshot 2023-08-24 at 4 03 29 PM" src="https://github.com/yashramani7/Empowering-Investors-Hackathon/assets/76622287/59ab72dd-16a6-420c-8b2e-fef0330f23ff">



## Tech Stack:
List Down all technologies used to Build the prototype

**Frontend:**
- React (frontend framework)
- Redux (state management)
- HTML, CSS, JavaScript (markup and styling)
- Axios (HTTP requests)
- Material-UI or other UI libraries (user interface components)
- React Router (navigation)

**Backend:**
- Node.js (backend runtime environment)
- Express.js (backend framework)
- MongoDB or MySQL (database management)
- Mongoose (MongoDB ODM) or Sequelize (MySQL ORM)
- Passport.js (authentication)
- JSON Web Tokens (JWT) (authentication and authorization)
- REST API or GraphQL (API communication)

**Machine Learning and AI:**
- Python (machine learning and data processing)
- scikit-learn, TensorFlow, or PyTorch (machine learning libraries)
- Natural Language Processing (NLP) libraries (spaCy, NLTK)
- Sentiment analysis libraries

**Real-time Monitoring:**
- WebSocket (real-time communication)

**Data Analysis and Anomaly Detection:**
- Python (data analysis and processing)
- pandas, NumPy (data manipulation and analysis)
- Machine learning libraries for anomaly detection (scikit-learn, Isolation Forest)

**Collaboration with Regulatory Authorities:**
- APIs for regulatory authorities' systems

**Testing and Deployment:**
- Jest, React Testing Library (frontend testing)
- Mocha, Chai (backend testing)
- Continuous integration and deployment tools (e.g., Jenkins, Travis CI)
- Docker (containerization)
- Nginx or Apache (web server)
- Cloud platforms for deployment (e.g., AWS, Heroku)

**Version Control and Collaboration:**
- Git (version control)
- GitHub, GitLab, Bitbucket (code hosting and collaboration)

**Others:**
- Postman (API testing)
- VS Code or preferred code editor
- Command-line tools for development and deployment
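As an illustrative sketch of the anomaly-detection pieces listed above, here is a minimal, hypothetical example using scikit-learn's `IsolationForest` on synthetic daily returns (the data and parameters are made up for illustration; this is not the prototype's actual pipeline):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic daily closing prices with one injected pump-like jump on day 40
# (hypothetical series, not real market data)
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 60))
prices[40] += 25  # abrupt manipulated-looking spike

# Daily returns are a simple feature for flagging abnormal moves
returns = np.diff(prices) / prices[:-1]

# Fit an isolation forest; -1 marks anomalies, 1 marks normal observations
model = IsolationForest(contamination=0.05, random_state=0)
labels = model.fit_predict(returns.reshape(-1, 1))

anomalous_days = np.where(labels == -1)[0] + 1  # +1: returns lag prices by one day
print("Days flagged as anomalous:", anomalous_days)
```

Isolation forests isolate outliers with short random partitioning paths, which makes them a reasonable first pass at flagging pump-like moves without labeled fraud data.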

## Step-by-Step Code Execution Instructions:
This Section must contain a set of instructions required to clone and run the prototype so that it can be tested and deeply analyzed

The prototype is built on React.js, and all the files are in this repo.
To make demonstration easier, I have also hosted it on Figma's prototype server, so you can take a look through either of the following links:
https://bit.ly/surakshanivesh
or
https://www.figma.com/proto/pw7ZcD3tLDpszEpGGSe3wH/SurakashNivesh?page-id=0%3A1&type=design&node-id=3-1297&viewport=1020%2C175%2C0.16&t=uZWIy4pEp209U437-1&scaling=min-zoom&starting-point-node-id=3%3A1297&mode=design

The app is in the prototype phase as of now and contains dummy data.

## What I Learned:
Write about the biggest learning you had while developing the prototype
Embarking on the development of the SurakshaNivesh prototype as a solo endeavor was a remarkable learning experience that brought profound insights into technology implementation. Among the pivotal takeaways was the integration of diverse AI algorithms, which demanded an in-depth grasp of orchestrating machine learning models, real-time data analysis, and sentiment analysis techniques. The journey underscored the transformative potential of harnessing advanced AI technologies to build a cohesive platform that empowers investors with informed decision-making capabilities.

Additionally, crafting a user-centric design revealed the significance of creating intuitive interfaces that resonate with users while harmonizing with the complexity of advanced functionality. The multifaceted nature of the project demanded hands-on engagement across disciplines, spanning the roles of UI/UX designer, backend and frontend developer, data scientist, and even regulatory compliance expert.

Furthermore, working independently granted a firsthand understanding of aligning AI-driven features with stringent regulatory norms, shedding light on the intricate compliance landscape. Ultimately, conceptualizing and building the SurakshaNivesh prototype enhanced my technical acumen while revealing the immense potential that AI algorithms hold in shaping innovative financial technology solutions.
43 changes: 43 additions & 0 deletions SurakshaNivesh/Algos/SEBIRULESample.py
import re
import spacy

# Load the spaCy model
nlp = spacy.load("en_core_web_sm")

# Sample text extracted from Telegram group or YouTube video
extracted_text = """
Trader A: Buy Reliance at 2400, it's a sure shot tip!
Trader B: Let's all buy Infosys at 1000 for guaranteed profit!
"""

# Sample SEBI norms rules
sebi_rules = {
    'stock_price_range': (0, 5000),
    'prohibited_phrases': ['guaranteed profit', 'sure shot tip']
}

# Function to check for SEBI norm violations using NLP
def check_sebi_norm_violations_advanced(text, rules):
    violations = []
    doc = nlp(text)

    for rule, condition in rules.items():
        if rule == 'stock_price_range':
            for ent in doc.ents:
                if ent.label_ == 'MONEY':
                    try:
                        price = float(ent.text.replace(',', ''))
                    except ValueError:
                        continue
                    # A price *outside* the allowed range is the violation
                    if not (condition[0] <= price <= condition[1]):
                        violations.append(f"Violation: Stock price out of range - {ent.text}")
        elif rule == 'prohibited_phrases':
            for sent in doc.sents:
                for phrase in condition:
                    if phrase in sent.text.lower():
                        violations.append(f"Violation: Prohibited phrase - {phrase}")
    return violations

# Check for SEBI norm violations using advanced NLP
violations = check_sebi_norm_violations_advanced(extracted_text, sebi_rules)
if violations:
    print("SEBI Norm Violations Detected:")
    for violation in violations:
        print(violation)
else:
    print("No SEBI Norm Violations Detected")
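One caveat: the small `en_core_web_sm` model does not always tag bare numbers such as `2400` as `MONEY` entities, so a plain-regex fallback can make the price-range check more robust. The `extract_prices` helper below is a hypothetical sketch, not part of the file above:

```python
import re

def extract_prices(text, price_range=(0, 5000)):
    """Find bare numeric prices (e.g. 'at 2400') and flag any outside the range."""
    violations = []
    for match in re.findall(r'\bat\s+(\d+(?:\.\d+)?)', text):
        price = float(match)
        if not (price_range[0] <= price <= price_range[1]):
            violations.append(f"Violation: Stock price out of range - {match}")
    return violations

sample = "Trader C: Buy PennyStock at 9999, guaranteed profit!"
print(extract_prices(sample))  # flags 9999 as out of the (0, 5000) range
```

In practice the spaCy entity pass and the regex fallback could feed the same violations list.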
28 changes: 28 additions & 0 deletions SurakshaNivesh/Algos/SentimentIntensityAnalyser.py
import pytesseract
from PIL import Image
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# Download the VADER lexicon on first run (no-op if already present)
nltk.download('vader_lexicon', quiet=True)

# Perform OCR on an image to extract text
def extract_text_from_image(image_path):
    img = Image.open(image_path)
    text = pytesseract.image_to_string(img)
    return text

# Analyze text sentiment using NLTK's SentimentIntensityAnalyzer
def analyze_sentiment(text):
    sia = SentimentIntensityAnalyzer()
    sentiment_scores = sia.polarity_scores(text)
    compound = sentiment_scores['compound']
    sentiment = "positive" if compound > 0 else "negative" if compound < 0 else "neutral"
    return sentiment, sentiment_scores

# Example usage
image_path = 'screenshot.png'
text = extract_text_from_image(image_path)
sentiment, sentiment_scores = analyze_sentiment(text)

print("Extracted Text:")
print(text)
print("Sentiment:", sentiment)
print("Sentiment Scores:", sentiment_scores)

62 changes: 62 additions & 0 deletions SurakshaNivesh/Algos/credibilityscorechecker.py
import numpy as np

# Sample trader's past recommendations and their outcomes
trader_history = {
    'Buy Reliance at 2400': 'Profit',
    'Sell TATA Motors at 300': 'Loss',
    'Buy Infosys at 1500': 'Profit',
    'Sell HDFC Bank at 1100': 'Profit',
    'Buy Maruti at 7000': 'Loss',
    'Sell ICICI Bank at 400': 'Profit'
}

# Sample stock market data for the calculation
stock_prices = {
    'Reliance': [2300, 2405, 2380, 2450, 2490],
    'TATA Motors': [310, 290, 305, 290, 280],
    'Infosys': [1600, 1550, 1605, 1620, 1650],
    'HDFC Bank': [1050, 1105, 1120, 1150, 1170],
    'Maruti': [7200, 7100, 6900, 7000, 6800],
    'ICICI Bank': [420, 410, 400, 390, 395]
}

# Calculate credibility score based on accuracy, consistency, and risk-adjusted returns
def calculate_credibility_score(history, stock_data):
    total_recommendations = len(history)
    successful_recommendations = sum(1 for outcome in history.values() if outcome == 'Profit')

    consistency_factor = calculate_consistency(history)
    risk_adjusted_returns = calculate_risk_adjusted_returns(history, stock_data)

    credibility_score = (successful_recommendations / total_recommendations) * consistency_factor * risk_adjusted_returns
    return credibility_score

# Calculate consistency factor (share of profitable calls)
def calculate_consistency(history):
    outcome_counts = {'Profit': 0, 'Loss': 0}
    for outcome in history.values():
        outcome_counts[outcome] += 1

    consistency_factor = outcome_counts['Profit'] / (outcome_counts['Profit'] + outcome_counts['Loss'])
    return consistency_factor

# Calculate risk-adjusted returns (mean return divided by its standard deviation)
def calculate_risk_adjusted_returns(history, stock_data):
    returns = []
    for recommendation, outcome in history.items():
        # Recommendations look like "Buy TATA Motors at 300": the stock name is
        # everything between the action verb and " at", which may be multi-word
        stock_name = recommendation.split(' at ')[0].split(' ', 1)[1]
        initial_price = stock_data[stock_name][0]
        final_price = stock_data[stock_name][-1]

        if outcome == 'Profit':
            returns.append((final_price - initial_price) / initial_price)
        else:
            returns.append((-1) * (final_price - initial_price) / initial_price)

    average_return = np.mean(returns)
    risk_adjusted_returns = average_return / np.std(returns)
    return risk_adjusted_returns

# Calculate and print the credibility score
credibility_score = calculate_credibility_score(trader_history, stock_prices)
print(f"Trader's Credibility Score: {credibility_score:.2f}")
48 changes: 48 additions & 0 deletions SurakshaNivesh/Algos/pumpanddump.py
import numpy as np
from scipy.signal import find_peaks
# stock data
np.random.seed(42)
num_days = 30
stock_prices = np.random.randint(50, 150, num_days)
trading_volumes = np.random.randint(1000, 5000, num_days)

# Define parameters
price_threshold_factor = 1.5   # Abnormal price increase threshold
volume_threshold_factor = 2.0  # Abnormal volume increase threshold
moving_average_window = 5      # Window size for calculating moving averages
price_anomaly_threshold = 0.3  # Abnormal deviation as a fraction of the moving average
price_peak_threshold = 0.8     # Threshold for detecting price peaks

# Detect potential pump and dump manipulation
for i in range(moving_average_window, num_days):
    moving_average_price = np.mean(stock_prices[i - moving_average_window : i])
    moving_average_volume = np.mean(trading_volumes[i - moving_average_window : i])

    current_price = stock_prices[i]
    current_volume = trading_volumes[i]

    price_increase_factor = current_price / moving_average_price
    volume_increase_factor = current_volume / moving_average_volume

    # Detect rapid price increase compared to moving average
    if price_increase_factor > price_threshold_factor:
        # Detect abnormal price deviation (a fraction, so 0.3 means 30% above average)
        price_anomaly = abs(current_price - moving_average_price) / moving_average_price
        if price_anomaly > price_anomaly_threshold:
            # Detect price peaks in the window indicating potential manipulation
            price_peaks, _ = find_peaks(stock_prices[i - moving_average_window : i], height=current_price * price_peak_threshold)
            if len(price_peaks) > 0:
                print(f"Potential pump and dump manipulation detected on Day {i+1}!")
                print(f"Price Anomaly: {price_anomaly:.2f}, Moving Average Price: {moving_average_price}")
                print(f"Current Price: {current_price}, Moving Average Volume: {moving_average_volume}")
                print(f"Price Peaks: {price_peaks + i - moving_average_window}")
                print("---")

    # Detect rapid volume increase compared to moving average
    if volume_increase_factor > volume_threshold_factor:
        print(f"Potential pump and dump manipulation detected on Day {i+1}!")
        print(f"Current Volume: {current_volume}, Moving Average Volume: {moving_average_volume}")
        print(f"Current Price: {current_price}, Moving Average Price: {moving_average_price}")
        print("---")

print("Detection process completed.")
30 changes: 30 additions & 0 deletions SurakshaNivesh/Algos/tradingcalltimestampalgo.py
import datetime
import re

# Sample video transcript with timestamps
video_transcript = """
[00:02] Speaker 1: Buy Reliance at 2400
[00:30] Speaker 2: Sell Reliance at 2500
[01:10] Speaker 1: Buy TATA Motors at 300
"""

# Sample stock price data with timestamps
stock_price_data = {
    'timestamp': [
        '00:00', '00:10', '00:20', '00:30', '00:40', '00:50', '01:00', '01:10'
    ],
    'price': [
        2300, 2350, 2370, 2385, 2398, 2410, 2450, 300
    ]
}

# Convert timestamps to datetime objects
stock_price_data['timestamp'] = [datetime.datetime.strptime(ts, '%M:%S') for ts in stock_price_data['timestamp']]

# Extract trading call timestamps ("[MM:SS]") from the transcript
call_timestamps = [datetime.datetime.strptime(ts, '%M:%S') for ts in re.findall(r'\[(\d{2}:\d{2})\]', video_transcript)]

# Compare each call timestamp with the closest stock price timestamp
for call_ts in call_timestamps:
    closest_price_index = min(range(len(stock_price_data['timestamp'])), key=lambda i: abs(stock_price_data['timestamp'][i] - call_ts))
    closest_price = stock_price_data['price'][closest_price_index]
    print(f"Trading call at {call_ts.strftime('%M:%S')}: Stock price at that time was {closest_price}")
19 changes: 19 additions & 0 deletions SurakshaNivesh/Algos/youtubevideotranscriptor.py
from youtube_transcript_api import YouTubeTranscriptApi

# Get video ID from the YouTube video URL
def get_video_id(video_url):
    # Take the value of the "v=" parameter, ignoring any trailing "&..." parameters
    return video_url.split("v=")[1].split("&")[0]

# Get video transcripts
def get_transcripts(video_url):
    video_id = get_video_id(video_url)
    transcripts = YouTubeTranscriptApi.get_transcript(video_id)
    text_transcripts = [entry['text'] for entry in transcripts]
    return '\n'.join(text_transcripts)

# Example usage
video_url = 'https://www.youtube.com/watch?v=VIDEO_ID'
transcripts = get_transcripts(video_url)

print("Transcripts:")
print(transcripts)
15 changes: 15 additions & 0 deletions SurakshaNivesh/BriefOfThePrototype/Description
# Brief

The prototype of SurakshaNivesh presents an interactive visualization of its robust set of features, cutting-edge algorithms, and advanced techniques. It offers a hands-on exploration of the platform's functionalities, highlighting its innovative solutions to complex investment challenges.

The prototype showcases the intricate workings of SurakshaNivesh's influencer credibility analysis, employing sophisticated algorithms that assess influencers' historical performance, accuracy of past recommendations, and patterns in their claims. Users can witness real-time sentiment analysis, which continuously monitors social media and investment forums, providing an overall sentiment score to gauge trustworthiness.

The visualization delves into the machine learning-based fraud detection mechanism, illustrating how the platform employs historical data analysis and anomaly detection algorithms to identify potential pump and dump schemes. Users can observe the platform's dynamic response to suspicious trading activities and abnormal price movements, ensuring a protective shield against manipulated stocks.
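As a simplified stand-in for the anomaly-detection mechanism described above, a rolling z-score over prices illustrates the idea (synthetic data and illustrative thresholds, not the platform's actual detector):

```python
import numpy as np

def rolling_zscore_anomalies(prices, window=5, threshold=3.0):
    """Flag days whose price deviates more than `threshold` standard
    deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(prices)):
        past = prices[i - window:i]
        mean, std = np.mean(past), np.std(past)
        if std > 0 and abs(prices[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies

# Flat prices with one manipulated-looking spike on day 10
prices = np.array([100, 101, 99, 100, 102, 101, 100, 99, 101, 100, 160, 101])
print(rolling_zscore_anomalies(prices))  # → [10]
```

A production detector would combine several such signals (volume, peak shape, order flow), but the rolling baseline is the common core.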

Educational resources are also spotlighted, showcasing the platform's library of articles, videos, and tutorials. The prototype reveals how these resources empower users with knowledge about investment scams, risk management, and fundamental analysis, thereby enhancing financial literacy and reducing vulnerability to misleading claims.

The prototype encapsulates SurakshaNivesh's personalized recommendation engine, leveraging machine learning to align investment suggestions with users' preferences and risk profiles. By simulating the process, users can witness the integration of influencer credibility, sentiment analysis, and historical data to generate tailored investment advice.
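A minimal sketch of how such signals might be combined: the weights, helper name, and dummy candidates below are illustrative assumptions, not the platform's actual model.

```python
def rank_recommendations(candidates, weights=(0.5, 0.3, 0.2)):
    """Rank candidate tips by a weighted blend of influencer credibility,
    sentiment score, and fit with the user's risk profile (all in [0, 1])."""
    w_cred, w_sent, w_risk = weights
    scored = [
        (tip, w_cred * cred + w_sent * sent + w_risk * risk_fit)
        for tip, cred, sent, risk_fit in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# (tip, credibility, sentiment, risk-profile fit) -- dummy values
candidates = [
    ("Buy Infosys", 0.9, 0.7, 0.8),
    ("Buy PennyStockX", 0.2, 0.9, 0.3),
]
print(rank_recommendations(candidates))
```

Note how a high-sentiment but low-credibility tip ranks below a well-rounded one, which is exactly the behavior the credibility analysis is meant to enforce.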

Furthermore, the prototype unveils the collaboration with regulatory authorities, underscoring how SurakshaNivesh actively reports and escalates suspicious activities. This feature demonstrates the platform's commitment to timely intervention and legal action against fraudulent schemes.

In summary, the SurakshaNivesh prototype provides a comprehensive view of its features, demonstrating the intricate interplay of advanced algorithms, real-time data analysis, machine learning, and collaborative mechanisms. It's an immersive insight into how the platform empowers investors to navigate the securities market with transparency, trust, and protection against misleading investment claims.