LibreChat is a free, open-source AI chat platform. This Web UI offers extensive customization, supporting numerous AI providers, services, and integrations. It serves all your AI conversations in one place with a familiar interface and innovative enhancements, for as many users as you need.
The full LibreChat documentation is available here.
Let's discover how to use LibreChat to create efficient and effective conversations with AI for developers.
Prompts history allows users to save and load prompts for their conversations and easily access them later. Reusing prompts saves time and effort, especially when working with multiple conversations, and helps keep track of the context and details of a conversation.
The presets feature allows users to save and load predefined settings for initializing a conversation. Users can import and export these presets as JSON files, set a default preset, and share them with others.
The prompts feature allows users to save and load predefined prompts to use during their conversations. You can invoke a prompt with the /[prompt command]. A prompt can have parameters, which are replaced with values when the prompt is used.
Example of a preformatted prompt: Explain the following code snippet in Java, Kotlin, or Javascript.
Click on the + button to add a new prompt.
Name your prompt: explain
On the Text tab, you can write your prompt:
Explain the following {{language:Java|Kotlin|Javascript}} snippet of code:
{{code}}
Now you can use the /explain command to get the explanation of the code snippet.
Azure OpenAI Service provides REST API access to OpenAI's powerful language models, including the o1-preview, o1-mini, GPT-4o, GPT-4o mini, GPT-4 Turbo with Vision, GPT-4, GPT-3.5-Turbo, and Embeddings model series.
Gemini is a large language model (LLM) developed by Google. It's designed to be a multimodal AI, meaning it can work with and understand different types of information, including text, code, audio, and images. Google positions Gemini as a highly capable model for a range of tasks, from answering questions and generating creative content to problem-solving and more complex reasoning. There are different versions of Gemini, optimized for different tasks and scales.
Claude is an Artificial Intelligence, trained by Anthropic. Claude can process large amounts of information, brainstorm ideas, generate text and code, help you understand subjects, coach you through difficult situations, help simplify your busywork so you can focus on what matters most, and so much more.
The Assistants API enables the creation of AI assistants, offering functionalities like code interpreter, knowledge retrieval of files, and function execution. The Assistants API allows you to build AI assistants within your own applications for specific needs. An Assistant has instructions and can leverage models, tools, and files to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, File Search, and Function calling.
The plugins endpoint opens the door to prompting LLMs in new ways other than traditional input/output prompting.
Warning
Every additional plugin selected will increase your token usage, as the LLM needs detailed instructions for each one. For best results, be selective with plugins per message and narrow your requests as much as possible.
DALL-E 3 is a LibreChat plugin for generating images from text. You can use it to generate images such as product images or illustrations for your technical documentation.
Wolf is a LibreChat plugin for WL Management System documents. The SharePoint documentation is available here.
Ask the AskWOLF plugin for anything you are looking for in the WMS content. It is meant to help you navigate the multitude of information provided by the WMS (applicable policies, processes & procedures, transversal & operations SP page links, …). This Worldline LibreChat plugin relies on ChatGPT technologies.
Worldline Management System (WMS) is the Group reference for all information pertaining to our operating model, such as applicable policies, processes, and governance structures. Key responsibilities are:
consistently address its customers’ and markets’ requirements across all its geographies
continuous improvement of customer satisfaction through effective application of WMS
correct interpretation of applicable ISO standards requirements
You can mix plugins to create more complex prompts. For example, you can use the DALL-E 3 plugin to generate images from text and then use the IT support plugin to get support from the IT team.
Generate the favicon 16x16 pixels based on the content found in
https://worldline.github.io/learning-ai/overview/ with Browser plugin
and generate the favicon with DallE. I want no background and black and white image
RAG is possible with LibreChat, letting you ground a conversation in your own files. To add files to the conversation, go to the file tab and select the file you want to add. The file is then added to the file manager and you can use it in the prompt.
The file can be a PNG, a video, a text file, or a PDF file.
Choose your favorite topic (cooking, travel, sports, etc.) and create an assistant that can answer questions about it. You can share documents, files, and instructions to configure your custom assistant and use it.
Be careful with offline prompting models downloaded from the internet: they can contain malicious code. Also, models can be very large, from a few GB to a few TB.
If you don't want to use the online AI providers, you can use offline prompting. This technique involves using a local LLM to generate responses to prompts. It is useful for developers who want to use a local LLM for offline prompting or for those who want to experiment with different LLMs without relying on online providers.
LM Studio is a tool that allows developers to experiment with different LLMs without relying on online providers. It provides a user-friendly interface for selecting and configuring LLMs, a chat interface for interacting with them, and features for fine-tuning and deploying models.
You can configure the model you want to use in the settings tab. You can select the model you want to use and configure it according to your needs.
Context Length: The context length is the number of tokens that will be used as context for the model. This is important because it determines how much information the model can use to generate a response. A longer context length will allow the model to generate more detailed and relevant responses, but it may also increase the computational cost of the model.
GPU Offload: This option allows you to offload the model to a GPU if available. This can significantly speed up the generation process, especially for longer prompts or complex models.
CPU Threads: This option allows you to specify the number of CPU threads to use for the model. This can be useful for controlling the computational resources used by the model.
Evaluation batch size: This option allows you to specify the batch size for evaluation. This is important for evaluating the performance of the model and can affect the speed and accuracy of the generation process.
RoPE Frequency base: This option sets the frequency base for RoPE (Rotary Position Embedding), which controls how positional information is encoded. Adjusting it is mainly used to extend a model's effective context length, and it can affect the quality of the generated responses.
RoPE Frequency scale: This option sets the frequency scale for RoPE (Rotary Position Embedding). Like the frequency base, it is used for context-length extension and can affect the quality of the generated responses.
Keep model in memory: This option allows you to keep the model in memory after the generation process is complete. This can be useful for generating multiple responses or for using the model for offline prompting.
Try mmap() for faster loading: This option allows you to try using mmap() for faster loading of the model. This can be useful for loading large models or for generating responses quickly.
Seed: This option allows you to specify a seed for the model. This can be useful for controlling the randomness of the generated responses.
Flash Attention: This option enables flash attention, a faster and more memory-efficient implementation of the attention computation. It can speed up generation, especially for long contexts, without changing the model's outputs.
You can use the APIs to generate responses from the models. To enable the API server with LM Studio, you need to set the API Server option to ON in the settings tab. You can then use the API endpoints to generate responses from the models.
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] Success! HTTP server listening on port 1234
2024-11-15 18:45:22 [INFO]
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] Supported endpoints:
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings
2024-11-15 18:45:22 [INFO]
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] Logs are saved into /Users/ibrahim/.cache/lm-studio/server-logs
2024-11-15 18:45:22 [INFO] Server started.
2024-11-15 18:45:22 [INFO] Just-in-time model loading active.
You can use the endpoints to generate responses from the models. The endpoints are as follows:
GET /v1/models: This endpoint returns a list of the available models.
POST /v1/chat/completions: This endpoint generates responses from the models using the chat format. The chat format is used for tasks such as chatbots, conversational AI, and language learning.
POST /v1/completions: This endpoint generates responses from the models using the completion format. Completion format is used for tasks such as question answering, summarization, and text generation.
POST /v1/embeddings: This endpoint generates embeddings from the models. Embeddings are used for tasks such as semantic search, sentiment analysis, and text classification.
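Since LM Studio's server is OpenAI-compatible, any HTTP client can call it. A minimal sketch in Python (the model name is a placeholder; use one returned by GET /v1/models):

```python
import requests

# Ask the local LM Studio server for a chat completion
response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder: pick a model listed by GET /v1/models
        "messages": [{"role": "user", "content": "Explain RAG in one sentence."}],
        "temperature": 0.7,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```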
You can use Google Colab for a simple-to-use notebook environment for machine learning and data science. It provides a container with all the necessary libraries and tools to run your code, plus a live editing interface through a browser.
A notebook is a document that contains live code, equations, visualizations, and narrative text. You can use Colab to create, share, and collaborate on Jupyter notebooks with others.
User interaction with Colab
You can store your API key safely in the userdata of your Colab environment. You can also upload files to your Colab environment as follows:
from google.colab import files
from google.colab import userdata  # For retrieving API keys

# 1. Upload the file to your current Colab environment
# (an upload button will appear when this cell is executed)
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))

# Get the API key from Colab userdata (left panel of Colab, key icon)
api_key = userdata.get('API_KEY')
LangChain is a framework for building applications powered by language models (LLMs) like OpenAI's GPT models. It provides a set of tools and utilities for working with LLMs, including prompt engineering, chain of thought, and memory management. LangChain is designed to be modular and extensible, allowing developers to easily integrate different LLMs and other AI services.
JSON mode is a feature that allows you to send structured data to the model through the API instead of a plain-text prompt. To use JSON mode, select the right endpoint in the API explorer and specify the input format as JSON in the request.
For the OpenAI API, you can use the following format:
{
  "model": "text-davinci-003",
  "prompt": "Translate the following text to French: 'Hello, how are you?'",
  "max_tokens": 100
}
curl-H"Authorization: Bearer <your_api_key>"-H"Content-Type: application/json"-d'{"model": "text-davinci-003", "prompt": "Translate the following text to French: 'Hello, how are you?'", "max_tokens": 100}' https://api.mistral.ai/v1/chat/completions
+
{
  "id": "chatcmpl-123456789",
  "object": "chat.completion",
  "created": 1679341456,
  "model": "text-davinci-003",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Bonjour, comment ça va?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
Structured outputs are a feature that allows you to receive structured data from the model through the API. It is useful when working with models that need to return machine-readable output, such as JSON.
To use structured outputs, select the right endpoint in the API explorer and specify the output format in the request (the exact parameter name varies by provider; for the OpenAI chat API it is response_format).
For the OpenAI API, you can use the following format:
{
  "model": "text-davinci-003",
  "prompt": "Translate the following text to French: 'Hello, how are you?'",
  "max_tokens": 100,
  "output": "json"
}
The structured output can be as follows:
{
  "model": "text-davinci-003",
  "prompt": "Translate the following text to French: 'Hello, how are you?'",
  "max_tokens": 100,
  "output": {
    "text": "Bonjour, comment ça va?"
  }
}
Create a Python application that generates humorous motivational quotes for developers based on their name, favorite programming language, and a brief description of their current project or challenge.
Library for making API calls
You can use requests for making API calls in Python.
Expected Output
Enter your name: Ibrahim
Enter your favorite programming language: kotlin
Enter your current project description: conference app with KMP

--- Motivational Quote ---
Quote: "Code like you just ate a burrito... with passion, speed, and a little bit of mess!"
Author: Unknown
--------------------------
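One way to start, using requests directly against the Mistral chat completions endpoint (the model name and prompt wording are illustrative):

```python
import requests

API_KEY = "your_api_key"  # replace with your actual Mistral API key

def get_developer_motivation(name, language, project_description):
    # Ask the model for a short, humorous motivational quote
    response = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "open-mistral-7b",
            "messages": [{
                "role": "user",
                "content": (
                    f"Write a short, humorous motivational quote for {name}, "
                    f"a developer who loves {language} and is working on "
                    f"{project_description}."
                ),
            }],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```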
Depending on the LLM, LangChain provides different APIs. Have a look at the following table here to see which APIs are available for your LLM.
Model Features

| Tool Calling | Structured Output | JSON Mode | Image Input | Audio Input | Video Input |
|--------------|-------------------|-----------|-------------|-------------|-------------|
| ✅           | ✅                | ✅        | ❌          | ❌          | ❌          |
To use LangChain with Mistral, you need to install the langchain_mistralai package and create a ChatMistralAI object.
from langchain_mistralai.chat_models import ChatMistralAI

# Define your API key and model
API_KEY = 'your_api_key'  # Replace with your actual Mistral API key
MISTRAL_API_URL = 'https://api.mistral.ai/v1/chat/completions'  # endpoint used under the hood
llm = ChatMistralAI(api_key=API_KEY, model="open-mistral-7b")
Prompt templating is a powerful feature that allows you to create dynamic prompts based on the input data. It enables you to generate prompts that are tailored to the specific requirements of your application.
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text", "language"],
    template="translate the following text to {language}: {text}",
)
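For example, the template can be rendered directly with concrete values (a minimal usage sketch):

```python
# Fill in the template's variables to produce the final prompt string
rendered = prompt.format(text="Hello, how are you?", language="French")
print(rendered)  # translate the following text to French: Hello, how are you?
```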
Chains refer to sequences of calls, whether to an LLM, a tool, or a data preprocessing step. A chain is a sequence of calls executed in order, with the output of one call being the input of the next. It enables you to create complex workflows by combining the output of one LLM call with the input of another. This is useful for tasks that require multiple steps or interactions with external systems.
# Chain the prompt template with the model using the pipe (LCEL) syntax
input_data = {
    "text": "Hello, how are you?",
    "language": "French"
}

chain = prompt | llm
response = chain.invoke(input_data)
print(response.content)
Multiple prompts can be chained together to create complex workflows.
Create a Python application that generates humorous motivational quotes for developers based on their name, favorite programming language, and a brief description of their current project or challenge.
Expected Output
Enter your name: Ibrahim
Enter your favorite programming language: kotlin
Enter your current project description: conference app with KMP

--- Motivational Quote ---
Quote: "Code like you just ate a burrito... with passion, speed, and a little bit of mess!"
Author: Unknown
--------------------------
Steps
Create a function get_developer_motivation(name, language, project_description) that:
Takes a developer's name, their favorite programming language, and a brief description of their current project or challenge as input.
Uses langchain to send a request to the LLM to generate a humorous motivational quote.
Returns a structured response containing the quote, the developer's name, the programming language, and the project description.
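A sketch of one possible solution with LangChain and the ChatMistralAI model shown earlier (the prompt wording and returned fields are illustrative):

```python
from langchain.prompts import PromptTemplate
from langchain_mistralai.chat_models import ChatMistralAI

llm = ChatMistralAI(api_key="your_api_key", model="open-mistral-7b")

quote_prompt = PromptTemplate(
    input_variables=["name", "language", "project_description"],
    template=(
        "Write a short, humorous motivational quote for {name}, a developer "
        "who loves {language} and is working on {project_description}."
    ),
)

def get_developer_motivation(name, language, project_description):
    chain = quote_prompt | llm
    quote = chain.invoke({
        "name": name,
        "language": language,
        "project_description": project_description,
    }).content
    # Structured response, as requested by the exercise
    return {
        "quote": quote,
        "name": name,
        "language": language,
        "project_description": project_description,
    }

print(get_developer_motivation("Ibrahim", "kotlin", "conference app with KMP")["quote"])
```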
Function/tool calling is a feature that allows the LLM to call existing functions from your code. It is useful for working with external functions, such as APIs, and with models that support function calls. Once a tool function is created, you can register it as a tool within LangChain so that it can be used by the LLM.
Build a command-line application that fetches weather data for a specified city using LangChain and a public weather API. The application will utilize implicit tool calling to allow the LLM to decide when to call the weather-fetching tool based on user input.
Ask about the weather (e.g., 'Lille, France'): Paris

------------------------------------------------------------------------------
The current weather in Paris is: overcast clouds with a temperature of 6.63°C.
------------------------------------------------------------------------------
Configuration
Sign up for an API key from a weather service provider (e.g., OpenWeatherMap).
Define a function fetch_weather(city: str) -> dict that takes a city name as input and returns the weather data as a dictionary. Use the weather API to fetch the data.
Register the Weather Tool
Use the Tool class from LangChain to register the fetch_weather function as a tool.
Set Up the LangChain Components
Create a prompt template that asks about the weather in a specified city.
Instantiate the ChatMistralAI model with your Mistral API key.
Create a chain that combines the prompt template, the chat model, and the registered weather tool.
Handle User Input
Implement a function handle_user_input(city) that invokes the chain with the user's question and prints the weather response, as sketched below.
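A sketch of the weather tool and the implicit tool-calling wiring, assuming OpenWeatherMap and the ChatMistralAI model from earlier (the API key, model, and prompt wording are placeholders; the @tool decorator is used here as an alternative to LangChain's Tool class):

```python
import requests
from langchain_core.tools import tool
from langchain_mistralai.chat_models import ChatMistralAI

OPENWEATHER_API_KEY = "your_openweathermap_api_key"  # placeholder

@tool
def fetch_weather(city: str) -> dict:
    """Fetch the current weather for a city from OpenWeatherMap."""
    response = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": OPENWEATHER_API_KEY, "units": "metric"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

llm = ChatMistralAI(api_key="your_api_key", model="open-mistral-7b")
# Bind the tool so the model can decide when to call it
llm_with_tools = llm.bind_tools([fetch_weather])

def handle_user_input(city):
    result = llm_with_tools.invoke(f"What is the current weather in {city}?")
    # result.tool_calls lists any tool invocations the model requested;
    # a full loop would execute the tool and pass the result back to the model
    print(result.tool_calls or result.content)
```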
llama-index is a powerful tool for building and deploying RAG (Retrieval Augmented Generation) applications. It provides a simple and efficient way to integrate LLMs into your applications, allowing you to retrieve relevant information from a large knowledge base and use it to generate responses. RAG is a technique that augments an LLM's generation with information retrieved from an external knowledge base.
Unstructured documents are a common source of information for RAG applications. These documents can be in various formats, such as text, PDF, HTML, or images. LlamaIndex provides tools for indexing and querying unstructured documents, enabling you to build powerful RAG applications that can retrieve information from a large corpus of documents.
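A minimal indexing-and-querying sketch (imports assume a recent llama-index release where modules live under llama_index.core, with API keys configured for the default LLM and embedding model):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load every readable file from the ./data folder
documents = SimpleDirectoryReader("data").load_data()

# Build an in-memory vector index and query it in natural language
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What topics do these documents cover?")
print(response)
```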
Structured Data is another common source of information for RAG applications. This data is typically stored in databases or spreadsheets and can be queried using SQL or other query languages. LlamaIndex provides tools for connecting LLMs to databases and querying structured data, allowing you to build RAG applications that can retrieve information from databases.
# The database library used in this example is SQLAlchemy
# (imports assume llama-index >= 0.10; `engine`, `llm`, and `embed_model`
# are assumed to be defined earlier)
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

sql_database = SQLDatabase(engine, include_tables=["books"])
query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["books"],
    llm=llm,
    embed_model=embed_model,
)

response = query_engine.query("Who wrote 'To Kill a Mockingbird'?")
print(response)
Create a Python application that loads a txt document containing a list of application reviews and performs sentiment analysis on it with llama-index.
Your customer review txt file:
Review 1: I was very disappointed with the product. It did not meet my expectations.
Review 2: The service was excellent! I highly recommend this company.
Review 3: I had a terrible experience. The product was faulty, and the customer support was unhelpful.
Review 4: I am extremely satisfied with my purchase. The quality is outstanding.
Expected Shell Output:
Saving customer_reviews.txt to customer_reviews (4).txt
User uploaded file "customer_reviews (4).txt" with length 338 bytes
The customers' experiences with the company and its products vary. Some have had positive experiences, such as excellent service and high-quality products, while others have encountered issues with faulty products and unhelpful customer support.
Create a Python application that initializes a list of languages and their creators with sqlalchemy and requests the LLM to retrieve the creators of a language. The LLM should be able to understand the context and retrieve the relevant information from the database.
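A possible starting point for seeding the database, to be combined with the NLSQLTableQueryEngine pattern shown above (table and column names are illustrative):

```python
from sqlalchemy import create_engine, text

# In-memory SQLite database seeded with a few languages and their creators
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE languages (name TEXT PRIMARY KEY, creator TEXT)"))
    conn.execute(text(
        "INSERT INTO languages (name, creator) VALUES "
        "('Python', 'Guido van Rossum'), "
        "('Kotlin', 'JetBrains'), "
        "('C', 'Dennis Ritchie')"))

# Then build SQLDatabase(engine, include_tables=["languages"]) and an
# NLSQLTableQueryEngine as in the books example, and ask e.g.
# query_engine.query("Who created Kotlin?")
```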
GCP is a suite of cloud computing services provided by Google. It includes a wide range of tools and services for building and consuming LLMs, such as Vertex AI and Google Colab.
Gemini: Google's large language model (LLM), positioned as a competitor to OpenAI's GPT models. Gemini's capabilities are integrated into various Google products and services, and are also accessible through APIs. Different versions of Gemini (e.g., Gemini Pro, Gemini Ultra) offer varying levels of capability and access. It powers several consumer-facing features across Google's ecosystem.
AI Studio: Cloud-based machine learning platform offered by several companies, most notably Google with its Google AI Studio (now Vertex AI Studio). It provides APIs for leading foundation models, and tools to rapidly prototype, easily tune models with your own data, and seamlessly deploy to applications.
This is the central hub for most Google Cloud's AI/ML services. It integrates and supersedes many previous offerings.
Custom Training: Training machine learning models using various algorithms and frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.). Provides access to managed compute instances (including TPUs).
Prediction: Deploying trained models for inference (making predictions). Offers different deployment options based on scale and latency requirements.
Pipelines: Creating and managing machine learning workflows, including data preprocessing, model training, evaluation, and deployment, as a series of connected steps.
Model Monitoring: Monitoring deployed models for performance degradation and potential issues (drift).
Feature Store: Centralized repository for storing, managing, and versioning features used in machine learning models, improving collaboration and reuse.
Pre-trained Models and APIs: Google offers numerous pre-trained models and APIs for various tasks, making it easier to integrate AI into applications without building models from scratch. Examples include:
Natural Language: Processing and understanding text (sentiment analysis, entity recognition, etc.).
Beyond the core platform and APIs, Google offers several specialized AI products:
TensorFlow: A popular open-source machine learning framework developed by Google. While not strictly a "Google Cloud" product, it's deeply integrated with their services.
Dialogflow: A conversational AI platform for building complex conversational experiences.
The platform where the machine learning community collaborates on models, datasets, and applications.
Hugging Face is a platform for researchers and developers to share, explore, and build AI models. It provides a centralized repository for models, datasets, and applications, making it easy to find, use, and contribute to the growing ecosystem of AI technologies.
You can create, deploy, and customize models. Pre-trained models can be used behind APIs, and the platform also covers the ML side: training and generating models for use.
MLflow provides tools for managing experiments, tracking model versions, deploying models to various environments, and managing models in a central registry. It's designed to be platform-agnostic, meaning it can work with many different cloud providers and even on-premises infrastructure.
We design payments technology that powers the growth of millions of businesses around the world. Engineering the next frontiers in payments technology
The AI should include support for providing guidance on using shell commands, navigating file systems, and executing command-line operations across different operating systems.
Proficiency in HTTP protocol, RESTful API concepts, and web service integration is crucial for the AI to provide support on API design, consumption, and troubleshooting common API-related issues.
Understanding of cloud computing principles, including basic concepts of cloud infrastructure, services, and deployment models, will enable the AI to offer guidance on cloud-based development, deployment, and best practices.
Large Language Model is a powerful type of AI model trained on massive datasets of text and code. LLMs can understand, generate, and manipulate language. Ex : ChatGPT, Bard, Codex
What are Large Language Models (LLMs)? by Google for developers
Video: https://www.youtube.com/embed/iR2O2GPbB0E
Multi-Modal Large Language Model is an advanced LLM that can process and generate both text and other data formats like images, audio, or video. Ex: DALL-E 2, Stable Diffusion (for image generation)
Machine Learning is a subset of AI that focuses on training algorithms to learn from data and make predictions or decisions without explicit programming. ML powers many AI applications, including image recognition, natural language processing, and predictive analytics.
Deep Learning is a type of ML that uses artificial neural networks with multiple layers to learn complex patterns from data. DL has revolutionized fields like computer vision, speech recognition, and machine translation.
A computational model inspired by the structure of the human brain, consisting of interconnected nodes (neurons) organized in layers. Neural networks are the core building blocks of deep learning models.
Natural Language Processing is a branch of Artificial Intelligence (AI) that focuses on enabling computers to understand, interpret, and generate human language. It involves:
Text Analysis
Language Understanding
Text Generation
Translation
Speech Recognition: Powers voice assistants and speech-to-text technologies
A specific set of instructions or questions given to an LLM to guide its response. Well-crafted prompts are crucial for getting accurate and relevant output from LLMs. Ex : "Write a Python function to check if a string is a palindrome."
The smallest unit of meaning processed by an LLM. Tokens can be words, parts of words, punctuation marks, or special characters. LLMs process text by analyzing sequences of tokens, making it important to understand how they are broken down. Ex : The sentence "I love programming" would be split into the following tokens: "I", "love", "programming".
A parameter in some LLMs that controls the randomness or creativity of the generated text. Adjust temperature based on the desired level of creativity or accuracy in the LLM's output.
A higher temperature generates more randomness and unpredictability in the output.
A lower temperature generates more predictable and coherent output.
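As an illustration, temperature is usually passed as a request parameter to OpenAI-compatible chat APIs (the endpoint and model below are placeholders):

```python
import requests

response = requests.post(
    "http://localhost:1234/v1/chat/completions",  # e.g. a local LM Studio server
    json={
        "model": "local-model",  # placeholder
        "messages": [{"role": "user", "content": "Name a color."}],
        "temperature": 0.2,  # low: predictable; raise toward 1.0+ for more variety
    },
)
print(response.json()["choices"][0]["message"]["content"])
```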
RAG (Retrieval Augmented Generation) is a powerful technique in the field of Natural Language Processing (NLP) that combines the best of both worlds: information retrieval and language generation.
The system first retrieves relevant information from a vast knowledge base (often a database or a set of documents) based on the user's query or prompt.
This retrieved information is then used to augment the language model's input, providing it with more context and specific facts.
Finally, the language model uses this augmented input to generate a more comprehensive and informative response, leveraging both its knowledge base and its language generation capabilities.
AI's history has been marked by periods of progress and setbacks. Computing power, data availability, and algorithmic advancements have played crucial roles in AI's evolution. AI is no longer limited to expert systems but encompasses a wide range of techniques and applications.
1950: Alan Turing proposes the "Turing Test" to assess machine intelligence.
During the Turing test, the human questioner asks a series of questions to both respondents. After the specified time, the questioner tries to decide which terminal is operated by the human respondent and which is operated by the computer.
1956: Dartmouth Conference establishes the field of "Artificial Intelligence".
1959: Arthur Samuel develops a checkers-playing program that learns and improves over time.
1960s: Research focused on logic-based reasoning and early expert systems.
1972: The first expert system, DENDRAL, is developed for identifying organic molecules.
1980s-1990s: Development of new techniques like machine learning and neural networks.
1997: Deep Blue, a chess-playing computer, defeats Garry Kasparov, the world chess champion.
1990s-2000s: Advances in computing power, data availability, and algorithms as fuel for AI progress.
2010s: Deep learning revolutionizes AI with breakthroughs in image recognition, speech recognition, and natural language processing.
2011: Watson, an IBM supercomputer, wins Jeopardy! against human champions.
2016: AlphaGo, a program developed by Google DeepMind, defeats Go champion Lee Sedol.
2022: First release of ChatGPT. AI continues to evolve rapidly, with advancements in areas like autonomous vehicles, robotics, and personalized medicine.
Mermaid is a great tool to create diagrams in markdown files online. With Copilot, you can generate a diagram based on the code you write, or chat with Copilot to generate the diagram.
Generate a mermaid architecture diagram for a simple retail app with product catalog, product details, buying tunnel, loyalty. Be detailed. I want to make this app with express node JS Rest API server, please provide the diagram for the server side. Suggest technical stack on the diagram, name the component variables. Parentheses are not supported by mermaid.
graph TD;
    A[Client Application] -->|HTTP Requests| B[Express.js REST API Server]
    B -->|Fetch Products| C[Product Controller - productController]
    B -->|Fetch Product Details| D[Product Details Controller - productDetailsController]
    B -->|Handle Cart Operations| E[Cart Controller - cartController]
    B -->|Process Orders| F[Order Controller - orderController]
    B -->|Manage User Accounts| G[User Controller - userController]
    B -->|Handle Loyalty Program| H[Loyalty Controller - loyaltyController]

    subgraph Database
        I[MongoDB - mongoose]
    end

    subgraph Middleware
        J[Authentication Middleware - passport]
        K[Error Handling Middleware - errorHandler]
        L[Logging Middleware - morgan]
    end

    B -->|Connects to| I
    B -->|Uses| J
    B -->|Uses| K
    B -->|Uses| L

    C -->|CRUD Operations| I
    D -->|CRUD Operations| I
    E -->|CRUD Operations| I
    F -->|CRUD Operations| I
    G -->|CRUD Operations| I
    H -->|CRUD Operations| I
Open GitHub Copilot Chat by clicking on the Copilot icon in the bottom right corner of your VSCode.
Ask Copilot to generate unit tests for the index.js file. You can also try the /setupTests command.
Copilot may make several suggestions: choosing a testing framework, adding a test command to package.json, installing new dependencies. Accept all its suggestions.
Try to run the generated tests. In case of trouble, use Copilot Chat to ask for help.
Solution
Here we decided to go with the supertest framework.
Here is an example of how Copilot can help you fix a failing test:
Now we are going to use Copilot to refactor a piece of code in the same project.
Open the index.js file in the project
Ask Copilot to add a feature in the GET /movies endpoint that allows filtering movies by director, based on a director query parameter.
Copilot will generate the code for you. Try to understand the changes it made and run the project to test the new feature.
Ask Copilot to complete the unit test in index.test.js to test getting movies filtered by director. It should generate more unit tests that check against one of the directors in the example data.
Now we're going to refactor the code to extract the filtering logic into a separate function. Select the parts of the code with the .find() and .filter() function calls and ask Copilot to extract them into a new function. Let Copilot suggest a name for these functions.
Under the previous generated function, type function filterMoviesByYear(. Wait for Copilot to suggest you the rest of the function signature and function body. Accept the suggestion using the Tab key.
Ask Copilot again to allow filtering movies by a year query parameter. Copilot should use the filterMoviesByYear function you just created to implement this feature.
Open index.test.js. In the GET /movies test block, add a new assertion block by typing it('should return movies filtered by year',. Wait for Copilot to suggest you the rest of the tests. Review code to make sure it uses the ?year query parameter and checks correctly a date from the example data.
Run the tests to make sure everything is working as expected. Use Copilot to ask for help if needed.
GitHub Spark is an AI-powered tool for creating and sharing micro apps (“sparks”), which can be tailored to your exact needs and preferences and are directly usable from your desktop and mobile devices, without needing to write or deploy any code.
It enables this through a combination of three tightly integrated components:
An NL-based editor, which allows easily describing your ideas, and then refining them over time
A managed runtime environment, which hosts your sparks, and provides them access to data storage, theming, and LLMs
A PWA-enabled dashboard, which lets you manage and launch your sparks from anywhere
Prompt engineering involves the design and creation of prompts that are used to elicit specific responses or actions from AI models or interactive systems. These prompts are carefully crafted to guide the behavior or generate particular outputs from the AI, such as generating natural language responses, providing recommendations, or completing specific tasks.
In the context of AI language models, prompt engineering is especially important for shaping the model's behavior and output. By designing prompts effectively, engineers can influence the model's responses and ensure that it generates coherent, relevant, and accurate content.
There are four main areas to consider when writing an effective prompt. You don’t need to use all four, but using a few will help!
Persona: Who is the user you're writing for? What are their skills and knowledge?
Task: What specific action do you want the user to perform?
Context: What information does the user need to know to complete the task?
Format: What is the desired output of the task?
Example Prompt:
[Persona] You are a Google Cloud program manager.
[Task] Draft an executive summary email
[Context] to [person description] based on [details about relevant program docs].
[Format] Limit to bullet points.
By using "act as," you are establishing a specific context for the language model and guiding it to understand the type of task or request you are making. This helps to set the right expectations and provides the language model with the necessary context to generate a response tailored to the defined role.
"Act as a creative writing assistant and generate a short story based
+on a prompt about a futuristic world where robots have become sentient."
+
Introduced in Wei et al. (2022), chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding. Prompting Guide with CoT
Yao et al., 2022 introduced a framework named ReAct where LLMs are used to generate both reasoning traces and task-specific actions in an interleaved manner.
Generating reasoning traces allows the model to induce, track, and update action plans, and even handle exceptions. The action step allows the model to interface with and gather information from external sources such as knowledge bases or environments.
The ReAct framework can allow LLMs to interact with external tools to retrieve additional information that leads to more reliable and factual responses. Prompting Guide with ReAct
Summary is a prompt engineering technique that involves asking the model for a summary of a given document or text. It helps with summarizing changelogs, articles, or other technical documents.
Help me write an article of this document [Insert or copy paste document text]
Generate 5 titles out of the following topic….
Generate a subtitle to catch readers’ attention on the following topic [describe the topic]
Write is a prompt engineering technique that involves asking the model for a step-by-step guide or instructions for a given task or process. It's useful for developers to create functional and technical documentation.
Create a template of an email response to customer inquiring about ….
Create a guide that explains how to use ….
Write step by step instructions
Code explanation is a prompt engineering technique that involves providing a detailed explanation of a code snippet or function. This technique is useful for developers who want to understand the inner workings of a codebase or for those who want to document their code.
cf. Preformatted prompts for an example of code explanation
Create a function that calculates the factorial of a number.
Handle both positive integers and zero, with error handling for negative inputs.
Expected Output (python)
def factorial(n):
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    if n == 0:
        return 1
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result
Solutions
Persona: Python Developer. Task: Create a function. Context: You need to calculate the factorial of a number.
As a Python Developer, create a function named factorial that takes a single integer input and returns its factorial. The function should handle both positive integers and zero. Include error handling for negative inputs.
Persona: JavaScript Developer. Task: Write a function to handle API requests. Context: You need to fetch data from a given URL.
As a JavaScript Developer, write a function named fetchData that takes a URL as an argument and fetches data from that URL using the Fetch API. The function should return the JSON response and handle any errors that may occur during the fetch operation.
Persona: C# Developer. Task: Define a class. Context: You are creating a representation of a book.
As a C# Developer, create a class named Book that has properties for Title, Author, and PublicationYear. Include a method named DisplayDetails that prints the book's details in a formatted string.
Persona: Ruby Developer. Task: Write a validation method. Context: You need to validate email addresses.
As a Ruby Developer, write a method named valid_email? that takes a string as input and returns true if it is a valid email address, and false otherwise. Use a regular expression for validation.
Code completion is a prompt engineering technique that involves providing a list of possible completions for a given code snippet or function. This technique is useful for developers who want to suggest possible code changes or improvements based on their existing code.
Code conversion is a prompt engineering technique that involves providing a conversion of a code snippet or function from one programming language to another. This technique is useful for developers who want to migrate their code from one language to another or for those who want to use a different programming language for their projects.
Code review is a prompt engineering technique that involves asking for a review of a given code snippet or function. This technique is useful for developers who want to review their code for potential issues or bugs, or for those who want feedback on their code.
Code fixing is a prompt engineering technique that involves providing a code fix for a given code snippet or function. This technique is useful for developers who want to fix bugs or issues in their code or for those who want to improve the quality of their code.
Help me find mistakes in my code [insert your code]
Explain what this snippet of code does [insert code snippet]
What is the correct syntax for a [statement or function] in [programming language]
How do I fix the following [programming language] code which explains the functioning [Insert code snippet]
Code refactor is a prompt engineering technique that involves providing a code refactoring of a given code snippet or function within a specific scope. This technique is useful for developers who want to refactor their code within a specific context or for those who want to improve the readability and maintainability of their code.
Mock data generation is a prompt engineering technique that involves asking for a mock data set for a given code snippet or function. This technique is useful for developers who want to test their code with mock data or generate test data for their projects. It avoids manually creating fake data for testing.
Create prompts that can generate mock user profiles. The language used is JavaScript.
The profile should include:
Name
Age
Email
Address (Street, City, State, Zip Code)
Phone Number
Solutions
Mock Data Generation
As a JavaScript Developer, write a function named generateUserProfile that generates a mock user profile with the following details: name, age, email, address, and phone number. The function should return an object containing the user profile details.
Testing is a prompt engineering technique that involves providing a test case for a given code snippet or function. This technique is useful for developers who want to test their code or for those who want to ensure the correctness of their code.
System design and architecture is a prompt engineering technique that involves providing a system design or architecture for a given code snippet or function. This technique is useful for developers who want to design their code or for those who want to understand the overall architecture of their projects.
Documentation generation is a prompt engineering technique that involves asking for documentation for a given code snippet or function. This technique is useful for developers who want to document their code or provide documentation for their projects. It can be used to generate documentation in various formats such as Markdown, HTML, or PDF.
Commit message generation is a prompt engineering technique that involves providing a commit message for a given code snippet or function. This technique is useful for developers who want to generate commit messages for their code or for those who want to ensure that their commit messages are clear and concise.
Vulnerability checking is a prompt engineering technique that involves providing a vulnerability check for a given code snippet or function. This technique is useful for developers who want to check for vulnerabilities in their code or for those who want to ensure that their code is secure.
Warning
This prompt is not recommended for production use. It is intended for testing and debugging purposes only and is not a proof of security or safety of your app.
You can understand complex regular expressions and generate ones that match specific patterns in text. This technique is useful for developers who want to write complex regular expressions or for those who want to understand the syntax of regular expressions.
Explain this regular expression in JavaScript:
const regex = /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/;
We design payments technology that powers the growth of millions of businesses around the world. Engineering the next frontiers in payments technology
Wolf is a LibreChat plugin for WL Management System documents. The SharePoint documentation is available here
Ask WOLF, the Worldline Management System Friend, about anything you are looking for in the WMS content. The AskWOLF plugin is meant to help you navigate the multitude of information provided by the WMS (applicable policies, processes & procedures, transversal & operations SharePoint page links, …). This Worldline LibreChat plugin relies on ChatGPT technologies.
The Worldline Management System (WMS) is the Group reference for all information pertaining to our operating model, such as applicable policies, processes, and governance structures. Its key responsibilities are to:
consistently address customers' and markets' requirements across all geographies
continuously improve customer satisfaction through effective application of the WMS
correctly interpret applicable ISO standards requirements
You can mix plugins to create more complex prompts. For example, you can use the DALL-E 3 plugin to generate images from text and then use the IT support plugin to get support from the IT team.
Generate the favicon 16x16 pixels based on the content found in
https://worldline.github.io/learning-ai/overview/ with Browser plugin
and generate the favicon with DallE. I want no background and black and white image
RAG is possible with LibreChat, letting you ground a conversation with the AI in your own files. To add files to the conversation, go to the file tab and select the file you want to add. The file is then added to the file manager and can be used in your prompt.
The file can be a PNG image, a video, a text file, or a PDF file.
Choose your favorite topic (cooking, travel, sports, etc.) and create an assistant that can answer questions about it. You can share documents, files, and instructions to configure your custom assistant and use it.
Be careful with models downloaded from the internet for offline prompting: they can contain malicious code, and model files can be very large, from a few GB to a few TB.
If you don't want to use the online AI providers, you can use offline prompting. This technique involves using a local LLM to generate responses to prompts. It is useful for developers who want to keep their data local or experiment with different LLMs without relying on online providers.
LM Studio is a tool that allows developers to experiment with different LLMs without relying on online providers. It provides a user-friendly interface for selecting and configuring LLMs, a chat interface for interacting with them, and features for fine-tuning and deploying LLMs.
You can configure the model you want to use in the settings tab. You can select the model you want to use and configure it according to your needs.
Context Length: The context length is the number of tokens that will be used as context for the model. This is important because it determines how much information the model can use to generate a response. A longer context length will allow the model to generate more detailed and relevant responses, but it may also increase the computational cost of the model.
GPU Offload: This option allows you to offload the model to a GPU if available. This can significantly speed up the generation process, especially for longer prompts or complex models.
CPU Threads: This option allows you to specify the number of CPU threads to use for the model. This can be useful for controlling the computational resources used by the model.
Evaluation batch size: The number of prompt tokens processed together in one batch. Larger batches can speed up prompt processing at the cost of more memory.
RoPE Frequency base: The frequency base for RoPE (Rotary Position Embedding). Adjusting it changes how token positions are encoded and is mainly used to stretch a model's usable context window beyond its training length.
RoPE Frequency scale: The frequency scale for RoPE (Rotary Position Embedding). Like the frequency base, it is used for context-length extension and can affect the quality of the generated responses.
Keep model in memory: This option allows you to keep the model in memory after the generation process is complete. This can be useful for generating multiple responses or for using the model for offline prompting.
Try mmap() for faster loading: Memory-maps the model file instead of reading it fully into memory, which can speed up loading, especially for large models.
Seed: This option allows you to specify a seed for the model. This can be useful for controlling the randomness of the generated responses.
Flash Attention: Enables a faster, more memory-efficient attention implementation. It can speed up generation, especially for long contexts, but support depends on the model and hardware.
You can use the APIs to generate responses from the models. To enable the API server with LM Studio, you need to set the API Server option to ON in the settings tab. You can then use the API endpoints to generate responses from the models.
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] Success! HTTP server listening on port 1234
2024-11-15 18:45:22 [INFO]
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] Supported endpoints:
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings
2024-11-15 18:45:22 [INFO]
2024-11-15 18:45:22 [INFO][LM STUDIO SERVER] Logs are saved into /Users/ibrahim/.cache/lm-studio/server-logs
2024-11-15 18:45:22 [INFO] Server started.
2024-11-15 18:45:22 [INFO] Just-in-time model loading active.
You can use the endpoints to generate responses from the models. The endpoints are as follows:
GET /v1/models: This endpoint returns a list of the available models.
POST /v1/chat/completions: This endpoint generates responses from the models using the chat format. The chat format is used for tasks such as chatbots, conversational AI, and language learning.
POST /v1/completions: This endpoint generates responses from the models using the completion format. Completion format is used for tasks such as question answering, summarization, and text generation.
POST /v1/embeddings: This endpoint generates embeddings from the models. Embeddings are used for tasks such as sentiment analysis, text classification, and language translation.
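Since LM Studio exposes an OpenAI-compatible API, you can call these endpoints with plain HTTP. A minimal sketch in Python with requests (the model identifier below is an assumption; use one of the ids returned by GET /v1/models):

import requests

# Ask the local LM Studio server for a chat completion
response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # assumption: replace with a model id from /v1/models
        "messages": [{"role": "user", "content": "Say hello in French."}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])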
The AI should include support for providing guidance on using shell commands, navigating file systems, and executing command-line operations across different operating systems.
Proficiency in the HTTP protocol, RESTful API concepts, and web service integration is crucial for the AI to provide support on API design, consumption, and troubleshooting of common API-related issues.
An understanding of cloud computing principles, including basic concepts of cloud infrastructure, services, and deployment models, will enable the AI to offer guidance on cloud-based development, deployment, and best practices.
A Large Language Model is a powerful type of AI model trained on massive datasets of text and code. LLMs can understand, generate, and manipulate language. Ex: ChatGPT, Bard, Codex
What are Large Language Models (LLMs)? by Google for developers
A Multi-Modal Large Language Model is an advanced LLM that can process and generate both text and other data formats like images, audio, or video. Ex: DALL-E 2, Stable Diffusion (for image generation)
Machine Learning is a subset of AI that focuses on training algorithms to learn from data and make predictions or decisions without explicit programming. ML powers many AI applications, including image recognition, natural language processing, and predictive analytics.
Deep Learning is a type of ML that uses artificial neural networks with multiple layers to learn complex patterns from data. DL has revolutionized fields like computer vision, speech recognition, and machine translation.
A neural network is a computational model inspired by the structure of the human brain, consisting of interconnected nodes (neurons) organized in layers. Neural networks are the core building blocks of deep learning models.
Natural Language Processing is a branch of Artificial Intelligence (AI) that focuses on enabling computers to understand, interpret, and generate human language. It involves:
Text Analysis
Language Understanding
Text Generation
Translation
Speech Recognition: Powers voice assistants and speech-to-text technologies
A prompt is a specific set of instructions or questions given to an LLM to guide its response. Well-crafted prompts are crucial for getting accurate and relevant output from LLMs. Ex: "Write a Python function to check if a string is a palindrome."
A token is the smallest unit of meaning processed by an LLM. Tokens can be words, parts of words, punctuation marks, or special characters. LLMs process text by analyzing sequences of tokens, making it important to understand how text is broken down. Ex: The sentence "I love programming" would be split into the following tokens: "I", "love", "programming".
Temperature is a parameter in some LLMs that controls the randomness or creativity of the generated text. Adjust temperature based on the desired level of creativity or accuracy in the output.
A higher temperature generates more randomness and unpredictability in the output.
A lower temperature generates more predictable and coherent output.
RAG (Retrieval Augmented Generation) is a powerful technique in the field of Natural Language Processing (NLP) that combines the best of both worlds: information retrieval and language generation.
The system first retrieves relevant information from a vast knowledge base (often a database or a set of documents) based on the user's query or prompt.
This retrieved information is then used to augment the language model's input, providing it with more context and specific facts.
Finally, the language model uses this augmented input to generate a more comprehensive and informative response, leveraging both its knowledge base and its language generation capabilities.
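To make the flow concrete, here is a deliberately tiny, self-contained sketch of the retrieve-then-augment steps (the documents and the naive word-overlap retrieval are toy stand-ins; a real system would use a vector database for retrieval and an LLM call for the final generation):

documents = [
    "LibreChat supports RAG conversations with uploaded files.",
    "LM Studio can run local LLMs and expose an OpenAI-compatible API.",
]

def retrieve(query):
    # Toy retrieval: keep documents sharing at least one word with the query
    words = set(query.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def build_augmented_prompt(query):
    # Augment the user's question with the retrieved context
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The augmented prompt is then sent to the LLM for generation
print(build_augmented_prompt("What can LM Studio run?"))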
AI's history has been marked by periods of progress and setbacks. Computing power, data availability, and algorithmic advancements have played crucial roles in AI's evolution. AI is no longer limited to expert systems but encompasses a wide range of techniques and applications.
1950: Alan Turing proposes the "Turing Test" to assess machine intelligence.
During the Turing test, a human questioner asks a series of questions to both respondents. After a specified time, the questioner tries to decide which terminal is operated by the human respondent and which is operated by the computer.
1956: Dartmouth Conference establishes the field of "Artificial Intelligence".
1959: Arthur Samuel develops a checkers-playing program that learns and improves over time.
1960s: Research focused on logic-based reasoning and early expert systems.
1972: The first expert system, DENDRAL, is developed for identifying organic molecules.
1980s-1990s: Development of new techniques like machine learning and neural networks.
1997: Deep Blue, a chess-playing computer, defeats Garry Kasparov, the world chess champion.
1990s-2000s: Advances in computing power, data availability, and algorithms fuel AI progress.
2010s: Deep learning revolutionizes AI with breakthroughs in image recognition, speech recognition, and natural language processing.
2011: Watson, an IBM supercomputer, wins Jeopardy! against human champions.
2016: AlphaGo, a program developed by Google DeepMind, defeats Go champion Lee Sedol.
2022: First release of ChatGPT. AI continues to evolve rapidly, with advancements in areas like autonomous vehicles, robotics, and personalized medicine.
You can use Google Colab for a simple-to-use notebook environment for machine learning and data science. It provides a container with all the necessary libraries and tools to run your code, with a live editing interface in the browser.
A notebook is a document that contains live code, equations, visualizations, and narrative text. You can use Colab to create, share, and collaborate on Jupyter notebooks with others.
User interaction with Colab
You can store your API key safely in the userdata of your Colab environment. You can also upload files to your Colab environment as follows:
from google.colab import files
from google.colab import userdata  # For retrieving API keys

# 1. Upload the file to your current Colab environment (an upload button appears when the code runs)
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))

# 2. Get the API key from Colab userdata (left panel of Colab, the key icon)
api_key = userdata.get('API_KEY')
Langchain is a framework for building applications powered by language models (LLMs) like OpenAI's GPT-3. It provides a set of tools and utilities for working with LLMs, including prompt engineering, chain of thought, and memory management. Langchain is designed to be modular and extensible, allowing developers to easily integrate with different LLMs and other AI services.
JSON mode is a feature that lets you send structured data to the model through the API instead of a plain text prompt. To use JSON mode, you need to select the right endpoint in the API explorer and send your request body as JSON.
For the OpenAI API, you can use the following format:

{
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Translate the following text to French: 'Hello, how are you?'"}
    ],
    "max_tokens": 100
}

curl -H "Authorization: Bearer <your_api_key>" -H "Content-Type: application/json" -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Translate the following text to French: Hello, how are you?"}], "max_tokens": 100}' https://api.openai.com/v1/chat/completions

The response looks like this:

{
    "id": "chatcmpl-123456789",
    "object": "chat.completion",
    "created": 1679341456,
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Bonjour, comment ça va?"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 5,
        "completion_tokens": 7,
        "total_tokens": 12
    }
}
Structured outputs are a feature that allows you to receive structured data from the model through the API. It is useful for working with models that require structured outputs, such as JSON.
To use structured outputs, you need to select the right endpoint in the API explorer and specify the desired output format in the request.
For the OpenAI API, you can use the following format:

{
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Translate the following text to French: 'Hello, how are you?' Respond in JSON."}
    ],
    "max_tokens": 100,
    "response_format": {"type": "json_object"}
}
The structured output can then be as follows:

{
    "text": "Bonjour, comment ça va?"
}
Create a Python application that generates humorous motivational quotes for developers based on their name, favorite programming language, and a brief description of their current project or challenge.
Library for making API calls
You can use requests for making API calls in Python.
Expected Output
Enter your name: Ibrahim
Enter your favorite programming language: kotlin
Enter your current project description: conference app with KMP

--- Motivational Quote ---
Quote: "Code like you just ate a burrito... with passion, speed, and a little bit of mess!"
Author: Unknown
--------------------------
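As a hint, a minimal sketch of the API call with requests (the endpoint and model follow the Mistral examples used later in this guide; adapt them to your provider):

import requests

API_KEY = "your_api_key"  # replace with your actual API key
URL = "https://api.mistral.ai/v1/chat/completions"

def get_motivation(name, language, project):
    # Build a chat completion request asking for a humorous quote
    payload = {
        "model": "open-mistral-7b",
        "messages": [{
            "role": "user",
            "content": f"Write a short, humorous motivational quote for {name}, "
                       f"a {language} developer working on {project}.",
        }],
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    response = requests.post(URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(get_motivation("Ibrahim", "kotlin", "conference app with KMP"))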
Depending on the LLM, langchain provides different APIs. Have a look at the table here to see which APIs are available for your LLM.
Model Features:
Tool Calling: ✅
Structured Output: ✅
JSON Mode: ✅
Image Input: ❌
Audio Input: ❌
Video Input: ❌
To use langchain with Mistral, you need to install the langchain_mistralai package and create a ChatMistralAI object.
from langchain_mistralai.chat_models import ChatMistralAI

# Define your API key and model
API_KEY = 'your_api_key'  # Replace with your actual Mistral API key
MISTRAL_API_URL = 'https://api.mistral.ai/v1/chat/completions'  # Endpoint used by the client, shown for reference
llm = ChatMistralAI(api_key=API_KEY, model="open-mistral-7b")
Prompt templating is a powerful feature that allows you to create dynamic prompts based on the input data. It enables you to generate prompts that are tailored to the specific requirements of your application.
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text", "language"],
    template="Translate the following text to {language}: {text}",
)
Chains refer to sequences of calls (to an LLM, a tool, or a data preprocessing step) executed in order, with the output of one call becoming the input of the next. This lets you create complex workflows by combining the output of one LLM call with the input of another, which is useful for tasks that require multiple steps or interactions with external systems.
input_data = {
    "text": "Hello, how are you?",
    "language": "French"
}

# Compose the prompt template and the model into a chain (the | pipe syntax)
chain = prompt | llm
response = chain.invoke(input_data)
Multiple prompts can be chained together to create complex workflows, as in the sketch below.
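For instance, a minimal two-step sketch, reusing the prompt-and-llm setup from above (the second template and the glue lambda are illustrative):

from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

translate = PromptTemplate(
    input_variables=["text", "language"],
    template="Translate the following text to {language}: {text}",
)
summarize = PromptTemplate(
    input_variables=["text"],
    template="Summarize in five words: {text}",
)

# The output of the first LLM call becomes the input of the second
chain = (
    translate
    | llm
    | StrOutputParser()
    | (lambda translated: {"text": translated})
    | summarize
    | llm
)
print(chain.invoke({"text": "Hello, how are you?", "language": "French"}).content)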
Create a Python application that generates humorous motivational quotes for developers based on their name, favorite programming language, and a brief description of their current project or challenge.
Expected Output
Enter your name: Ibrahim
Enter your favorite programming language: kotlin
Enter your current project description: conference app with KMP

--- Motivational Quote ---
Quote: "Code like you just ate a burrito... with passion, speed, and a little bit of mess!"
Author: Unknown
--------------------------
Steps
Create a function get_developer_motivation(name, language, project_description) that:
Takes a developer's name, their favorite programming language, and a brief description of their current project or challenge as input.
Uses langchain to send a request to the LLM to generate a humorous motivational quote.
Returns a structured response containing the quote, the developer's name, the programming language, and the project description.
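A minimal sketch of such a function, reusing the ChatMistralAI setup from above (the prompt wording and the return shape are illustrative):

from langchain.prompts import PromptTemplate
from langchain_mistralai.chat_models import ChatMistralAI

llm = ChatMistralAI(api_key="your_api_key", model="open-mistral-7b")

motivation_prompt = PromptTemplate(
    input_variables=["name", "language", "project_description"],
    template=(
        "Write a short, humorous motivational quote for {name}, "
        "a {language} developer working on {project_description}."
    ),
)

def get_developer_motivation(name, language, project_description):
    # Chain the prompt template into the model and extract the text reply
    chain = motivation_prompt | llm
    quote = chain.invoke({
        "name": name,
        "language": language,
        "project_description": project_description,
    }).content
    return {
        "quote": quote,
        "name": name,
        "language": language,
        "project_description": project_description,
    }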
Function/tool calling is a feature that allows the LLM to call existing functions from your code. It is useful for working with external systems such as APIs. Once a tool function is created, you can register it as a tool within LangChain so the LLM can use it. See the sketch below.
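A minimal sketch of declaring and binding a tool, assuming the llm object from the ChatMistralAI example above (get_server_status is a made-up illustration):

from langchain_core.tools import tool

@tool
def get_server_status(service_name: str) -> str:
    """Return the current status of the given service."""
    # Made-up stub: a real tool would query monitoring, a database, an API, ...
    return f"{service_name} is up and running"

# Bind the tool so the LLM can decide when to call it
llm_with_tools = llm.bind_tools([get_server_status])
response = llm_with_tools.invoke("Is the payment service up?")
print(response.tool_calls)  # the tool invocations the model requested, if any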
Build a command-line application that fetches weather data for a specified city using LangChain and a public weather API. The application will utilize implicit tool calling to allow the LLM to decide when to call the weather-fetching tool based on user input.
Ask about the weather (e.g., 'Lille, France'): Paris

------------------------------------------------------------------------------
The current weather in Paris is: overcast clouds with a temperature of 6.63°C.
------------------------------------------------------------------------------
Configuration
Sign up for an API key from a weather service provider (e.g., OpenWeatherMap).
Define a function fetch_weather(city: str) -> dict that takes a city name as input and returns the weather data as a dictionary. Use the weather API to fetch the data.
Register the Weather Tool
Use the Tool class from LangChain to register the fetch_weather function as a tool.
Set Up the LangChain Components
Create a prompt template that asks about the weather in a specified city.
Instantiate the ChatMistralAI model with your Mistral API key.
Create a chain that combines the prompt template, the chat model, and the registered weather tool.
Handle User Input
Implement a function handle_user_input(city) that runs the chain on the user's input, lets the LLM decide whether to call the weather tool, and prints the formatted weather report.
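As a starting point, here is a sketch of the weather tool itself (the OpenWeatherMap endpoint is real, but the API key placeholder and the tool description are illustrative):

import requests
from langchain_core.tools import Tool

def fetch_weather(city: str) -> dict:
    """Fetch current weather data for a city from OpenWeatherMap."""
    url = "https://api.openweathermap.org/data/2.5/weather"
    params = {"q": city, "appid": "your_openweathermap_key", "units": "metric"}
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    return response.json()

# Register the function as a LangChain tool the LLM can call
weather_tool = Tool(
    name="fetch_weather",
    func=fetch_weather,
    description="Gets the current weather for a given city name.",
)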
LlamaIndex is a powerful tool for building and deploying RAG (Retrieval Augmented Generation) applications. It provides a simple and efficient way to integrate LLMs into your applications, allowing you to retrieve relevant information from a large knowledge base and use it to generate responses. RAG augments what the LLM generates with knowledge retrieved from your own data.
Unstructured documents are a common source of information for RAG applications. These documents can be in various formats, such as text, PDF, HTML, or images. LlamaIndex provides tools for indexing and querying unstructured documents, enabling you to build powerful RAG applications that can retrieve information from a large corpus of documents.
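A minimal sketch of indexing and querying a folder of documents (the "data" folder name and the question are illustrative, and a configured LLM/embedding model is assumed):

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every readable document from a local folder
documents = SimpleDirectoryReader("data").load_data()

# Build a vector index over the documents and query it in natural language
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What topics do these documents cover?"))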
Structured Data is another common source of information for RAG applications. This data is typically stored in databases or spreadsheets and can be queried using SQL or other query languages. LlamaIndex provides tools for connecting LLMs to databases and querying structured data, allowing you to build RAG applications that can retrieve information from databases.
# The database library used in this example is SQLAlchemy
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

engine = create_engine("sqlite:///books.db")  # assumes an existing database with a "books" table
sql_database = SQLDatabase(engine, include_tables=["books"])
query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["books"],
    llm=llm,                  # your configured LLM
    embed_model=embed_model,  # your configured embedding model
)

response = query_engine.query("Who wrote 'To Kill a Mockingbird'?")
print(response)
Create a Python application that loads a txt document containing a list of application comments and performs sentiment analysis on it with llama-index.
Your customer review txt file:
Review 1: I was very disappointed with the product. It did not meet my expectations.
Review 2: The service was excellent! I highly recommend this company.
Review 3: I had a terrible experience. The product was faulty, and the customer support was unhelpful.
Review 4: I am extremely satisfied with my purchase. The quality is outstanding.
Expected Shell Output:
Saving customer_reviews.txt to customer_reviews (4).txt
User uploaded file "customer_reviews (4).txt" with length 338 bytes
The customers' experiences with the company and its products vary. Some have had positive experiences, such as excellent service and high-quality products, while others have encountered issues with faulty products and unhelpful customer support.
Create a Python application that initializes a database of languages and their creators with sqlalchemy, then asks the LLM to retrieve the creator of a given language. The LLM should understand the context and retrieve the relevant information from the database. A seeding sketch follows.
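To seed such a database, a sketch with sqlalchemy (the table name and rows are illustrative; you can then point an NLSQLTableQueryEngine, as shown above, at the resulting engine):

from sqlalchemy import create_engine, text

# In-memory SQLite database seeded with a few languages and their creators
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE languages (name TEXT, creator TEXT)"))
    conn.execute(text(
        "INSERT INTO languages VALUES "
        "('Python', 'Guido van Rossum'), ('Kotlin', 'JetBrains'), ('C', 'Dennis Ritchie')"
    ))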
GCP is a suite of cloud computing services provided by Google. It includes a wide range of tools and services for building and consuming LLMs, such as Vertex AI, Google Colab, and ML Flow.
Gemini: Google's large language model (LLM), positioned as a competitor to OpenAI's GPT models. Gemini's capabilities are integrated into various Google products and services, and are also accessible through APIs. Different versions of Gemini (e.g., Gemini Pro, Gemini Ultra) offer varying levels of capability and access. It powers several consumer-facing features across Google's ecosystem.
AI Studio: Cloud-based machine learning platform offered by several companies, most notably Google with its Google AI Studio (now Vertex AI Studio). It provides APIs for leading foundation models, and tools to rapidly prototype, easily tune models with your own data, and seamlessly deploy to applications.
Vertex AI is the central hub for most of Google Cloud's AI/ML services. It integrates and supersedes many previous offerings.
Custom Training: Training machine learning models using various algorithms and frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.). Provides access to managed compute instances (including TPUs).
Prediction: Deploying trained models for inference (making predictions). Offers different deployment options based on scale and latency requirements.
Pipelines: Creating and managing machine learning workflows, including data preprocessing, model training, evaluation, and deployment, as a series of connected steps.
Model Monitoring: Monitoring deployed models for performance degradation and potential issues (drift).
Feature Store: Centralized repository for storing, managing, and versioning features used in machine learning models, improving collaboration and reuse.
Pre-trained Models and APIs: Google offers numerous pre-trained models and APIs for various tasks, making it easier to integrate AI into applications without building models from scratch. Examples include:
Natural Language: Processing and understanding text (sentiment analysis, entity recognition, etc.).
Beyond the core platform and APIs, Google offers several specialized AI products:
TensorFlow: A popular open-source machine learning framework developed by Google. While not strictly a "Google Cloud" product, it's deeply integrated with their services.
Dialogflow: A conversational AI platform for building complex conversational experiences.
Hugging Face is the platform where the machine learning community collaborates on models, datasets, and applications. It provides a centralized repository for researchers and developers to share, explore, and build AI models, making it easy to find, use, and contribute to the growing ecosystem of AI technologies. You can create, deploy, and customize models, use pre-trained models behind APIs, and train or fine-tune models for your own use.
MLflow provides tools for managing experiments, tracking model versions, deploying models to various environments, and managing models in a central registry. It's designed to be platform-agnostic, meaning it can work with many different cloud providers and even on-premises infrastructure.
Prompt engineering involves the design and creation of prompts that are used to elicit specific responses or actions from AI models or interactive systems. These prompts are carefully crafted to guide the behavior or generate particular outputs from the AI, such as generating natural language responses, providing recommendations, or completing specific tasks.
In the context of AI language models, prompt engineering is especially important for shaping the model's behavior and output. By designing prompts effectively, engineers can influence the model's responses and ensure that it generates coherent, relevant, and accurate content.
There are four main areas to consider when writing an effective prompt. You don’t need to use all four, but using a few will help!
Persona: Who is the user you're writing for? What are their skills and knowledge?
Task: What specific action do you want the user to perform?
Context: What information does the user need to know to complete the task?
Format: What is the desired output of the task?
Example Prompt:
[Persona] You are a Google Cloud program manager.
[Task] Draft an executive summary email
[Context] to [person description] based on [details about relevant program docs].
[Format] Limit to bullet points.
By using "act as," you are establishing a specific context for the language model and guiding it to understand the type of task or request you are making. This helps to set the right expectations and provides the language model with the necessary context to generate a response tailored to the defined role.
"Act as a creative writing assistant and generate a short story based
+on a prompt about a futuristic world where robots have become sentient."
+
Introduced in Wei et al. (2022), chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding. Prompting Guide with CoT
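A classic illustration from the CoT literature: the few-shot exemplar spells out its reasoning, so the model continues in the same step-by-step style:

Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A: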
Yao et al., 2022 introduced a framework named ReAct where LLMs are used to generate both reasoning traces and task-specific actions in an interleaved manner.
Generating reasoning traces allows the model to induce, track, and update action plans, and even handle exceptions. The action step allows the model to interface with and gather information from external sources such as knowledge bases or environments.
The ReAct framework can allow LLMs to interact with external tools to retrieve additional information that leads to more reliable and factual responses. Prompting Guide with ReAct
Summary is a prompt engineering technique that involves asking for a summary of a given document or text. It can help with summarizing changelogs, articles, or other technical documents.
Help me write an article from this document [Insert or copy paste document text]
Generate 5 titles out of the following topic….
Generate a subtitle to catch readers' attention on the following
topic [describe the topic]
Write is a prompt engineering technique that involves asking for a step-by-step guide or instructions for a given task or process. It's useful for developers creating functional and technical documentation.
Create a template of an email response to customers inquiring about ….
Create a guide that explains how to use ….
Write step by step instructions
Code explanation is a prompt engineering technique that involves providing a detailed explanation of a code snippet or function. This technique is useful for developers who want to understand the inner workings of a codebase or for those who want to document their code.
cf. Preformatted prompts for an example of code explanation
Create a function that calculates the factorial of a number.
Handle both positive integers and zero, with error handling for negative inputs.
Expected Output (python)
def factorial(n):
    if n < 0:
        raise ValueError("Input must be a non-negative integer.")
    if n == 0:
        return 1
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result
Solutions
Persona: Python Developer
Task: Create a function
Context: You need to calculate the factorial of a number.
As a Python Developer, create a function named factorial that takes a single integer input and returns its factorial. The function should handle both positive integers and zero. Include error handling for negative inputs.
Persona: JavaScript Developer
Task: Write a function to handle API requests
Context: You need to fetch data from a given URL.
As a JavaScript Developer, write a function named fetchData that takes a URL as an argument and fetches data from that URL using the Fetch API. The function should return the JSON response and handle any errors that may occur during the fetch operation.
Persona: C# Developer
Task: Define a class
Context: You are creating a representation of a book.
As a C# Developer, create a class named Book that has properties for Title, Author, and PublicationYear. Include a method named DisplayDetails that prints the book's details in a formatted string.
Persona: Ruby Developer
Task: Write a validation method
Context: You need to validate email addresses.
As a Ruby Developer, write a method named valid_email? that takes a string as input and returns true if it is a valid email address, and false otherwise. Use a regular expression for validation.
Code completion is a prompt engineering technique that involves providing a list of possible completions for a given code snippet or function. This technique is useful for developers who want to suggest possible code changes or improvements based on their existing code.
Code conversion is a prompt engineering technique that involves providing a conversion of a code snippet or function from one programming language to another. This technique is useful for developers who want to migrate their code from one language to another or for those who want to use a different programming language for their projects.
Code review is a prompt engineering technique that involves asking for a review of a given code snippet or function. This technique is useful for developers who want to review their code for potential issues or bugs, or for those who want feedback on their code.
Code fixing is a prompt engineering technique that involves providing a code fix for a given code snippet or function. This technique is useful for developers who want to fix bugs or issues in their code or for those who want to improve the quality of their code.
Help me find mistakes in my code [insert your code]
Explain what this snippet of code does [insert code snippet]
What is the correct syntax for a [statement or function]
in [programming language]
How do I fix the following [programming language] code,
which should [explain what the code does]: [insert code snippet]
Code refactor is a prompt engineering technique that involves providing a code refactoring of a given code snippet or function within a specific scope. This technique is useful for developers who want to refactor their code within a specific context or for those who want to improve the readability and maintainability of their code.
Mock data generation is a prompt engineering technique that involves asking for a mock data set for a given code snippet or function. This technique is useful for developers who want to test their code with mock data or generate test data for their projects. It avoids manually creating fake data for testing.
Create prompts that can generate mock user profiles. The language used is JavaScript.
The profile should include:
Name
Age
Email
Address (Street, City, State, Zip Code)
Phone Number
Solutions
Mock Data Generation
As a JavaScript Developer, write a function named generateUserProfile that generates a mock user profile with the following details: name, age, email, address, and phone number. The function should return an object containing the user profile details.
Testing is a prompt engineering technique that involves providing a test case for a given code snippet or function. This technique is useful for developers who want to test their code or for those who want to ensure the correctness of their code.
System design and architecture is a prompt engineering technique that involves providing a system design or architecture for a given code snippet or function. This technique is useful for developers who want to design their code or for those who want to understand the overall architecture of their projects.
Documentation generation is a prompt engineering technique that involves generating documentation for a given code snippet or function. This technique is useful for developers who want to document their code or provide documentation for their projects. It can be used to generate documentation in various formats such as Markdown, HTML, or PDF.
Commit message generation is a prompt engineering technique that involves providing a commit message for a given code snippet or function. This technique is useful for developers who want to generate commit messages for their code or for those who want to ensure that their commit messages are clear and concise.
Vulnerability checking is a prompt engineering technique that involves providing a vulnerability check for a given code snippet or function. This technique is useful for developers who want to check for vulnerabilities in their code or for those who want to ensure that their code is secure.
Warning
This prompt is not recommended for production use. It is intended for testing and debugging purposes only and is not a proof of security or safety of your app.
LLMs can help you understand complex regular expressions and generate ones that match specific patterns in text. This technique is useful for developers who want to write complex regular expressions or understand their syntax.
Explain this regular expression in JavaScript: const regex =
/^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/;