fkostadinov/pygape
What is Pygape?

Pygape is a Python library that implements some of the fundamental ideas of Generative AI-Augmented Program Execution (GAPE). At the moment this is only a sample implementation relying on OpenAI's LLMs.

How to use

  1. Create a .env file in the project's root directory.
  2. In the .env file, add your OpenAI API key:

     OPENAI_API_KEY="...your key goes here..."

  3. To see pygape in action, run the test cases implemented in test/test_pygape.py:

     python -m unittest test.test_pygape.PyGapeTestCase -v
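For illustration, a minimal sketch of how such a .env file could be read using only the standard library. The parsing logic and function name here are assumptions for this sketch, not pygape's actual implementation (which may rely on a library such as python-dotenv):

```python
import os

def load_env(path: str = ".env") -> None:
    """Read simple KEY="value" lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Strip whitespace and optional surrounding quotes from the value
            os.environ[key.strip()] = value.strip().strip('"')

if os.path.exists(".env"):
    load_env()
    api_key = os.environ.get("OPENAI_API_KEY")
```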

What is Generative AI-Augmented Program Execution (GAPE)?

GAPE is a programming paradigm that mixes formally well-defined program execution with the kind of "common sense reasoning" that Generative AI is capable of. This is best illustrated with an example. Imagine a person makes the statement: "Planet earth is a flat disk residing upon the back of a giant turtle." From a scientific perspective, this statement is either true or false. But which is it? Applying some "common sense reasoning" (we will not discuss here what exactly that means), we conclude that the statement is actually false.

Until the advent of Generative AI, applying "common sense reasoning" to real-world problems was largely restricted to the intelligence of human beings. No longer! Today, we can leave the common sense reasoning to a Large Language Model. Consider this:

```python
statement = "Planet earth is a flat disk residing upon the back of a giant turtle."

# Send the statement to an LLM, apply some common sense reasoning, and return either True or False
is_true = apply_common_sense_reasoning_with_an_llm(statement)
if is_true:
    print("Welcome to the Flat Earth Society!")
else:
    print("Welcome to the scientific community!")
```
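The function name above is illustrative, and pygape's actual API may differ. As a hedged sketch, such a helper could prompt an LLM for a strict true/false answer and parse the reply. The prompt wording, the model name, and the injectable `ask` parameter are assumptions made here for testability:

```python
def apply_common_sense_reasoning_with_an_llm(statement: str, ask=None) -> bool:
    """Ask an LLM whether a statement is true, returning a Python bool.

    `ask` is a callable that takes the prompt string and returns the raw model
    reply; injecting it keeps the function testable without network access.
    """
    prompt = (
        "Using common sense reasoning, is the following statement true or false? "
        "Answer with the single word 'true' or 'false'.\n\n" + statement
    )
    if ask is None:
        # Default: call OpenAI's chat completions API (requires OPENAI_API_KEY).
        from openai import OpenAI
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        reply = response.choices[0].message.content
    else:
        reply = ask(prompt)
    return reply.strip().lower().startswith("true")
```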

Notice how the entire complexity of calling the Generative AI and reasoning over the result is hidden behind a single function call. In fact, it could equally be a human being who provides the answer rather than an LLM. We might not even care who provides the answer, as long as we consider it trustworthy enough to continue our program execution.

This approach can be used in many ways. Imagine having a list of concepts (represented simply as strings) that you want to filter by some criterion: for example, a list of animal names from which you want to remove all animals that are not herbivores. And that is just the start. Besides filtering, we could sort a list according to some criterion, find an element in a list by some criterion, or invent new meanings for map and reduce functions. The possibilities are too numerous to list them all.
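These list operations can share a single pattern: delegate the per-item yes/no judgment to the LLM while ordinary Python keeps the control flow. A minimal sketch of such a filter follows; the `llm_filter` name and the `judge` callable (a stand-in for the LLM call) are assumptions for this sketch, not pygape's actual API:

```python
from typing import Callable, List

def llm_filter(items: List[str], criterion: str,
               judge: Callable[[str], bool]) -> List[str]:
    """Keep only the items for which the judge says the criterion holds.

    `judge` receives one natural-language question per item and returns a
    bool; in practice it would wrap an LLM call such as the one shown above.
    """
    return [item for item in items
            if judge(f"Is the following true? The {item} {criterion}.")]

# Example with a hard-coded stand-in for the LLM's judgment:
herbivores = {"cow", "rabbit", "deer"}
animals = ["cow", "lion", "rabbit", "wolf", "deer"]
result = llm_filter(animals, "is a herbivore",
                    judge=lambda q: any(h in q for h in herbivores))
```

An LLM-backed sort or find can follow the same shape, swapping the per-item question for a comparison or match question.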

Welcome to the world of Generative AI-Augmented Program Execution.

For more info visit https://fabian-kostadinov.github.io/2023/11/09/genai-augmented-program-execution/.
