Infinite GPT

So you want longer responses from GPT? At the moment a single completion is capped by the model's token limit, so this script splits your input into chunks, sends each chunk through the API in parallel, and stitches the responses back together.

I also aliased the script to gpt, so I can just type gpt in the terminal, hit Enter, and it will run the script and ask for a prompt. For longer inputs, put them in input.txt and the script will use that instead.

The script takes whatever text is in input.txt, runs it through the OpenAI API chunk by chunk, and saves the output to output.txt.
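Because the script falls back to input.txt when you press Enter at the "Prompt:" line, long prompts can be assembled with ordinary shell tools first. A minimal sketch (notes.txt is a made-up example file, not part of the repo):

```shell
# Hypothetical example: build a long prompt from an existing file,
# then run the script and just hit Enter to use input.txt.
echo "chapter one draft..." > notes.txt
{ echo "Summarise the following notes:"; cat notes.txt; } > input.txt
```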

import openai
from concurrent.futures import ThreadPoolExecutor

openai.api_key = "sk-X"  # replace with your own API key
INPUT_FILE = "LOCATION/infiniteGPT/input.txt"
OUTPUT_FILE = "LOCATION/infiniteGPT/output.txt"
MAX_TOKENS = 2000

def load_text(file_path):
    """Loads the text from the given file path.

    Args:
        file_path (str): The path of the file to load.

    Returns:
        str: The contents of the file.
    """
    with open(file_path, 'r') as file:
        return file.read()


def save_to_file(responses, output_file):
    """Saves the given responses to the given output file.

    Args:
        responses (list[str]): A list of strings to save to the file.
        output_file (str): The path of the file to save the responses to.
    """
    with open(output_file, 'w') as file:
        for response in responses:
            file.write(response + '\n')

def call_openai_api(chunk):
    """Calls the OpenAI API with the given chunk of text.

    Args:
        chunk (str): The chunk of text to send to the API.

    Returns:
        str: The response from the API.
    """
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=chunk,
        max_tokens=MAX_TOKENS,
        n=1,
        stop=None,
        temperature=0.5,
    )
    return response.choices[0].text.strip()

def split_into_chunks(text, tokens=800):
    """Splits the given text into chunks of the given number of tokens.

    Args:
        text (str): The text to split.
        tokens (int, optional): The number of tokens per chunk. Defaults to 800.

    Returns:
        list[str]: A list of chunks of the given text.
    """
    words = text.split()
    chunks = [' '.join(words[i:i + tokens]) for i in range(0, len(words), tokens)]
    return chunks

def process_chunks(input_file=INPUT_FILE):
    """Splits the text into chunks, queries the API, and saves the responses.

    Args:
        input_file (str, optional): A prompt string, or the path of the
            file to read. Defaults to INPUT_FILE.
    """
    # If a prompt was typed in directly, use it as the text;
    # otherwise fall back to reading input.txt.
    if input_file != INPUT_FILE and input_file is not None:
        text = input_file
    else:
        text = load_text(input_file)
        print("Using \u001b[31m" + "input.txt" + "\u001b[0m")

    chunks = split_into_chunks(text)
    with ThreadPoolExecutor() as executor:
        responses = list(executor.map(call_openai_api, chunks))
        save_to_file(responses, OUTPUT_FILE)

def output_file_contents(filename):
    """Prints the contents of the given file.

    Args:
        filename (str): The path of the file to print.
    """
    with open(filename, 'r') as file:
        for line in file:
            print(line, end='')

if __name__ == "__main__":
    text = input("Prompt: ")

    if not text:
        process_chunks()
    else:
        process_chunks(text)

    print("\n")
    print("\u001b[35m" + "=" * 80 + "\u001b[0m")
    print("\n")
    output_file_contents(OUTPUT_FILE)
    print("\n")
    print("\u001b[35m" + "=" * 80 + "\u001b[0m")
    print("File:", OUTPUT_FILE)
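Note that split_into_chunks counts whitespace-separated words, not actual tokens; a rough rule of thumb is about 0.75 words per token, so an 800-word chunk lands somewhere near 1,000 tokens, leaving headroom for the 2,000-token completion. (For exact counts you could swap in OpenAI's tiktoken library, but the script doesn't need that precision.) The chunking step can be exercised on its own:

```python
def split_into_chunks(text, tokens=800):
    """Split text into chunks of roughly `tokens` words each."""
    words = text.split()
    return [' '.join(words[i:i + tokens]) for i in range(0, len(words), tokens)]

# 2000 words at 800 words per chunk -> 3 chunks of 800, 800, and 400 words
text = ' '.join(str(n) for n in range(2000))
chunks = split_into_chunks(text)
```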

The command alias I use is:

alias gpt='python3 LOCATION/infiniteGPT/infiniteGPT/blastoff.py; terminal-notifier -appIcon https://brew.sh/assets/img/homebrew-256x256.png -title "GPT" -message "Response Complete"; afplay LOCATION/sounds/TR808WAV/MC/MC10.WAV;'

Terminal Notifier is available here: https://github.com/julienXX/terminal-notifier.

The afplay LOCATION/sounds/TR808WAV/MC/MC10.WAV part plays a sound when the script is done, so I know when to switch back to the terminal and read the output.

Having GPT in the command line is such a nice efficiency booster. ^^

Updated on June 5, 2023.

ender