Python module

You can use the ApiBackend class to interact directly with LLMs from your Python code.

Create an instance of the class and use the ask method to send a message to the LLM and receive the response. For example:

from lwe import ApiBackend

bot = ApiBackend()
success, response, message = bot.ask("Say hello!")
if success:
    print(response)
else:
    raise RuntimeError(message)

The ask method takes an argument representing the messages to send to the LLM, in one of the following forms:

  1. A string, which sends a single user message

  2. A list of message dicts. Each message should have two keys: role and content, e.g.:

    [
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Say hello!'},
        {'role': 'assistant', 'content': 'Hello!'},
    ]
    

It returns a tuple with the following values:

  1. success: Boolean indicating whether the operation succeeded.

  2. response: An object representing the response received (usually a string containing the LLM's reply).

  3. message: A human-readable message describing the outcome of the operation.
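
Since every call returns the same tuple, a small helper can centralize the error handling. The unwrap function below is not part of LWE, just a sketch of the pattern:

```python
def unwrap(result):
    """Return the response from an LWE-style (success, response, message)
    tuple, raising RuntimeError with the message on failure."""
    success, response, message = result
    if not success:
        raise RuntimeError(message)
    return response

# Works on any result tuple, e.g. the return value of bot.ask(...):
print(unwrap((True, 'Hello!', 'Response received')))  # prints: Hello!
```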

To pass custom configuration to the ApiBackend, use the Config class:

from lwe import ApiBackend
from lwe.core.config import Config

config = Config()
config.set('debug.log.enabled', True)
bot = ApiBackend(config)
success, response, message = bot.ask("Say hello!")
if success:
    print(response)
else:
    raise RuntimeError(message)

To stream a response:

  1. Define a callback function to receive streaming chunks

  2. Define a request_overrides dict, passing the callback in the stream_callback key

  3. Pass request_overrides as an argument to the ask_stream method

from lwe import ApiBackend

def stream_callback(content):
    # Called with each chunk of the response as it streams in.
    print(content, end='', flush=True)

bot = ApiBackend()
request_overrides = {
    'stream_callback': stream_callback,
}
success, response, message = bot.ask_stream("Say three words about earth", request_overrides=request_overrides)
if not success:
    raise RuntimeError(message)
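
The streaming contract is simple: the callback is invoked once per chunk as the response arrives. The sketch below simulates that contract in plain Python (the chunk values are invented) to show a callback that accumulates chunks instead of printing them:

```python
chunks = []

def collecting_callback(content):
    # Append each streamed chunk; join afterwards for the full text.
    chunks.append(content)

# Simulate what the backend does: invoke the callback once per chunk.
for chunk in ['Blue', ' green', ' marble']:
    collecting_callback(chunk)

full_text = ''.join(chunks)
print(full_text)  # prints: Blue green marble
```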

GPT-4

To use GPT-4 within your Python code, you must use Presets.

The code below uses the system-defined gpt-4-chatbot-responses preset:

from lwe import ApiBackend
from lwe.core.config import Config

config = Config()
config.set('model.default_preset', 'gpt-4-chatbot-responses')
bot = ApiBackend(config)
success, response, message = bot.ask("Hello, world!")
if success:
    print(response)
else:
    raise RuntimeError(message)

Advanced Python module usage

The ApiBackend class provides access to most of the features available in the LWE shell:

  • Templates

  • Presets

  • Workflows

  • OpenAI functions

  • etc…

If you’re a moderately skilled Python programmer, you should be able to work out how to use these features through the ApiBackend class by examining the core shell module and API shell module code, or the documentation for ApiBackend.