Connecting to Other Models
Connect to other models where you have accounts
- This is only available on the Premium Plan.
- You need to have your own account (paid or otherwise) with the LLM service/model provider.
To access the Models page, click the Models link in the left navigation of the Extension Settings.
What Does the Model Do?
- The model is the LLM service that Steve uses to help you accomplish all your tasks.
- During your QuickStart trial, Ask Steve uses Google's Gemini 1.5 Flash model.
- After that, you can use the same model with your own API Key, or Mistral's free La Plateforme model.
- All models have different characteristics and are better or worse at various tasks.
- Generally the better the model, the slower and more expensive it is.
- When you use the free Gemini or Mistral models, those companies reserve the right to use your data for training purposes and to make their products better.
- If your data is sensitive or you are concerned about privacy, sign up for a Premium Ask Steve account and connect Ask Steve to an account at a provider that doesn't use your data.
- Switching out Steve's model is like switching out his brain.
QuickStart Model
- The QuickStart plan uses Google's Gemini 1.5 Flash model.
- When you are on the QuickStart plan, your requests are sent to the Ask Steve server, which sends them along to Google, and then sends the results back. Your requests go through our server, but we never save anything.
Add a New Model
- To add a new model, press the ADD NEW MODEL button. A dialog will open with a number of model templates that you can select from, and a place for you to enter your API Key for that model if you have it already. Press OK and the interface will be filled in with the template details.
- Edit any of the details that you need to.
- To test the model, scroll down to the Test Your Connection area and press the TEST button. A sample query will be sent and the response shown below.
- Once you are satisfied that everything is working correctly, press the SAVE NEW MODEL button. Steve will switch to using this new model, as indicated in the dropdown list at the top of the page.
All requests are made via POST. The data fields for a model are as follows:
- Name: So that you know what model this is. This is shown to users in error messages.
- URL: The URL endpoint for this model.
- Context Window: The size of the context window for this model. This can usually be found on the provider's documentation pages. If in doubt, enter 32000.
- Header: The JSON header that is sent in the POST request. This should be formatted according to the service provider's documentation. This typically includes the API Key or other authentication for the service.
- Body: The JSON body that is sent with the POST request. This should be formatted according to the service provider's documentation.
- Response Path: In bracket notation, the path to the field in the response object that contains the actual response. This will be in the service provider's documentation. Note that one-shot responses and streaming responses usually have different paths, even from the same service provider.
- Error Path: In bracket notation, the path to the field in the response object that contains any error messages. This will be in the service provider's documentation.
- This is a Streaming API: If you are calling a streaming API, this needs to be checked, as streaming responses are parsed differently than non-streaming responses.
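The fields above map directly onto an ordinary HTTP POST plus a path lookup into the JSON response. As an illustration only, here is a minimal Python sketch of that flow; the example URL, header, body, and paths are assumptions in the style of an OpenAI-compatible provider, not Ask Steve's actual internals. Always copy the real values from your provider's documentation.

```python
import json
import re
import urllib.request

def resolve_path(obj, path):
    """Walk a JSON response using a bracket-notation path,
    e.g. '["choices"][0]["message"]["content"]'."""
    for key in re.findall(r'\[(.*?)\]', path):
        key = key.strip()
        if key.startswith(('"', "'")):
            obj = obj[key.strip('"\'')]  # object field
        else:
            obj = obj[int(key)]          # array index
    return obj

def call_model(url, header, body, response_path, error_path):
    """Send the POST request and extract the reply, falling back
    to the error path if the response path is missing."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers=header,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    try:
        return resolve_path(data, response_path)
    except (KeyError, IndexError, TypeError):
        return resolve_path(data, error_path)

# Hypothetical field values (assumptions, not real endpoints):
# url = "https://api.example.com/v1/chat/completions"
# header = {"Content-Type": "application/json",
#           "Authorization": "Bearer YOUR_API_KEY"}
# body = {"model": "example-model",
#         "messages": [{"role": "user", "content": "Hello"}]}
# response_path = '["choices"][0]["message"]["content"]'
# error_path = '["error"]["message"]'
```

Note that this sketch covers one-shot responses only; a streaming API instead returns many small chunks, each of which must be parsed with the streaming response path, which is why the streaming checkbox matters.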
Edit a Model
- Select the model that you want to edit from the Steve is using: dropdown.
- Make your edits.
- Press the UPDATE button.
NOTE: This also changes the current model. Change it back when you're done editing via the Steve is using: dropdown.
Change the Current Model
- Select the model that you want to change to from the Steve is using: dropdown.
- This is automatically saved, so Steve will now use this model.
Delete a Model
- Select the model that you want to delete from the Steve is using: dropdown.
- Press the DELETE button and confirm deletion.
- Select whatever model you want Steve to use now.
Pro-Tips
Call Any API
- If you want to call an API or Webhook from a Skill, see the details here
Change the Temperature
- The temperature setting of a model usually ranges from 0.0 to 1.0 or 2.0, where 0 tells the model to provide more factual answers and the upper limit tells it to provide more creative responses. The value can be set anywhere in that range (e.g. 0.3).
- The default model and all our templates use a temperature of 0, as most use cases will benefit from more factual responses.
- When adding a new model, you can set its temperature to something else, depending on how you're going to use it.
- You can even add the same model multiple times, each with a different temperature, and then assign Skills that need more creativity to higher temperature models.
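To make the multiple-temperatures idea concrete, here is a hedged sketch of what the two request bodies might look like for the same model registered twice. The field names follow the common OpenAI-style convention and the model name and prompt placeholder are hypothetical; your provider's documentation is authoritative for the exact body format.

```python
# Hypothetical request body for a "factual" registration of a model.
factual_body = {
    "model": "example-model",                             # assumed name
    "messages": [{"role": "user", "content": "{PROMPT}"}],  # placeholder
    "temperature": 0,    # deterministic, fact-oriented answers
}

# The same model registered a second time for creative Skills:
# identical body, except for a higher temperature.
creative_body = dict(factual_body, temperature=0.9)
```

Skills that need reliable extraction or summarization would then be assigned the first entry, and brainstorming or rewriting Skills the second.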
Connect to a Model Running On Your Own Machine
- See details on how to set this up here