Description

Given a prompt/question/request, the GPT function returns a response from an OpenAI GPT language model.

=GPT("Write a haiku about tacos")

If you want to use the value of another cell as the prompt, you can pass it in directly:

=GPT(A1)

You can also combine a fixed prompt with the value from another cell using the & operator:

=GPT("Summarize the following paragraph: " & A1)
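The & operator can also chain several cells into a single prompt. A sketch, assuming A1 holds a product name and B1 a target audience (both are placeholder cell references):

=GPT("Write a product description for " & A1 & ", aimed at " & B1)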

Prompts can be quite long; detailed prompts help guide GPT's tone and behavior and achieve consistently good results. For instance, the following prompt is perfectly valid:

=GPT("Act as a JSON parser. I will give you a JSON snippet, and you will return the value of the keys I request as a comma separated list. Please return the values of the productName key in this JSON: " & A1)

Syntax

=GPT(prompt, gpt_model, temperature, max_tokens, cache)

The function parameters are as follows:

prompt (required): String (or cell reference) representing a user prompt.

gpt_model (optional, default gpt-3.5-turbo): String representing the OpenAI GPT model to use. See the list of available models for usage and pricing details.

temperature (optional, default 1): Number between 0 and 1 representing how much variance to introduce when responding. 0 is very little variance and 1 is the most variance.

max_tokens (optional, default 1000): Number representing the maximum number of tokens to return as the response.

cache (optional, default true): Whether or not to cache the response. Setting this to false will incur a re-execution on every cell refresh.
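Putting the parameters together, a single call can set every argument explicitly. A sketch (the cell reference and values here are illustrative, not recommendations):

=GPT(A1, "gpt-4", 0.2, 100, false)

This sends the prompt in A1 to the GPT-4 model with low variance, limits the response to 100 tokens, and skips the cache.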

Caution: The GPT-4 model uses 25 times more SheetGPT usage credits than the default GPT-3.5 model for only marginally better results in a few narrow use cases. It is also much slower.

Advanced Options

The GPT function accepts several optional arguments which you can use to further control the response to your prompt. Here are some examples:


Specify a different GPT model


The second argument to =GPT lets you specify a different OpenAI model to use when responding to your prompt. The default is currently "gpt-3.5-turbo", which offers the best blend of cost-efficiency and performance. If you have more specific needs, you can use one of the other OpenAI models instead:

=GPT("Create a list of three types of animals", "gpt-4")

**Caution: The GPT-4 model uses 25 times more SheetGPT usage credits than the default GPT-3.5 model.**


Adjust the variety of the response


The third argument to =GPT lets you specify a different "temperature" for the response. The temperature is a number between 0 and 1 (decimals like 0.5 are allowed) that defines how much variability you want in the response.

=GPT("Write three titles for an article reviewing the current state of natural language processing AIs", , 0.1)


A higher temperature like 1 (the default) means you are likely to get very different responses to the same prompt; there is more "drift" in the responses. Use a lower number like 0 or 0.1 if you want more deterministic responses.
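For the most repeatable output, you can pass 0 as the temperature (the prompt here is only an example):

=GPT("List the three primary colors as a comma separated list", , 0)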


Limit the length of the response


The fourth argument to =GPT lets you specify a response length limit in tokens (a token is roughly half an average word in length). So if you want to limit the length of your response to about 15 words, you could pass 25 as the fourth argument to GPT:

=GPT("Write a title for an article reviewing the current state of natural language processing AIs",,,25)

Performance & caching

The completion that is returned is cached by SheetGPT to ensure that cell refreshes and other Sheet actions do not cause unnecessary token usage.

Any GPT request with the same arguments in a Sheet will return the cached value indefinitely. This goes well beyond the typical 6-hour cache limit of most Sheets plugins and is our attempt to make SheetGPT the most cost-effective way to use GPT functionality.

If you ever need to bypass the cache and force the generation of a new response, set the cache argument to false:

=GPT("What is the most appealing European city?",,,,false)

See Also

You may also find the following resources useful: