Why Is the API Design of ChatGPT Revolutionary?
Recently, I dove into the ChatGPT API, and I was surprised by its simplicity and power.
In a conventional API design, we need to pass all the parameters required by the API in order for the service behind it to work as we expect. It's like a math function: we need to pass all the variables before we can get an answer calculated by the function.
For example, this is the design of a previous version of the ChatGPT API. In this version, when we send the request, we need to specify a number of parameters, such as model, temperature, and so on. If we are not experts in this area, we would stick with the default settings. If we need to fine-tune the result, we may need to change the values of these parameters in the request.
import openai

# fine_tuned_qa_model, context, question, max_tokens, and stop_sequence
# are assumed to be defined elsewhere.
# Route to a fine-tuned model or a base engine, depending on the model name.
model_param = (
    {"model": fine_tuned_qa_model}
    if ":" in fine_tuned_qa_model
    and fine_tuned_qa_model.split(":")[1].startswith("ft")
    else {"engine": fine_tuned_qa_model}
)
response = openai.Completion.create(
    prompt=f"Answer the question based on the context below\n\nText: {context}\n\n---\n\nQuestion: {question}\nAnswer:",
    temperature=0,
    max_tokens=max_tokens,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stop=stop_sequence,
    **model_param,
)
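For completeness, the generated answer would then be read from the text field of the first choice. A minimal sketch, assuming the response object from the call above:

# With the Completion endpoint, the generated text sits in choices[0]['text'].
answer = response["choices"][0]["text"].strip()
print(answer)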
The recent update of the ChatGPT API simplified this design. The following example calls another endpoint of the ChatGPT API, where the messages serve as the prompts. In this example, we can see that apart from model and messages, we don't need to pass any other parameters. This design significantly simplifies the request.
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
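The reply is read in a similar way from the response object. Here is a minimal, self-contained sketch (assuming openai is imported and the API key is set); the question text is just an illustrative example:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Where was the 2020 World Series played?"}],
)
# The assistant's reply sits in the first choice's message content.
print(response["choices"][0]["message"]["content"])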
With this simplified design, can we achieve the same or better results? The answer is yes. With the improvement of the GPT models behind the scenes, we can use the prompt itself to fine-tune the request. This not only makes fine-tuning simpler, but also adds flexibility – infinite flexibility.
Why infinite? With a traditional API design, users can only submit the designed parameters in the request, while with the recent ChatGPT API, we can use natural language in the prompts to tell the model what we would like to achieve. Since the GPT models behind the API are powerful enough to understand prompts, and even infer from them, we can tell the model whatever is on our mind. Yes, with the 'superpower' of LLMs (large language models), the API design is greatly simplified, and the usefulness is dramatically improved. For example, if we need to change the writing style of the LLM's output, we can just specify the style in the prompts as follows:
{"role": "user", "content": "Can you write a letter in a formal writing style?"}
{"role": "user", "content": "Can you write a letter in a casual writing style?"}
For example, we were recently asked to build a question-and-answer dataset from a given text. Before LLMs, it would have taken months to train a machine learning model and apply it to the given text to generate the Q&A dataset. With the power of the ChatGPT API, we needed just 30 lines of code to accomplish this task. Indeed, we spent most of the time figuring out how to use the prompts properly to fine-tune the results. The following is sample code for this task.
import os

import openai
import pandas as pd

openai.api_key = os.getenv("OPENAI_API_KEY")
MODEL = "gpt-3.5-turbo"

def get_questions(context, model, num_question):
    # Ask the model to generate questions from the given text.
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "user", "content": f"Provide {num_question} questions based on the input: {context}"},
        ])
    return response

def parse_response(result):
    # Split the response content into non-empty lines.
    candidates = result['choices'][0]['message']['content'].split('\n')
    return [i for i in candidates if i != '']

def get_answers(context, question_list, model):
    # Supply the context in one message and the questions in another.
    questions = '\n'.join(question_list)
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "assistant", "content": "context: " + context},
            {"role": "user", "content": "Provide the answers based on the context and questions: " + questions},
        ])
    return response

def pipeline_qa(context, model=MODEL, num_question=5):
    # Generate questions from the context, then answer them against it.
    questions = parse_response(get_questions(context, model, num_question))
    answers = parse_response(get_answers(context, questions, model))
    return context, pd.DataFrame(list(zip(questions, answers)), columns=['Questions', 'Answers'])
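As a hypothetical usage example, the pipeline can be run on a short paragraph; the sample text below is made up for illustration, and the actual questions and answers depend on the model's responses:

# Hypothetical input text for illustration.
sample_text = (
    "The Apollo 11 mission landed the first humans on the Moon in July 1969. "
    "Neil Armstrong and Buzz Aldrin spent about two hours on the lunar surface."
)
context, qa_df = pipeline_qa(sample_text, num_question=3)
print(qa_df)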
Key Takeaways
- The UX of the ChatGPT API is superb. With the power of the GPT models, the interface of the API is super simple, yet the power of the API is practically infinite.
- With powerful LLMs, we need to reconsider how to develop an NLP task: do we need to train a model from scratch, or can we leverage the power of an LLM to accelerate development?
- We may also need to redefine engineering development and model development. If we can leverage the power of LLMs, the entire development and serving pipeline of NLP tasks may change as well. Instead of serving in-house models, we can also consider fine-tuning ChatGPT-like models. If you want to know more, please stay tuned.