Model: Default (GPT-3.5)

```python
# Example of an OpenAI ChatCompletion request with stream=True
# https://platform.openai.com/docs/guides/chat
import time
import openai

# record the time before the request is sent
start_time = time.time()

# send a ChatCompletion request to count to 100
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {'role': 'user', 'content': 'Count to 100, with a comma between each number and no newlines. E.g., 1, 2, 3, ...'}
    ],
    temperature=0,
    stream=True  # again, we set stream=True
)

# create variables to collect the stream of chunks
collected_chunks = []
collected_messages = []

# iterate through the stream of events
for chunk in response:
    chunk_time = time.time() - start_time  # calculate the time delay of the chunk
    collected_chunks.append(chunk)  # save the event response
    chunk_message = chunk['choices'][0]['delta']  # extract the message
    collected_messages.append(chunk_message)  # save the message
    print(f"Message received {chunk_time:.2f} seconds after request: {chunk_message}")  # print the delay and text

# print the time delay and text received
print(f"Full response received {chunk_time:.2f} seconds after request")
full_reply_content = ''.join([m.get('content', '') for m in collected_messages])
print(f"Full conversation received: {full_reply_content}")
```

This code example demonstrates how to send a ChatCompletion request to OpenAI's API, asking the GPT-3.5-turbo model to count from 1 to 100 with a comma between each number and no newline characters. The stream=True option is set to receive the output in a stream of chunks.

The start_time variable records the current time before sending the request. The request is then sent using openai.ChatCompletion.create() with the provided model, messages, temperature, and stream options.

Two lists, collected_chunks and collected_messages, are created to store the stream of chunks and messages, respectively. The code iterates through the stream of events, calculating the time delay for each chunk, appending it to collected_chunks, and extracting the message from the chunk using chunk['choices'][0]['delta']. The message is then appended to collected_messages and printed along with the time delay.

Finally, the total time delay and the full conversation received are printed after combining the content of all collected messages.
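To make the accumulation step concrete, here is a minimal self-contained Python sketch of how the delta chunks combine into the full reply. The chunks below are hypothetical hard-coded stand-ins for real API events, so no network call is made:

```python
# hypothetical chunks standing in for streamed API events
chunks = [
    {'choices': [{'delta': {'role': 'assistant'}}]},  # first chunk carries the role
    {'choices': [{'delta': {'content': '1, '}}]},
    {'choices': [{'delta': {'content': '2, '}}]},
    {'choices': [{'delta': {'content': '3'}}]},
    {'choices': [{'delta': {}}]},                     # final chunk has an empty delta
]

collected_messages = []
for chunk in chunks:
    collected_messages.append(chunk['choices'][0]['delta'])  # extract the delta dict

# deltas without a 'content' key contribute nothing to the joined text
full_reply_content = ''.join(m.get('content', '') for m in collected_messages)
print(full_reply_content)  # → 1, 2, 3
```

Note that `m.get('content', '')` is what makes the join safe: the first and last deltas carry no `content` key and must fall back to the empty string.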


Can you make the same implementation in Objective-C?

Certainly! Here's an implementation of the same OpenAI ChatCompletion request in Objective-C:

```objc
#import "AFNetworking.h"

// record the time before the request is sent
NSDate *start = [NSDate date];

// set the API endpoint URL and parameters
// (gpt-3.5-turbo is a chat model, so use the chat completions endpoint)
NSString *endpoint = @"https://api.openai.com/v1/chat/completions";
NSDictionary *params = @{
    @"model": @"gpt-3.5-turbo",
    @"messages": @[
        @{ @"role": @"user",
           @"content": @"Count to 100, with a comma between each number and no newlines. E.g., 1, 2, 3, ..." }
    ],
    @"temperature": @0,
    @"stream": @YES
};

// set the OpenAI API key
NSString *apiKey = @"YOUR_API_KEY_HERE";

// create the HTTP request object using AFNetworking
AFHTTPSessionManager *manager = [AFHTTPSessionManager manager];
manager.requestSerializer = [AFJSONRequestSerializer serializer];
// with stream=true the body is a server-sent event stream, not a single JSON
// document, so use the raw response serializer and parse the chunks manually
manager.responseSerializer = [AFHTTPResponseSerializer serializer];
[manager.requestSerializer setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[manager.requestSerializer setValue:[NSString stringWithFormat:@"Bearer %@", apiKey] forHTTPHeaderField:@"Authorization"];

// send the HTTP request; AFNetworking's completion-based API delivers the
// body only once the response has finished, so the chunks are parsed after
// the fact rather than as they arrive
NSURLSessionDataTask *task = [manager POST:endpoint
                                parameters:params
                                   headers:nil
                                  progress:nil
                                   success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
    // calculate the time delay of the response
    double delay = [[NSDate date] timeIntervalSinceDate:start];

    // create variables to collect the stream of chunks
    NSMutableArray *collectedChunks = [NSMutableArray array];
    NSMutableArray *collectedMessages = [NSMutableArray array];

    // the body is a series of "data: {...}" lines; iterate through them
    NSString *body = [[NSString alloc] initWithData:responseObject encoding:NSUTF8StringEncoding];
    for (NSString *line in [body componentsSeparatedByString:@"\n"]) {
        if (![line hasPrefix:@"data: "] || [line isEqualToString:@"data: [DONE]"]) {
            continue;
        }
        NSData *chunkData = [[line substringFromIndex:6] dataUsingEncoding:NSUTF8StringEncoding];
        NSDictionary *chunk = [NSJSONSerialization JSONObjectWithData:chunkData options:0 error:nil];
        if (chunk == nil) { continue; }

        // save the event response
        [collectedChunks addObject:chunk];
        // extract the message (streamed chat chunks carry a "delta")
        NSString *message = chunk[@"choices"][0][@"delta"][@"content"] ?: @"";
        // save the message
        [collectedMessages addObject:message];
        // print the delay and text
        NSLog(@"Message received %.2f seconds after request: %@", delay, message);
    }

    // print the time delay and text received
    NSLog(@"Full response received %.2f seconds after request", delay);
    NSString *fullReplyContent = [collectedMessages componentsJoinedByString:@""];
    NSLog(@"Full conversation received: %@", fullReplyContent);
} failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
    NSLog(@"Error: %@", error);
}];
```

This implementation uses the AFNetworking library to send an HTTP POST request to the OpenAI chat completions endpoint. The params dictionary contains the request parameters, and the API key is passed in the Authorization header.

The request is sent using the AFHTTPSessionManager class. The POST method returns an NSURLSessionDataTask object, which is used to handle the response. Note that AFNetworking's completion-handler API only delivers the body once the response has finished, so the stream's chunks are processed after the fact rather than incrementally as they arrive.
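For reference, the body returned when streaming is enabled is a series of server-sent events, each a `data:` line carrying one JSON chunk and terminated by `data: [DONE]`. The body below is a hypothetical hand-written example (no network involved), but it sketches in Python the parsing a non-streaming HTTP client has to do:

```python
import json

# hypothetical server-sent-event body, as a streamed completion returns it
body = (
    'data: {"choices": [{"delta": {"content": "1, "}}]}\n\n'
    'data: {"choices": [{"delta": {"content": "2"}}]}\n\n'
    'data: [DONE]\n\n'
)

messages = []
for line in body.split('\n'):
    # skip blank lines and the terminating [DONE] sentinel
    if not line.startswith('data: ') or line == 'data: [DONE]':
        continue
    chunk = json.loads(line[len('data: '):])          # parse one JSON chunk
    messages.append(chunk['choices'][0]['delta'].get('content', ''))

print(''.join(messages))  # → 1, 2
```

The same split-filter-parse steps are what the Objective-C success block performs with `componentsSeparatedByString:` and `NSJSONSerialization`.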

The code then creates two NSMutableArray objects, collectedChunks and collectedMessages, to store the stream of chunks and messages, respectively. The response is iterated through using a for loop, and each chunk is processed to extract the message and calculate the time delay. The