Running a simple prompt
This section assumes you completed the prior sections and builds on the same code. By now, you should have instantiated Semantic Kernel and loaded both the GPT-3.5 and GPT-4 services into it, in that order. When you submit a prompt without specifying a service, Semantic Kernel defaults to the first service registered, so the prompt will run on GPT-3.5.
When we send the prompt to the service, we will also send a parameter called temperature. Temperature ranges from 0.0 to 1.0 and controls how random the responses are: a higher value such as 0.8 produces more creative, varied responses, while a lower value such as 0.2 produces more precise, predictable ones. We'll explain the temperature parameter in more detail in later chapters.
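To build some intuition for what temperature does under the hood, here is a small, self-contained sketch (plain Python, not Semantic Kernel code): language models assign a score to each candidate token, and temperature divides those scores before they are turned into probabilities. The scores below are made up for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide each score by the temperature, then normalize to probabilities.

    A low temperature sharpens the distribution (the top token dominates,
    so output is more precise); a high temperature flattens it (more
    randomness, so output is more creative).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
precise = softmax_with_temperature(logits, 0.2)
creative = softmax_with_temperature(logits, 0.8)
print(precise)   # top token gets almost all the probability
print(creative)  # probability is spread more evenly
```

At temperature 0.2 the best-scoring token is chosen almost every time; at 0.8 the other tokens get a meaningful share, which is where the extra "creativity" comes from.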
To send the prompt to the service, we will use a method called create_semantic_function. Don't worry for now about what a semantic function is; we'll explain it in the Using generative AI to solve simple problems section...
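Conceptually, create_semantic_function turns a prompt template plus settings (such as temperature) into an ordinary callable you can invoke like any Python function. The sketch below illustrates only that idea: FakeService is a hypothetical stand-in that echoes its input, not the real GPT-3.5 service, and the wrapper here is a simplified imitation of Semantic Kernel's method, not its actual implementation.

```python
class FakeService:
    """Hypothetical stand-in for a chat service such as GPT-3.5.

    It just echoes the prompt so the example runs without an API key.
    """
    def complete(self, prompt: str, temperature: float) -> str:
        return f"[temperature={temperature}] response to: {prompt}"

def create_semantic_function(service, template: str, temperature: float = 0.2):
    """Wrap a prompt template and settings into a plain callable (sketch)."""
    def run(**kwargs) -> str:
        # Fill the template's placeholders, then send it to the service.
        return service.complete(template.format(**kwargs), temperature)
    return run

summarize = create_semantic_function(
    FakeService(),
    "Summarize the following text: {text}",
    temperature=0.2,
)
print(summarize(text="Semantic Kernel is an SDK for building LLM apps."))
```

The point of this pattern is ergonomics: once the prompt is wrapped, the rest of your code calls it like a normal function and never deals with the raw prompt string or service settings again.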