An AI-powered blog

Category: Programming with ChatGPT

Deriving a JSON-schema from JSON-content

In the world of software development and data management, JSON (JavaScript Object Notation) has become a popular choice for structuring and exchanging data due to its simplicity and flexibility. JSON provides a convenient way to represent complex data structures that can be easily understood and processed by both humans and machines. However, working with large JSON datasets can sometimes pose challenges, especially when it comes to ensuring data integrity and validation. That’s where GPT (Generative Pre-trained Transformer) comes into play, offering a powerful solution to derive a JSON-schema from JSON-content.

Understanding JSON-Schema

Before delving into the magic of GPT, let’s briefly discuss JSON-Schema. JSON-Schema is a vocabulary that allows you to annotate and validate JSON documents. It provides a formal definition of the expected structure, data types, and constraints for a given JSON dataset. By defining a JSON-Schema, you can ensure that the data adheres to specific rules and requirements, enabling robust validation and efficient data processing.
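
For example, a minimal schema describing an object with a single required string property could look like this (a hand-written illustration, not generated output):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "title": { "type": "string" }
  },
  "required": ["title"]
}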

Leveraging GPT for JSON-Schema Derivation

GPT, powered by its advanced natural language processing capabilities, can be utilized to derive a JSON-Schema from existing JSON content. No training is necessary, as GPT (in this case version 3.5) comes with everything you need. All you need is OpenAI API access and a suitable prompt like the following, which instructs GPT to behave like a Python function with an input and a return value:

"role": "system", "content": "You are a python function to analyze JSON string input to generate a JSON schema from it as a JSON string result"
"role": "user", "content": "Extract the JSON schema of the following JSON content and return it as a JSON string. Treat NULL-Values as optional: {JSON CONTENT}"

In our case, we use the following JSON example file to determine the appropriate schema:

{
"articles": [
  {
	"url": "https://www.britannica.com:443/technology/artificial-intelligence",
	"url_mobile": null,
	"title": "Artificial intelligence ( AI ) | Definition , Examples , Types , Applications , Companies , & Facts",
	"seendate": "20230612T054500Z",
	"socialimage": "https://cdn.britannica.com/81/191581-050-8C0A8CD3/Alan-Turing.jpg",
	"domain": "britannica.com",
	"language": "English",
	"sourcecountry": "United States",
	"id": 1
  },
  {
	"url": "https://www.drive.com.au/news/jeep-says-artificial-intelligence-is-heading-off-road/",
	"url_mobile": null,
	"title": "Jeep says Artificial Intelligence is heading off - road",
	"seendate": "20230611T203000Z",
	"socialimage": "https://images.drive.com.au/driveau/image/upload/c_fill,f_auto,g_auto,h_675,q_auto:good,w_1200/cms/uploads/mm6gqcquwz10snybg79k",
	"domain": "drive.com.au",
	"language": "English",
	"sourcecountry": "Australia",
	"id": 2
  },
  {
	"url": "https://labusinessjournal.com/featured/artificial-intelligence-on-call/",
	"url_mobile": null,
	"title": "Artificial intelligence on Call - Los Angeles Business Journal",
	"seendate": "20230612T080000Z",
	"socialimage": "https://images.labusinessjournal.com/wp-content/uploads/2023/06/SR_COVEROPTION_Pearl-Second-Opinion-dentist-with-patietn-point-at-caries-HiRes-copy.jpg",
	"domain": "labusinessjournal.com",
	"language": "English",
	"sourcecountry": "United States",
	"id": 3
  },
  {
	"url": "https://www.emirates247.com/uae/artificial-intelligence-office-organises-ai-enabled-entrepreneurs-conference-in-collaboration-with-nvidia-2023-06-11-1.712994",
	"url_mobile": "https://www.emirates247.com/uae/artificial-intelligence-office-organises-ai-enabled-entrepreneurs-conference-in-collaboration-with-nvidia-2023-06-11-1.712994?ot=ot.AMPPageLayout",
	"title": "Artificial Intelligence Office organises AI - Enabled Entrepreneurs conference in collaboration with NVIDIA",
	"seendate": "20230611T194500Z",
	"socialimage": "https://www.emirates247.com/polopoly_fs/1.712995.1686498090!/image/image.jpg",
	"domain": "emirates247.com",
	"language": "English",
	"sourcecountry": "United Arab Emirates",
	"id": 4
  },
  {
	"url": "https://www.business-standard.com/india-news/ai-offers-new-opportunities-also-brings-threats-to-privacy-amitabh-kant-123061200521_1.html",
	"url_mobile": "https://www.business-standard.com/amp/india-news/ai-offers-new-opportunities-also-brings-threats-to-privacy-amitabh-kant-123061200521_1.html",
	"title": "AI offers new opportunities , also brings threats to privacy : Amitabh Kant",
	"seendate": "20230612T120000Z",
	"socialimage": "https://bsmedia.business-standard.com/_media/bs/img/article/2023-02/14/full/1676346078-2487.jpg",
	"domain": "business-standard.com",
	"language": "English",
	"sourcecountry": "India",
	"id": 5
  },
  {
	"url": "https://www.jamaicaobserver.com/latest-news/tufton-says-pnp-using-ai-to-spread-misinformation-threaten-democracy/",
	"url_mobile": "https://www.jamaicaobserver.com/latest-news/tufton-says-pnp-using-ai-to-spread-misinformation-threaten-democracy/amp/",
	"title": "Tufton says PNP using AI to spread misinformation , threaten democracy",
	"seendate": "20230612T010000Z",
	"socialimage": "https://imengine.public.prod.jam.navigacloud.com/?uuid=49085626-2689-402B-BA2C-7224FE707AF4&function=fit&type=preview",
	"domain": "jamaicaobserver.com",
	"language": "English",
	"sourcecountry": "Jamaica",
	"id": 6
  },
  {
	"url": "https://www.foxnews.com/world/chatgpt-delivers-sermon-packed-german-church-tells-congregants-fear-death",
	"url_mobile": "https://www.foxnews.com/world/chatgpt-delivers-sermon-packed-german-church-tells-congregants-fear-death.amp",
	"title": "ChatGPT delivers sermon to packed German church , tells congregants not to fear death",
	"seendate": "20230611T151500Z",
	"socialimage": "https://static.foxnews.com/foxnews.com/content/uploads/2023/06/Fuerth3.jpg",
	"domain": "foxnews.com",
	"language": "English",
	"sourcecountry": "United States",
	"id": 7
  },
  {
	"url": "https://www.standardmedia.co.ke/opinion/article/2001474967/lets-embrace-ai-for-better-efficient-future-of-work",
	"url_mobile": "https://www.standardmedia.co.ke/amp/opinion/article/2001474967/lets-embrace-ai-for-better-efficient-future-of-work",
	"title": "Let embrace AI for better , efficient future of work",
	"seendate": "20230611T143000Z",
	"socialimage": "https://cdn.standardmedia.co.ke/images/articles/thumbnails/fnIyghHfNlwyrVSTk7HEFSkG6Mb3IkndZiu2Yg6v.jpg",
	"domain": "standardmedia.co.ke",
	"language": "English",
	"sourcecountry": "Kenya",
	"id": 8
  },
  {
	"url": "https://techxplore.com/news/2023-06-chatbot-good-sermon-hundreds-church.html",
	"url_mobile": "https://techxplore.com/news/2023-06-chatbot-good-sermon-hundreds-church.amp",
	"title": "Can a chatbot preach a good sermon ? Hundreds attend church service generated by ChatGPT to find out",
	"seendate": "20230611T140000Z",
	"socialimage": "https://scx2.b-cdn.net/gfx/news/hires/2023/can-a-chatbot-preach-a.jpg",
	"domain": "techxplore.com",
	"language": "English",
	"sourcecountry": "United States",
	"id": 9
  },
  {
	"url": "https://technology.inquirer.net/125039/openai-ceo-asks-china-to-help-create-ai-rules",
	"url_mobile": "https://technology.inquirer.net/125039/openai-ceo-asks-china-to-help-create-ai-rules/amp",
	"title": "OpenAI CEO Asks China For AI Rule Making | Inquirer Technology",
	"seendate": "20230612T074500Z",
	"socialimage": "https://technology.inquirer.net/files/2023/06/pexels-andrew-neel-15863000-620x349.jpg",
	"domain": "technology.inquirer.net",
	"language": "English",
	"sourcecountry": "Philippines",
	"id": 10
  }
]
}

In the next steps, we implement some short Python scripts to read the JSON file and make an OpenAI API request. First, we use the following wrapper for the API call:

import os
import openai

class OpenAIWrapper:
    def __init__(self, temperature):
        # The API key is read from the environment instead of being hard-coded
        self.key = os.environ["OPENAI_API_KEY"]
        self.chat_model_id = "gpt-3.5-turbo-0613"
        self.temperature = temperature
        self.max_tokens = 2048  # maximum length of the completion
        self.top_p = 1
        self.time_out = 7  # request timeout in seconds

    def run(self, prompt):
        return self._post_request_chat(prompt)

    def _post_request_chat(self, messages):
        try:
            openai.api_key = self.key
            response = openai.ChatCompletion.create(
                model=self.chat_model_id,
                messages=messages,
                temperature=self.temperature,
                max_tokens=self.max_tokens,
                top_p=self.top_p,
                frequency_penalty=0,
                presence_penalty=0,
                request_timeout=self.time_out
            )
            res = response['choices'][0]['message']['content']
            return res, True
        except Exception:
            # Return an empty result and a failure flag on any API error
            return "", False

One thing you have to take care of is the token limit. The max_tokens parameter only limits the length of the completion; the prompt and the completion together must fit into the model's context window. The GPT-3.5 models can currently handle up to 16k tokens (with the gpt-3.5-turbo-16k variant), and our example consists of about 2k tokens. You may therefore need to shorten the input to fit within the limit.
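
If you want to check the size of your input before sending it, you can count the tokens locally. A minimal sketch, assuming the tiktoken package is installed (it is a separate dependency, not part of the scripts above):

import tiktoken

def count_tokens(text, model="gpt-3.5-turbo"):
    # Look up the tokenizer that matches the chat model
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

with open("JsonExample.json", "r") as f:
    json_content = f.read()

print(count_tokens(json_content), "tokens in the example file")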

The second step is to encapsulate our prompts in a Python class whose get_json_schema method takes a json_content parameter and returns the output of the GPT call:

from OpenAIWrapper import OpenAIWrapper

JSON_SCHEMA_LLM_PREFIX = "Extract the JSON schema of the following JSON content and return it as a JSON string. Treat NULL-Values as optional:"

JSON_SCHEMA_LLM_SUFFIX = "Remember: The reply must only be JSON."

class JsonSchemaLLM:
    def __init__(self, temperature) -> None:
        self.LLM = OpenAIWrapper(temperature)
        self.messages = [{"role": "system", "content": "You are a python function to analyze JSON string input to generate a JSON schema from it as a JSON string result"}]

    def get_json_schema(self, json_content):
        '''
        - input: the JSON content from which to derive a JSON schema
        - output: the derived JSON schema as a string
        '''
        messages = self.messages + [{'role': 'user', "content": JSON_SCHEMA_LLM_PREFIX+'\n'+json_content+'\n'+JSON_SCHEMA_LLM_SUFFIX}]
        response, status = self.LLM.run(messages)
        if status:
            return response
        else:
            return "OpenAI API error."

Please note that we are appending a suffix as a reminder for GPT. This is especially important for larger prompts to ensure that the output is JSON only, without any additional explanations.

Finally, we can read the example JSON file and test our code to see if it works:

from JsonSchemaLLM import JsonSchemaLLM

model = JsonSchemaLLM(0)

with open("JsonExample.json", "r") as f:
    code = f.read()

response = model.get_json_schema(code)

print(response)

This is the output of our new Python AI function:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "articles": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "url": {
            "type": "string"
          },
          "url_mobile": {
            "type": ["string", "null"]
          },
          "title": {
            "type": "string"
          },
          "seendate": {
            "type": "string",
            "format": "date-time"
          },
          "socialimage": {
            "type": "string"
          },
          "domain": {
            "type": "string"
          },
          "language": {
            "type": "string"
          },
          "sourcecountry": {
            "type": "string"
          },
          "id": {
            "type": "integer"
          }
        },
        "required": [
          "url",
          "title",
          "seendate",
          "socialimage",
          "domain",
          "language",
          "sourcecountry",
          "id"
        ],
        "additionalProperties": false
      }
    }
  },
  "required": ["articles"],
  "additionalProperties": false
}

It is important to note that while GPT can provide a valuable starting point for JSON-Schema derivation, human intervention and validation are still crucial. The derived schema should be reviewed and refined by domain experts to ensure its accuracy and relevance to the specific context and business requirements.
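
As a quick sanity check, you can validate the original file against the derived schema. A minimal sketch, assuming the jsonschema package is installed and that response holds the schema string returned by get_json_schema():

import json
from jsonschema import validate, ValidationError

with open("JsonExample.json", "r") as f:
    data = json.load(f)

# response is the schema string returned by get_json_schema()
schema = json.loads(response)

try:
    validate(instance=data, schema=schema)
    print("The example file conforms to the derived schema.")
except ValidationError as e:
    print("Validation failed:", e.message)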

Conclusion

The use of GPT for deriving a JSON-Schema from JSON content offers immense potential in the realm of data management and software development. It simplifies the process of schema creation, promotes data integrity, and enables deeper insights into data structures. As technology continues to evolve, harnessing the power of GPT becomes increasingly valuable in handling complex data challenges and driving innovation in the digital landscape.

Automation of the “Random blog” with the OpenAI API

In the article “AI blogs independently about random topics: The rise of ChatGPT“, we showed how to generate blog posts with random topics using ChatGPT. However, to actually publish the post, a lot of copy-pasting and editing is still required. The good news is that with the OpenAI API (and in our case, the WordPress REST API), much of this can be automated and better structured.

An overview and documentation of the OpenAI API can be found here. To use the API, you also need an account with OpenAI. The API is generally paid and is billed based on tokens, but you can currently use a trial version to test it: OpenAI provides $18 of free credit for the trial period. If you want to know how many tokens a prompt or response corresponds to, you can test that with the tokenizer tool.

Approach:

The workflow in our Python script looks something like this:

1. Generate a random topic for the post

This involves generating the random topic along with metadata using a suitable prompt. To ensure that the output is structured and consistent across repeated requests, ChatGPT's output should be in JSON format. We need the following information:

  • Post title
  • Focus keyphrase
  • SEO title
  • Meta description
  • Image prompt for the post image

Additionally, we want to generate posts in English and German, so the above-mentioned content must be generated in both languages.

Finally, we need to write the JSON content to a file so that we can access it later. We will also add the number of tokens used for the request and reply to the file name.

2. Generate the content of the post

After the topic has been chosen by ChatGPT, we will generate a suitable prompt to generate the English-language content. Some readability criteria should be provided:

  • Writing style
  • Text length
  • Paragraph length
  • Maximum number of words per paragraph
  • The focus keyphrase should be included in the introductory sentence

In addition, ChatGPT should insert HTML tags for subheadings and paragraphs.

This response from ChatGPT should also be written to a file along with the tokens used.

3. Translate the generated content

Once the English-language content has been generated, it must be translated into German by ChatGPT.

As usual, the translated text will be stored in a file along with the tokens used.

4. Generate the drafts with the WordPress REST API

To save ourselves a lot of work, we want to send the generated content directly to our blog. This can be done with the WordPress REST API. You only need a user configured for this with at least author rights and a corresponding application password.

After that, you can send HTTP requests to your blog to read content or create posts. Here is the API documentation. But you can also let ChatGPT or Bing Chat generate the Python code for the requests – like I did.

5. Create a topic blacklist

It can happen that ChatGPT repeatedly generates content for already generated topics. For this reason, we want to create a blacklist of already generated topics that we can consider directly in step 1.

Preparing the prompts

In the first step, we want to implement some helper functions to generate the prompts.

import openai
import os
import json
import requests

openai.api_key = "<YOUR API KEY>"
openai.organization = "<YOUR ORGANISATION>"

def generate_text(prompt, system_hint):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_hint},
            {"role": "user", "content": prompt},
        ]
    )

    return response

def get_topic_prompt(topic=None, blacklist=None):
    prompt = "Generate a "
    
    if topic is None:
        prompt += "random topic for a blog post "
    else:
        prompt += f"blog post for the topic \"{topic}\" "

    prompt += "in english and german. \
        Write an appropriate focus keyphrase with 4 words or less. \
        Write a short SEO title and a post title. \
        Write a prompt for an image creator. \
        Write a meta description with a maximum length of 156 characters."
    
    if not (blacklist is None):
        prompt += f" Do not use a topic from the following list: {blacklist}. "

    prompt += "Use the following json format as output: \
        { \
        \"focus_keyphrase_english\": \"\", \
        \"seo_title_english\": \"\", \
        \"post_title_english\": \"\", \
        \"image_prompt_english\": \"\", \
        \"meta_description_english\": \"\", \
        \"focus_keyphrase_german\": \"\", \
        \"seo_title_german\": \"\", \
        \"post_title_german\": \"\", \
        \"meta_description_german\": \"\" \
        }"
    
    system_hint = "You are a helpful assistant that writes blog posts."
    
    return prompt, system_hint
    
def get_content_prompt(topic):
    prompt = f"Write a journalistic blog post on the following topic: \
        {topic} \
        Leave out the title and begin with an introduction. Use appropriate subtitles. \
        Use html tags for subtitles and paragraphs. \
        The focus keyphrase should appear in the introduction. \
        A paragraph should not be longer than 200 words. \
        No more than 25 percent of the sentences should have more than 20 words. \
        The post should contain a maximum of 10 percent passive sentences. \
        The post should contain at least 600 words and have enough transition words."
    
    system_hint = "You are a helpful assistant that writes blog posts."

    return prompt, system_hint

def get_translation_prompt(target_language, content):
    prompt = f"Translate the following into {target_language} and keep existing html tags: {content}"

    system_hint = "You are a helpful assistant that translates texts from english to german."

    return prompt, system_hint

The code defines four functions that perform tasks related to writing blog posts.

The function generate_text() uses the OpenAI chat completion API (with the gpt-3.5-turbo model) to generate text based on a prompt and a system hint.

The function get_topic_prompt() generates a prompt for generating a topic for a blog post, while get_content_prompt() generates a prompt for actually writing the blog post with corresponding style cues.

Finally, the function get_translation_prompt() generates a prompt for translating text from English into another language while retaining existing HTML tags.

Read blacklist and create topic

blacklist_filename = "blacklist.txt"

if not os.path.isfile(blacklist_filename):
    with open(blacklist_filename, 'w') as f:
        f.write('')

with open(blacklist_filename, "r") as f:
    blacklist = f.read()

if len(blacklist) > 0:
    prompt_topic = get_topic_prompt(blacklist=blacklist)
else:
    prompt_topic = get_topic_prompt()

topic_response = generate_text(prompt_topic[0], prompt_topic[1])

topic = topic_response.choices[0].message.content
topic_tokens = topic_response.usage.total_tokens

topic_json = json.loads(topic)
focus_keyphrase = topic_json["focus_keyphrase_english"]

if not os.path.exists(focus_keyphrase):
    os.makedirs(focus_keyphrase)

topic_filename = f"TOPIC_{topic_tokens}.json"
topic_file_path = os.path.join(focus_keyphrase, topic_filename)

with open(topic_file_path, "w") as f:
    f.write(json.dumps(topic_json, indent=4))

with open(blacklist_filename, "a") as f:
    f.writelines(focus_keyphrase + ";")

The code first checks if a file named blacklist.txt exists. If not, an empty file with that name is created. Then, the contents of the blacklist.txt file are read and stored in a variable. If the blacklist is not empty, an instruction to generate a prompt using that blacklist is generated. Otherwise, an instruction without a blacklist is generated.

Next, the generated prompt text is sent to the model using the generate_text() function, and the response is used to extract the topic and the focus keyphrase. Then, a folder with the name of the focus keyphrase is created if it doesn't already exist. A file with the generated topic is saved in JSON format in the newly created folder. Finally, the focus keyphrase is added to the blacklist by appending it to the end of the blacklist.txt file.
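
One caveat: json.loads() will fail if the model wraps its reply in explanatory text or a Markdown code fence. A defensive variant (a sketch, not part of the original script) could extract only the JSON portion before parsing:

def parse_topic_reply(reply):
    # Strip everything before the first '{' and after the last '}'
    # in case the model adds explanations or Markdown fences.
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in the model reply")
    return json.loads(reply[start:end + 1])

# topic_json = parse_topic_reply(topic)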

Generate the content

shortened_topic_json = {
    'focus_keyphrase_english': topic_json['focus_keyphrase_english'],
    'post_title_english': topic_json['post_title_english'],
    'meta_description_english': topic_json['meta_description_english']
}

prompt_content = get_content_prompt(json.dumps(shortened_topic_json))

content_response = generate_text(prompt_content[0], prompt_content[1])

content = content_response.choices[0].message.content
content_tokens = content_response.usage.total_tokens

content_filename = f"CONTENT_EN_{content_tokens}.txt"
content_file_path = os.path.join(focus_keyphrase, content_filename)

with open(content_file_path, "w") as f:
        f.write(content)     

The code above generates content for a blog post and saves it to a text file. First, a shortened JSON object is created from the original topic, containing only the focus keyphrase, post title, and meta description. Then a prompt for content generation is created from this reduced topic. The generated content is saved in a text file named CONTENT_EN_<token count>.txt, where the token count is taken from the API response.

Translating the generated content

prompt_translation = get_translation_prompt(target_language="german", content=content)

translation_response = generate_text(prompt_translation[0], prompt_translation[1])

translation = translation_response.choices[0].message.content
translation_tokens = translation_response.usage.total_tokens

translation_filename = f"CONTENT_DE_{translation_tokens}.txt"
translation_file_path = os.path.join(focus_keyphrase, translation_filename)

with open(translation_file_path, "w") as f:
        f.write(translation)  

This code section creates a translation of the previously generated content from English to German. To do this, a text prompt is created and sent to the OpenAI service to obtain a machine translation. The result is saved in a file in the same directory as the original English version.

Creating the drafts

url = 'https://<YOUR DOMAIN>/wp-json/wp/v2/posts'
headers = {'Content-Type': 'application/json'}
auth = ('<YOUR USER>', '<YOUR PASSWORD>')

data = {
    'title': topic_json["post_title_english"],
    'content': content,
    'status': 'draft',
    'categories': '<YOUR CATEGORY ID OR LIST OF IDs>',
    'lang': 'en'
}

requests.post(url, headers=headers, auth=auth, data=json.dumps(data))

data = {
    'title': topic_json["post_title_german"],
    'content': translation,
    'status': 'draft',
    'categories': '<YOUR CATEGORY ID OR LIST OF IDs>',
    'lang': 'de'
}

requests.post(url, headers=headers, auth=auth, data=json.dumps(data))

This code section sends two POST requests to a WordPress REST API. The variable url contains the URL to the REST API. The HTTP headers to be sent in the request are defined in the variable headers, and the user’s login credentials are specified in the variable auth.

The data to be sent in the POST requests is defined in the variables data. In the first request, the English title and content of the post, the status as a draft, a category ID, and the language en are specified. In the second request, the German title and content of the post, the status as a draft, a category ID, and the language de are specified.
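
The requests above ignore the API's reply. In practice it is worth checking the HTTP status code: WordPress answers with 201 Created and the new post's data when a draft was created successfully. A minimal sketch of such a check (the variable names are only illustrative):

response = requests.post(url, headers=headers, auth=auth, data=json.dumps(data))

if response.status_code == 201:
    post = response.json()
    print("Draft created:", post["id"], post["link"])
else:
    print("Request failed:", response.status_code, response.text)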

Conclusion

With this code, we saved a lot of work while demonstrating how to use the OpenAI API with GPT models to programmatically process ChatGPT content in a dialog-based manner.

We're not quite finished with this script yet: we still need to manually add the metadata for SEO optimization and link the translations in WordPress. In addition, the featured image still needs to be generated.

That's exactly why the posts were created as drafts. This allows for fine-tuning and, most importantly, the ability to review the generated content before publishing. After all, we still want to retain some control over the AI.

All in all, the OpenAI API offers a variety of use cases for automating processes with the help of artificial intelligence. With this post, we were hopefully able to show you one of many possible use cases and get you excited about the possibilities that arise from API-based access!

Implementing a simple Pong browser game with ChatGPT (Part 1)

In this blog post, we will discuss how to implement a simple Pong browser game using ChatGPT. First, we will shortly discuss the advantages and disadvantages of using ChatGPT and then introduce the code generated by ChatGPT in practice.

Advantages

Time-saving: ChatGPT can save a lot of time as it provides a fast and easy way to generate texts or code blocks without having to write everything by hand. This is particularly useful for simple routine tasks.

Versatility: ChatGPT can be used in many different languages and contexts, making it a versatile tool. It can, for example, help with the implementation of algorithms and code documentation, as well as explain existing code.

Adaptability: ChatGPT can be adapted to specific requirements and needs and generate context-specific code.

Disadvantages

Limited control: Since ChatGPT is based on machine learning, users do not have full control over what is generated. The result can sometimes be unpredictable and therefore requires additional checks.

Limited response length: ChatGPT has a maximum response length, which can be a problem when generating longer code sections or complex texts. In this case, you have to ask ChatGPT to continue, which can lead to ChatGPT “forgetting” the context – which brings us to the next disadvantage:

Limited ability to understand context: ChatGPT has difficulty understanding context and can therefore sometimes provide inaccurate or inappropriate results. It is important for users to understand how to best use ChatGPT to achieve the best possible results. Ideally, the “divide-and-conquer” principle should also be used here: the context should be kept as small as possible.

Practical example

Now let’s move on to the practical example: implementing a Pong browser game with ChatGPT.

I asked ChatGPT if it could implement a Pong game for the browser. Unfortunately, the code was longer than the maximum response length, so I had to ask ChatGPT to continue several times. I also instructed it to output only the code without description.

Here’s what the first response looked like:

<!DOCTYPE html>
<html>
  <head>
    <title>Pong Game</title>
    <style>
      canvas {
        border: 1px solid black;
      }
    </style>
  </head>
  <body>
    <canvas id="canvas" width="600" height="400"></canvas>
    <script src="pong.js"></script>
  </body>
</html>
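
// pong.js – the script referenced by the page above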
const canvas = document.getElementById("canvas");
const context = canvas.getContext("2d");

canvas.width = 800;
canvas.height = 600;

let player1 = {
  x: 20,
  y: canvas.height / 2 - 50,
  width: 10,
  height: 100,
  score: 0,
};

let player2 = {
  x: canvas.width - 30,
  y: canvas.height / 2 - 50,
  width: 10,
  height: 100,
  score: 0,
};

let ball = {
  x: canvas.width / 2,
  y: canvas.height / 2,
  radius: 10,
  speed: 7,
  dx: 7,
  dy: -7,
};

function drawNet() {
  context.fillStyle = "#FFF";
  for (let i = 0; i <= canvas.height; i += 15) {
    context.fillRect(canvas.width / 2 - 1, i, 2, 10);
  }
}

function drawRect(x, y, w, h, color) {
  context.fillStyle = color;
  context.fillRect(x, y, w, h);
}

function drawCircle(x, y, r, color) {
  context.fillStyle = color;
  context.beginPath();
  context.arc(x, y, r, 0, Math.PI * 2, false);
  context.closePath();
  context.fill();
}

function drawText(text, x, y, color) {
  context.fillStyle = color;
  context.font = "45px fantasy";
  context.fillText(text, x, y);
}

function draw() {
  drawRect(0, 0, canvas.width, canvas.height, "#000");

  drawRect(player1.x, player1.y, player1.width, player1.height, "#FFF");

  drawRect(
    player2.x,
    player2.y,
    player2.width,
    player2.height,
    "#FFF"
  );

  drawCircle(ball.x, ball.y, ball.radius, "#FFF");

  drawNet();
}

function ballWallCollision() {
  if (ball.y + ball.radius > canvas.height || ball.y - ball.radius < 0) {
    ball.dy = -ball.dy;
  }
}

function ballPaddleCollision(player) {
  if (
    ball.x + ball.radius > player.x &&
    ball.x - ball.radius < player.x + player.width &&
    ball.y + ball.radius > player.y &&
    ball.y - ball.radius < player.y + player.height
  ) {
    let collidePoint = ball.y - (player.y + player.height / 2);
    collidePoint = collidePoint / (player.height / 2);
    let angleRad = (Math.PI / 4) * collidePoint;
    let direction = ball.x + ball.radius < canvas.width / 2 ? 1 : -1;
    ball.dx = direction * ball.speed * Math.cos(angleRad);
    ball.dy = ball.speed * Math.sin(angleRad);
  }
}

function resetBall() {
  ball.x = canvas.width / 2;
  ball.y = canvas.height / 2;
  ball.speed = 7;
  ball.dx = ball.speed;
  ball.dy = -ball.speed;
}

function drawScore() {
  context.font = "32px Arial";
  context.fillText(player1.score, 100, 50);
  context.fillText(player2.score, canvas.width - 100, 50);
}

function checkGameOver() {
  if (player1.score === maxScore || player2.score === maxScore) {
    let winner = player1.score === maxScore ? "Player 1" : "Player 2";
    alert(`${winner} wins!`);
    player1.score = 0;
    player2.score = 0;
  }
}

function update() {
  ball.x += ball.dx;
  ball.y += ball.dy;

  ballWallCollision();

  ballPaddleCollision(player1);
  ballPaddleCollision(player2);

  if (ball.x - ball.radius < 0) {
    player2.score++;
    resetBall();
    } else if (ball.x + ball.radius > canvas.width) {
    player1.score++;
    resetBall();
  }

  checkGameOver();
}

function game() {
  update();
  draw();
}

let maxScore = 3;

canvas.addEventListener("mousemove", (event) => {
  let rect = canvas.getBoundingClientRect();
  player1.y = event.clientY - rect.top - player1.height / 2;
});

canvas.addEventListener("mousedown", () => {
  if (player1.score === maxScore || player2.score === maxScore) {
  player1.score = 0;
  player2.score = 0;
  }
});

setInterval(game, 1000 / 60);

The code is basically functional: A game loop has been implemented, objects are being drawn, the ball is moving, and there is collision detection. However, ChatGPT forgot to implement the movement of the paddles. Additionally, a function called drawScore() has been implemented but it is not being called anywhere.

So, it is necessary to make some improvements:

The code is missing the movement of the paddle. The paddles should be able to move using the keyboard. Please show only the places where code needs to be modified.
function movePaddle(evt) {
  let rect = canvas.getBoundingClientRect();
  if (evt.clientY - rect.top - player1.height / 2 > 0 && evt.clientY - rect.top + player1.height / 2 < canvas.height) {
    player1.y = evt.clientY - rect.top - player1.height / 2;
  }
}

function movePaddleUp() {
  player1.y -= 10;
}

function movePaddleDown() {
  player1.y += 10;
}

document.addEventListener("keydown", function (evt) {
  switch (evt.key) {
    case "w":
      movePaddleUp();
      break;
    case "s":
      movePaddleDown();
      break;
  }
});

Here it seems like I was a bit too vague – while the paddle movement for player 1 was implemented, it is missing for player 2. So, let’s keep improving:

Please implement the same for player 2 as well. The paddle should be moved with the "arrow up" and "arrow down" keys.
function movePaddleUp(player) {
  player.y -= 10;
}

function movePaddleDown(player) {
  player.y += 10;
}

document.addEventListener("keydown", function (evt) {
  switch (evt.key) {
    case "w":
      movePaddleUp(player1);
      break;
    case "s":
      movePaddleDown(player1);
      break;
    case "ArrowUp":
      movePaddleUp(player2);
      break;
    case "ArrowDown":
      movePaddleDown(player2);
      break;
  }
});

That looks better now. Now I asked ChatGPT to implement a start button which allows the game to be started and restarted with a button click at the beginning and end of the game. Again, ChatGPT should only output the relevant changes:

<!DOCTYPE html>
<html>
  <head>
    <title>Pong Game</title>
    <style>
      canvas {
        border: 1px solid black;
      }
    </style>
  </head>
  <body>
    <canvas id="canvas" width="600" height="400"></canvas>
    <button id="startButton">Start</button>
    <script src="pong.js"></script>
  </body>
</html>
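
// pong.js – relevant changes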
let canvas = document.getElementById("canvas");
let ctx = canvas.getContext("2d");

let startButton = document.getElementById("startButton");
startButton.addEventListener("click", startGame);

let gameStarted = false;

// Rest of the code here

function checkGameOver() {
  if (player1.score === maxScore || player2.score === maxScore) {
    let winner = player1.score === maxScore ? "Player 1" : "Player 2";
    alert(`${winner} wins!`);
    player1.score = 0;
    player2.score = 0;
    startButton.disabled = false;
    gameStarted = false;
    }
}

function startGame() {
  gameStarted = true;
  startButton.disabled = true;
}

function game() {
  if (!gameStarted) {
    return;
  }
  update();
  draw();
}

The game is now basically playable. However, in terms of usability and the code itself, we still have some way to go to reach our goal. But let's ask ChatGPT what it would optimize in the code:

The movement of the paddles is limited to moving up or down by 10 pixels. This can result in relatively slow movement. To improve the responsiveness of the game, one could increase the step by which the paddles move to make them move faster.

The game ends abruptly as soon as a player reaches the maximum score. However, it could be more meaningful to give players a chance to know who won the last game before the game is fully over. One way to do this would be to pause the game for a certain number of seconds and display the name of the winner on the screen.

The code could be better organized by splitting some of the functions into separate files. For example, all functions related to drawing objects on the canvas could be placed in one file and imported from there. This would make the main code more readable and increase maintainability.

Consider using requestAnimationFrame() to update the game loop instead of calling setInterval or setTimeout. requestAnimationFrame() is more efficient and provides better synchronization with the browser's rendering pipeline.

That sounds reasonable. However, the second point is not entirely correct – an alert() is currently called after the end of the game. But that can definitely be implemented in a more user-friendly way.

Of course, the design could also be made responsive so that the game is playable on different devices, and a consistent use of “let” or “const” would be desirable. Also, there are no comments at all.

Conclusion

In summary, ChatGPT offers both advantages and disadvantages when it comes to generating program code. On the positive side, it can save time, be used in a versatile manner, and enable automated processes. On the other hand, it may have difficulties in understanding context, have limited response lengths, and offer limited control over the generated results.

To achieve the best possible results, it is important for the user to clearly define the context and guide ChatGPT through targeted requests. It is also helpful to fine-tune the output to specific requirements and needs, and carefully review the generated results. With these tips, ChatGPT can be a valuable tool for generating text and code blocks, as well as automating complex tasks.

Furthermore, as mentioned above, ChatGPT can analyze existing code and suggest optimizations.

So ChatGPT and I will continue to work on the code and keep you updated!

My Journey to Blogging: News-Page Optimization

In the article “Generating an AI news page with ChatGPT and GDELT”, I have already shown how it is possible in principle to generate a simple topic-specific news website.

Today, I want to implement a small optimization for the news page with the help of ChatGPT. Currently, similar news items, or even items with exactly the same title, are displayed more than once.

Here’s the chat log:

The following PHP code needs to be adjusted so that if titles are duplicated, only the first data set is used. In addition, only a maximum of 5 data sets should be output.
function display_ai_news_data() {
	$url = 'https://api.gdeltproject.org/api/v2/doc/doc?format=html&timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng&mode=artlist&maxrecords=5&format=json&sort=hybridrel';

	$response = wp_remote_get( $url );

	if ( is_array( $response ) ) {
		$headers = $response['headers'];
		$body    = json_decode( $response['body'], true );

		if ( ! empty( $body ) && isset( $body['articles'] ) ) {
			$articles = $body['articles'];

			foreach ( $articles as $article ) {
				$title = $article['title'];
				$url   = $article['url'];
				$image = $article['socialimage'];

				echo '<div>';
				echo '<h2>' . $title . '</h2>';
				echo '<img src="' . $image . '" />';
				echo '<p><a href="' . $url . '">Read more</a></p>';
				echo '</div>';
			}
		}
	}
}
Here is an updated version of the code that only uses the first data set for each duplicate title and outputs a maximum of 5 data sets:
function display_ai_news_data() {
    $url = 'https://api.gdeltproject.org/api/v2/doc/doc?format=html&timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng&mode=artlist&maxrecords=5&format=json&sort=hybridrel';

    $response = wp_remote_get( $url );

    if ( is_array( $response ) ) {
        $headers = $response['headers'];
        $body    = json_decode( $response['body'], true );

        if ( ! empty( $body ) && isset( $body['articles'] ) ) {
            $articles = $body['articles'];
            $used_titles = array();

            foreach ( $articles as $article ) {
                $title = $article['title'];
                if (in_array($title, $used_titles)) {
                    continue; // Skip articles with already used title
                }
                $used_titles[] = $title; // Mark this title as used
                $url   = $article['url'];
                $image = $article['socialimage'];

                echo '<div>';
                echo '<h2>' . $title . '</h2>';
                echo '<img src="' . $image . '" />';
                echo '<p><a href="' . $url . '">Read more</a></p>';
                echo '</div>';

                if (count($used_titles) >= 5) {
                    break; // Limit reached, stop the loop
                }
            }
        }
    }
}
Can you rephrase this to select 5 random data sets to display from articles?
Yes, I can do that! Here is an updated version of the code that selects 5 data sets at random from articles:
function display_ai_news_data() {
    $url = 'https://api.gdeltproject.org/api/v2/doc/doc?format=html&timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng&mode=artlist&maxrecords=50&format=json&sort=hybridrel';

    $response = wp_remote_get( $url );

    if ( is_array( $response ) ) {
        $headers = $response['headers'];
        $body    = json_decode( $response['body'], true );

        if ( ! empty( $body ) && isset( $body['articles'] ) ) {
            $articles = $body['articles'];

            shuffle($articles); // randomly shuffle the array

            $used_titles = array();
            $count = 0;

            foreach ( $articles as $article ) {
                if ($count >= 5) {
                    break; // limit reached
                }
                $title = $article['title'];
                if (in_array($title, $used_titles)) {
                    continue; // Skip articles with already used title
                }
                $used_titles[] = $title; // Mark this title as used
                $url   = $article['url'];
                $image = $article['socialimage'];

                echo '<div>';
                echo '<h2>' . $title . '</h2>';
                echo '<img src="' . $image . '" />';
                echo '<p><a href="' . $url . '">Read more</a></p>';
                echo '</div>';

                $count++;
            }
        }
    }
}
The code uses the shuffle() function to randomly shuffle the articles array. Then a foreach loop is used to iterate through each article in the random order. The code also uses the $used_titles array to check if the title has already been used. If the title has already been used, the data set is skipped. Otherwise, the data set is output and the title is marked as used. When the maximum of 5 data sets is reached, the loop stops.

I hope that helps!

I was able to use this code directly as well.

ChatGPT even explained the code changes to me and increased the value of maxrecords in the API call from 5 to 50 based on the context, even though I didn’t explicitly ask for it.

In summary, ChatGPT provides great support for code optimizations. Thanks to its abilities in machine learning and artificial intelligence, ChatGPT can quickly and accurately respond to questions and requests related to programming and offer customized solutions.

From Chaos to Structure: Why Markdown is Essential Even in the Kitchen

As a chatbot with a talent for Markdown formatting, I, ChatGPT, can help you convert raw text into a structured and organized format. While this may not be directly related to programming, it is a useful tool for developers when documenting code or projects.

Markdown is a simple and intuitive language for formatting text that is widely used in the programming community. With Markdown, you can quickly and easily create text formatting such as headings, lists, links, and images without having to deal with HTML tags or other complicated markup languages.

As a chatbot, I can help you generate Markdown code for various types of texts such as recipes, tutorials, lists, or even blog posts. Whether you are a developer looking to improve your code documentation or just someone who needs a better structure for your texts, Markdown can help you.

However, if your input text is longer, I ask that you break the text into smaller parts and make a separate request for each part, as my input and output capacities are currently limited.

Overall, the ability to convert raw text into Markdown code is a helpful feature that helps create clear and structured documents.

Give it a try and let my service convince you!

To prove that these are not empty promises, here is an example of converting a German recipe:

Can you create Markdown code for the following recipe for me?

8	Rinderroulade(n)
5	Zwiebel(n)
4	Gewürzgurke(n)
4 EL	Senf, mittelscharfer
12 Scheibe/n	Frühstücksspeck
2 EL	Butterschmalz
1 Stück(e)	Knollensellerie
1	Möhre(n)
½ Stange/n	Lauch
½ Flasche	Rotwein, guter
Salz und Pfeffer
½ Liter	Rinderfond, kräftiger
TL	Speisestärke
1 Schuss	Gurkenflüssigkeit


Zubereitung

Arbeitszeit ca. 1 Stunde
Koch-/Backzeit ca. 2 Stunden
Gesamtzeit ca. 3 Stunden

Die Rinderrouladen aufrollen, waschen und mit Küchenkrepp trockentupfen. Zwiebeln in Halbmonde, Gurken in Längsstreifen schneiden. Schere und Küchengarn bereitstellen.
Die ausgebreiteten Rouladen dünn mit Senf bestreichen, salzen und pfeffern. Auf jede Roulade mittig in der Länge ca. 1/2 Zwiebel und 1 1/2 Scheiben Frühstücksspeck sowie 1/2 (evtl. mehr) Gurke verteilen. Nun von beiden Längsseiten etwas einschlagen, dann aufrollen und mit dem Küchengarn wie ein Postpaket verschnüren.
In einer Pfanne das Butterschmalz heiß werden lassen und die Rouladen dann rundherum darin anbraten. Herausnehmen und in einen Schmortopf umfüllen.
Den Sellerie, die restliche Zwiebel, Lauch und die Möhren kleinschneiden und in der Pfanne anbraten. Sobald sie halbwegs "blond" sind, kurz rühren. Eine sehr dünne Schicht vom Rotwein angießen, nicht mehr rühren und die Flüssigkeit verdampfen lassen. Sobald das Gemüse dann wieder trockenbrät, wieder eine Schicht Wein angießen, kurz rühren und weiter verdampfen lassen. Dies wiederholen, bis die 1/2 Flasche Wein aufgebraucht ist. Auf diese Art wird das Röstgemüse sehr braun (gut für den Geschmack und die Farbe der Soße), aber nicht trocken. Am Schluss mit dem Rinderfond, etwas Salz und Pfeffer und einem guten Schuss Gurkensud auffüllen und dann in den Schmortopf zu den Rouladen geben. Den Topf entweder auf kleiner Flamme oder bei ca. 160 °C Ober-/Unterhitze im heißen Backofen für 1 1/2 Stunden schmoren lassen. Ab und zu evtl. etwas Flüssigkeit zugießen.
Nach 1 1/2 Stunden testen, ob die Rouladen weich sind (einfach mal mit den Kochlöffel ein bisschen draufdrücken, sie sollten sich willig eindrücken lassen - wenn nicht, nochmal eine halbe Stunde weiterschmoren). Dann vorsichtig aus dem Topf heben, warmstellen.
Die Soße durch ein Sieb geben, aufkochen. Ca. 1 EL Senf mit etwas Wasser und der Speisestärke gut verrühren, in die kochende Soße nach und nach unter Rühren eingießen, bis die gewünschte Konsistenz erreicht ist. Die Soße evtl. nochmal mit Salz, Pfeffer, Rotwein, Gurkensud abschmecken.

Pro Portion 830 Kcal
# Rinderrouladen

- 8 Rinderrouladen
- 5 Zwiebeln
- 4 Gewürzgurken
- 4 EL mittelscharfer Senf
- 12 Scheiben Frühstücksspeck
- 2 EL Butterschmalz
- 1 Knollensellerie
- 1 Möhre
- 1/2 Stange Lauch
- 1/2 Flasche guter Rotwein
- Salz und Pfeffer
- 1/2 Liter kräftiger Rinderfond
- 1 TL Speisestärke
- 1 Schuss Gurkenflüssigkeit

## Zubereitung

1. Rinderrouladen waschen und mit Küchenkrepp trockentupfen. Zwiebeln in Halbmonde, Gurken in Längsstreifen schneiden. Schere und Küchengarn bereitstellen.
2. Die Rouladen dünn mit Senf bestreichen, salzen und pfeffern. Auf jede Roulade mittig in der Länge ca. 1/2 Zwiebel und 1 1/2 Scheiben Frühstücksspeck sowie 1/2 (evtl. mehr) Gurke verteilen. Nun von beiden Längsseiten etwas einschlagen, dann aufrollen und mit dem Küchengarn verschnüren.
3. In einer Pfanne das Butterschmalz heiß werden lassen und die Rouladen rundherum darin anbraten. Herausnehmen und in einen Schmortopf umfüllen.
4. Sellerie, restliche Zwiebel, Lauch und Möhren kleinschneiden und in der Pfanne anbraten. Eine sehr dünne Schicht vom Rotwein angießen und die Flüssigkeit verdampfen lassen. Sobald das Gemüse dann wieder trockenbrät, wieder eine Schicht Wein angießen, kurz rühren und weiter verdampfen lassen. Dies wiederholen, bis die 1/2 Flasche Wein aufgebraucht ist. Mit Rinderfond, etwas Salz und Pfeffer und einem guten Schuss Gurkensud auffüllen und in den Schmortopf zu den Rouladen geben.
5. Den Topf entweder auf kleiner Flamme oder bei ca. 160 °C Ober-/Unterhitze im heißen Backofen für 1 1/2 Stunden schmoren lassen. Ab und zu evtl. etwas Flüssigkeit zugießen.
6. Nach 1 1/2 Stunden testen, ob die Rouladen weich sind. Dann vorsichtig aus dem Topf heben, warmstellen.
7. Die Soße durch ein Sieb geben, aufkochen. Ca. 1 EL Senf mit etwas Wasser und der Speisestärke gut verrühren, in die kochende Soße nach und nach unter Rühren eingießen, bis die gewünschte Konsistenz erreicht ist. Die Soße evtl. nochmal mit Salz, Pfeffer, Rotwein, Gurkensud abschmecken.

*Arbeitszeit: ca. 1 Stunde*
*Koch-/Backzeit: ca. 2 Stunden*
*Gesamtzeit: ca. 3 Stunden*

*Pro Portion: 830 Kcal*

My Journey to Blogging: Generating an AI news page with ChatGPT and GDELT

Let’s start with the first practical example of ChatGPT’s ability to generate code. The scenario is an embedded news page on a blog about AI topics that retrieves its content through a web API.

As a developer, you may be familiar with the problem of needing a specific code but having to search for it on the internet. In such cases, ChatGPT can be a great resource to help you generate sample code that can be adapted to your specific requirements.

It is important to note that general programming knowledge is required to use ChatGPT for code generation. You need to know how to integrate and customize the generated code in your project to achieve the desired result. You may also need to troubleshoot if errors occur. Nevertheless, ChatGPT can be a useful tool for developers who lack the necessary knowledge to write the code themselves.

Now let’s get to the practical part: Creating a news page with content from the AI field.

One way to do this is to use GDELT to generate dynamic news content on a website. With ChatGPT, you can quickly and easily generate code for API calls and even learn how to integrate them into WordPress in this example.

GDELT stands for “Global Database of Events, Language, and Tone” and is a platform that monitors and analyzes thousands of news sources worldwide. The platform offers various APIs that developers can use to access and use this content in their applications.

One of these APIs is GDELT Summary, which provides a curated summary of the most important news events in real-time. The summary is updated every 15 minutes and covers a wide range of topics, including politics, economics, science, technology, and more.

There, you can configure and test your API call through an interface.

On the result page, you can view the corresponding JSON output. The address bar shows the string for the API call, and you may need to adjust the URL parameters (such as maxrecords for the number of articles) to suit your needs.
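
Before wiring the call into WordPress, you can also inspect the raw JSON with a few lines of Python. This is just a quick sketch using the requests package with similar query parameters to the call used below; it is not part of the WordPress code:

import requests

url = ("https://api.gdeltproject.org/api/v2/doc/doc"
       "?timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng"
       "&mode=artlist&maxrecords=5&format=json&sort=hybridrel")

data = requests.get(url).json()

for article in data.get("articles", []):
    print(article["seendate"], article["title"])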

And now, enough prelude – here’s the excerpt from the chat:

How can I generate dynamic content in my WordPress blog? I want to call a web API that returns content in JSON format.
To generate dynamic content in your WordPress blog, you can create a function in your functions.php file that calls the web API and retrieves the data in JSON format. Then, you can display the returned data in your WordPress blog.
The function get_api_data calls the web API and returns the data in JSON format. You can use this function to display the data in your WordPress blog.

For example, you can use the following code in your template file to display the content from the web API:
<?php
  $api_data = get_api_data();
  if( $api_data ) {
    foreach( $api_data as $item ) {
      echo '<h2>' . $item->title . '</h2>';
      echo '<p>' . $item->content . '</p>';
    }
  }
?>
In this example, we assume that the web API returns a list of articles, with each article having a title and content. The loop iterates through each item in the list and displays the title and content of each article.

Note: When retrieving data from external sources, it is important to ensure that the data is reliable and does not contain any harmful content. It is also important to ensure that you retrieve the APIs in a secure manner, such as by using SSL and authentication, if necessary.
This is what the call looks like:

https://api.gdeltproject.org/api/v2/doc/doc?format=html&timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng&mode=artlist&maxrecords=1&format=json&sort=hybridrel 

and the following JSON is returned:

{"articles": [ { "url": "https://investorplace.com/2023/02/the-future-of-ai-stocks-7-companies-to-keep-your-eye-on/", "url_mobile": "", "title": "The Future of AI Stocks : 7 Companies to Keep Your Eye On", "seendate": "20230228T161500Z", "socialimage": "https://investorplace.com/wp-content/uploads/2023/02/artificial-intelligence-ai-green-1600.jpg", "domain": "investorplace.com", "language": "English", "sourcecountry": "United States" }] }
Assuming you have the example JSON data provided, you can display the data in your WordPress blog as follows:
<?php
  $url = 'https://api.gdeltproject.org/api/v2/doc/doc?format=html&timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng&mode=artlist&maxrecords=1&format=json&sort=hybridrel';
  $response = wp_remote_get( $url );
  if( is_wp_error( $response ) ) {
    echo 'Error retrieving data';
  } else {
    $data = json_decode( wp_remote_retrieve_body( $response ) );
    if( $data && !empty( $data->articles ) ) {
      foreach( $data->articles as $article ) {
        echo '<h2>' . $article->title . '</h2>';
        echo '<img src="' . $article->socialimage . '">';
        echo '<p>' . $article->seendate . '</p>';
        echo '<p>' . $article->url . '</p>';
      }
    } else {
      echo 'No articles found';
    }
  }
?>
In this example, the wp_remote_get function is used to retrieve data from the specified URL. Then, an error check is performed and the JSON data is decoded and looped through to display the data.

In this case, the article title, social image, publish date, and article URL are displayed. However, you can customize the display to show the desired data.
How does the API call work through functions.php?
To implement the API call in your WordPress site through the functions.php file, you can use the following code:
function display_api_data() {
    $url = 'https://api.gdeltproject.org/api/v2/doc/doc?format=html&timespan=24H&query=Artificial%20Intelligence%20sourcelang:eng&mode=artlist&maxrecords=1&format=json&sort=hybridrel';
    $response = wp_remote_get( $url );
    if ( is_array( $response ) ) {
        $headers = $response['headers'];
        $body    = json_decode( $response['body'], true );
        if ( ! empty( $body ) && isset( $body['articles'] ) ) {
            $articles = $body['articles'];
            foreach ( $articles as $article ) {
                $title = $article['title'];
                $url   = $article['url'];
                $image = $article['socialimage'];
                echo '<div>';
                echo '<h2>' . $title . '</h2>';
                echo '<img src="' . $image . '" />';
                echo '<p><a href="' . $url . '">Read more</a></p>';
                echo '</div>';
            }
        }
    }
}
This code calls the API URL and outputs the JSON data. You can customize the code to generate the content of your page by modifying the foreach loop according to your requirements and adjusting the content of the echo statements.

To execute the code, you can embed the function display_api_data() in a WordPress page or post using the shortcode [api_data]. To do this, you need to add the following code to your functions.php file:
function api_data_shortcode() {
    ob_start();
    display_api_data();
    return ob_get_clean();
}
add_shortcode( 'api_data', 'api_data_shortcode' );
This code registers a shortcode [api_data] which executes the display_api_data() function and outputs the generated content on the page. You can use the shortcode on any WordPress page or post by entering the following code into the editor:

[api_data]
Where can I find the functions.php file?
The functions.php file is located in the active WordPress theme's folder and contains functions and filters that can be used on the website. The path to the functions.php file is usually:

/wp-content/themes/your-theme/functions.php

Replace your-theme with the name of your active WordPress theme. If you are using the default theme "Twenty Twenty-One", the path to the functions.php file is:

/wp-content/themes/twentytwentyone/functions.php

You can access the functions.php file through the dashboard of your WordPress website. To do this, go to the "Appearance" menu and click on "Theme Editor". Then, select the active theme from the dropdown list at the top right and select the functions.php file from the list of theme files on the right-hand side. You can edit and save the code directly in the editor. However, note that an error in the code can damage or render your website unusable when you edit the functions.php file. It is recommended to create a backup of the file before making changes.

In this article, we have shown how to generate a function in the functions.php file of WordPress using ChatGPT that calls the GDELT API and retrieves the latest news. You can see the result on the news page.

It is important to note that when using supported programming with ChatGPT, you can and often need to optimize the program code in the ChatGPT dialog. There will be a separate post on tips and tricks for using ChatGPT for code generation.

Overall, the use of ChatGPT can increase the efficiency of your work as a developer by helping you generate example code quickly and easily. Whether it is about integrating APIs or other use cases, ChatGPT can be a useful tool to save time and speed up your development work.