

Improving WiFi reception with an ESP32-WROOM-32U  

I’m using an ESP32 with ESPHome connected to my heating system for climate control, as explained in a previous post.

The heating system is in a different building than the router, and I was experiencing some WiFi coverage issues (the WiFi signal needs to cross two metallic window blinds…).

To diagnose the WiFi coverage, the wifi_signal sensor in ESPHome is very useful:

sensor:
  - platform: wifi_signal
    name: Wifi Signal
    update_interval: 60s

The board was reporting a WiFi signal of -95 dBm: this is very low, and it was experiencing some disconnections.

Usually, ESP32 boards have an antenna integrated on the PCB, but the ESP32-WROOM-32U has an IPEX connector for an external antenna:

So, I spent less than 10 EUR on AliExpress buying (affiliate links):

I replaced the previous ESP32-WROOM-32 module with an ESP32-WROOM-32U and installed the external antenna. This is how it looks now:

ESP32-WROOM-32U mounted on an expansion board connected to the 4-relay module

The WiFi signal shown in ESPHome increased from -95 dBm to -75 dBm, and it’s no longer experiencing any interruptions.


WireGuard

I have always used OpenVPN on my servers, but WireGuard is now a better option:

https://www.wireguard.com/

  • It’s simpler
  • It’s more efficient
  • It’s faster
  • It uses modern cryptography algorithms

I’m using it to remotely access private services on my home server. I set up a star topology, where all the VPN clients connect to the home server and can only see the server.

So I need a dynamic DNS and an open port in the router; I already have both for Home Assistant.

Eloy Coto recommended Tailscale, an amazing mesh VPN based on WireGuard. It’s much simpler to set up and you do not need to open public ports, but it’s commercial and a bit overkill for my needs.

Generating the WireGuard configurations

The most tedious part of WireGuard is generating the configurations, but there are some nice tools to ease that, like:

https://www.wireguardconfig.com/

The tool generates the configuration for the server and for the requested number of clients. It does everything in the frontend, so it does not leak the VPN keys.

As I’m only accessing the server, I have removed the IP forwarding options in the Post-Up and Post-Down rules.
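
For reference, this is roughly what the resulting server config in /etc/wireguard/wg0.conf looks like in my star topology (a sketch: the keys are placeholders, and the addresses and port match the client example shown below):

[Interface]
PrivateKey = <server-private-key>
Address = 100.100.1.1/24
ListenPort = 51820
# No PostUp/PostDown forwarding rules: clients only need to reach the server itself

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 100.100.1.2/32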

Installing and configuring the WireGuard server

WireGuard is in the official Ubuntu repos, so to install it on the server it’s enough to do:

sudo apt install wireguard

Then I needed to put the config in the /etc/wireguard/wg0.conf file and do:

sudo systemctl enable wg-quick@wg0.service
sudo systemctl start wg-quick@wg0.service

Installing and configuring the clients

WireGuard has clients for almost any OS:

https://www.wireguard.com/install/

To set up the client on phones, the WireGuard Config web tool generates QR codes. On other devices you’ll need to create a file with the config or paste its contents.
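
On a Linux client, for example, you can save the generated config as /etc/wireguard/wg0.conf and bring the tunnel up and down with wg-quick:

sudo wg-quick up wg0
sudo wg-quick down wg0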

Using Pi-hole from the VPN clients

To use the Pi-hole hosted on the same VPN server from the VPN clients, you can specify a DNS property in the client config. For example, if the server is 100.100.1.1 and the client is 100.100.1.2:

[Interface]
PrivateKey = <client-private-key>
Address = 100.100.1.2/32
DNS = 100.100.1.1

[Peer]
PublicKey = <server-public-key>
PresharedKey = <preshared-key>
Endpoint = <my-home-server>:51820
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25

Every time you connect the VPN, the client’s DNS server changes to 100.100.1.1, and it reverts to the previous DNS server when the VPN is disconnected.

Additionally, Pi-hole needs to be listening on the wg0 interface; I explained how to make Pi-hole listen on multiple interfaces in the Pi-hole post.


The LLM revolution

I took this photo of three Llamas in Machu Picchu some years ago…

ChatGPT was launched in November 2022, and it changed our world as we knew it. Since then, Large Language Models (LLMs) have been integrated into our daily workflows, enhancing our productivity and the quality of our work.

Another interesting milestone happened in February 2023, when Meta released the Llama LLM under a noncommercial license:

https://llama.meta.com

This sparked enthusiasm among numerous developers dedicated to advancing LLMs, leading to an increase in collaborative efforts and innovation within the field. A good example is the Hugging Face Model Hub, where new models are constantly published:

https://huggingface.co/models

Developers started creating improved models and optimizing performance for local execution of LLMs on consumer-grade hardware.

Llama.cpp is a port of Llama to C++, started in March 2023 with a strong emphasis on performance and portability. It includes a web server and an API:

https://github.com/ggerganov/llama.cpp

Mistral 7B was released in October 2023, achieving better performance than larger Llama models and demonstrating the effectiveness of LLMs in compressing knowledge:

https://huggingface.co/papers/2310.06825

And now it’s easier than ever to locally execute LLMs, especially since November 2023, with the Llamafile project that packs Llama.cpp and a full LLM into a multi-OS single executable file:

https://github.com/Mozilla-Ocho/llamafile

The llama.cpp web interface running Mistral 7B Instruct locally via a llamafile
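
Trying a llamafile is roughly this simple (a sketch: the download URL and file name are placeholders for whatever release you pick):

wget <llamafile-release-url>/mistral-7b-instruct.llamafile
chmod +x mistral-7b-instruct.llamafile
# starts llama.cpp's built-in web server, by default at http://localhost:8080
./mistral-7b-instruct.llamafile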

It’s even possible to run LLMs on a Raspberry Pi 4, like the TinyLlama-1.1B used from a llamafile in this project:

https://github.com/nickbild/local_llm_assistant

And regarding LLMs for code generation (GitHub’s Copilot has been available since 2021), there are IntelliJ plugins like CodeGPT (with its first release in February 2023) that now allow you to run the code generation against a local LLM (running under llama.cpp):

https://github.com/carlrobertoh/CodeGPT

Google is a bit late to the party. In December 2023 they announced Gemini. In February 2024, they launched the Gemma open models, based on the same technology as Gemini:

https://blog.google/technology/developers/gemma-open-models

They also released a gemma.cpp inference engine:

https://github.com/google/gemma.cpp

And finally, if you are lost among so many LLMs, an interesting resource is the Chatbot Arena, released in August 2023. It allows humans to compare the results from different LLMs, keeping a leaderboard with chess-like Elo ratings:

https://chat.lmsys.org

And according to this leaderboard, at the moment GPT-4 is still the king.


Opening Home Assistant to the Internet

To make Google Assistant work with your Home Assistant, you need to provide a public URL with HTTPS access to HA. Here are the full instructions:

https://www.home-assistant.io/integrations/google_assistant/

But something that seems trivial, like publicly accessing services on your home server, has some complications: you usually need to worry about dynamic IPs and security.

What we need:

  • An ISP not using CG-NAT
  • Redirect ports in the router
  • A dynamic DNS provider and a client to update the IP (or a static IP)
  • An SSL certificate to securely access the HTTP services

ISP providers with CG-NAT

Some ISPs use CG-NAT (Carrier-Grade NAT), sharing the same IPv4 among multiple customers. In that case the only way to expose your services is using reverse proxy services such as ngrok.

Ngrok allows you to generate one static domain and it also automatically generates an SSL certificate, so most steps in this post do not apply.
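
As an illustration, with a recent ngrok v3 client the command would be roughly the following (a sketch, assuming Home Assistant on its default port 8123 and a static domain reserved in your ngrok account):

ngrok http --domain=<your-static-ngrok-domain> 8123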

My ISP (O2 Spain) assigns me a dynamic IP, and I prefer not to rely on these reverse proxy services, so I remotely access my home server by redirecting ports in the router.

Dynamic DNS provider

Usually, unless you have a static IP service (not very common, and not available from my ISP), you need to set up a dynamic DNS service.

I have been using the free Now-DNS service for years:

https://now-dns.com/

And to update the IP from my home server, I set up ddclient with this /etc/ddclient.conf file:

ssl=yes
protocol=dyndns2
daemon=60
mail=root                               # mail all msgs to root
mail-failure=root                       # mail failed update msgs to root
pid=/var/run/ddclient.pid               # record PID in file.
use=web, web=now-dns.com/ip             # get ip from server.
server=now-dns.com                      # default server
login=<your-login>
password=<your-password>
server=now-dns.com,<your-dynamic-domain>

Some of these dynamic DNS domains are blocked by the Pi-hole blocking lists, so, if you are using Pi-hole or another DNS blocking service, you’ll probably need to whitelist your domain.
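
In Pi-hole, whitelisting can be done from the admin UI or from the command line; with the dockerized Pi-hole described in the Pi-hole post it would be roughly:

docker exec pihole pihole -w <your-dynamic-domain>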

SSL certificate

With the amazing Certbot you can obtain free SSL certificates:

https://certbot.eff.org/

There is extensive documentation on the Certbot site about how to use it. I simply install certbot from apt and run:

certbot certonly --webroot -w /var/www/html/ -d <your-dynamic-domain> --email <my-email> --non-interactive --agree-tos

But in order to make that work, you need a domain name (available from the dynamic DNS provider in the previous section).

HTTP Server

And to verify that the domain points to your server, Certbot is going to make an HTTP request to that domain, so you also need to have an HTTP server on port 80 and to open port 80 in the router. This is also needed for the certificate renewals.

You may encounter numerous attacks on this port, so it is crucial to have a reliable web server that is consistently updated and properly configured. I personally use nginx as my HTTP server, and it has never failed me so far.
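
A minimal sketch of the kind of nginx server block I mean, serving only the ACME challenges from the /var/www/html webroot used in the certbot command above (adapt it to your own setup):

server {
    listen 80;
    server_name <your-dynamic-domain>;

    # serve the challenge files that Certbot writes to the webroot
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # reject everything else
    location / {
        return 404;
    }
}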

Home Assistant

To use the SSL certificate from the HA container, we need to share the folder where the certificates are stored by passing “-v /etc/letsencrypt:/etc/letsencrypt” to the docker command and setting this in the HA configuration.yaml:

http:
  ssl_certificate: /etc/letsencrypt/live/<your-dynamic-domain>/fullchain.pem
  ssl_key: /etc/letsencrypt/live/<your-dynamic-domain>/privkey.pem

You can also use your public HA URL to remotely access it and to configure it in the HA Android application.


Orange Pi 3B

I’ve never been a fan of the Raspberry Pi. In my opinion, it occupies an intermediate position where it is too underpowered for desktop use and too overpowered for IoT projects:

  • To use them as a desktop, there are great X86 alternatives available at about the same price as an RPi 5 but much more powerful, such as the Intel N100.
  • And for IoT projects, the ESP32 is the king, with amazing boards with WiFi, Bluetooth, etc., all at a price of less than 5 euros.

So its place may be TV boxes (where I prefer a Chromecast with Android) or small servers where the power consumption is important because they are always on.

I bought an Orange Pi 3B: 4 cores, 4GB RAM, 64GB eMMC (~50 euros on AliExpress) to replace my old X86 home server (Intel N450: 2 cores, 2GB RAM, 64GB SSD):

http://www.orangepi.org/html/hardWare/computerAndMicrocontrollers/details/Orange-Pi-3B.html

The Orange Pi 3B shares the form factor with the Raspberry Pi 3B but it is almost as powerful as the Raspberry Pi 4. Notably, the Orange Pi 3B comes with several advantages over the RPi 4:

  • Support for eMMC (much faster and more reliable than SD cards)
  • A power button
  • A full-size HDMI port
  • External antenna
  • And it’s cheaper

I installed the Ubuntu Jammy server image in the eMMC following the OPi manual. You need a USB-A male to USB-A male cable and the RKDevTool (it’s in Chinese), which runs only on Windows.

And, as this machine is going to be exposed to the internet, I hardened the security a bit:

  • Changed the APT repositories to ports.ubuntu.com
  • Regenerated SSH server keys
  • Removed SSH root access (both sketched right after this list)
  • Changed passwords
  • Renamed the orangepi user
  • Removed the local autologin
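
For the two SSH items, a minimal sketch of the commands, assuming a stock Ubuntu image (adjust them to your own setup):

# regenerate the SSH host keys shipped with the image
sudo rm /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
# disable root login over SSH
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh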

To remove the local autologin we need to edit:

  • /lib/systemd/system/getty@.service.d/override.conf: For the display console autologin
  • /lib/systemd/system/serial-getty@.service.d/override.conf: For the serial console autologin

[Service]
ExecStartPre=/bin/sh -c 'exec /bin/sleep 10'
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin orangepi %I $TERM
Type=idle

Removing the “--autologin orangepi”. If you rename the orangepi user but want to keep the autologin, you’ll also need to change the username here.

Then I moved the docker containers and other services from my old X86 server:

  • Home Assistant (docker container)
  • ESPHome dashboard (docker container)
  • Pi-hole (docker container)
  • nginx (for certbot and DNS over TLS for Pi-hole)
  • certbot (to maintain the SSL certificate for Home Assistant)
  • ddclient (dynamic DNS updater)
  • NAS (do not expect anything fancy, I access a USB disk via SSH, it’s enough for Kodi & backups)

Everything seems to work smoothly now.


Pi-hole as home DNS and DHCP server

I encountered numerous issues with my network provider’s router DHCP. Since I haven’t yet decided to acquire another router, I opted to offload the DHCP server to another machine, which is currently running my Home Assistant and NAS.

I was in search of a DHCP server with a web UI. During my exploration, I came across Pi-hole, a DNS server specifically designed to block DNS queries to domains that serve ads and do tracking. Interestingly, Pi-hole also incorporates an integrated DHCP server (dnsmasq) that can be configured through its admin UI.

https://pi-hole.net/

I presume the integration of the DHCP server aimed to simplify the setup of the clients’ DNS servers, yet it proves highly convenient for home networks. And forget about the “Pi” in the name: it can run on any Linux server, not necessarily a Raspberry Pi.

I’m still an addict to running everything in Docker containers. So I set up the Docker Pi-hole container (https://github.com/pi-hole/docker-pi-hole) using this script located at /usr/local/pihole/docker.sh:

#!/bin/bash 
cd $(dirname $(readlink -f $0))
docker stop pihole
docker rm pihole
docker pull pihole/pihole:latest
docker run -d \
	--name pihole \
	--privileged \
	--restart=unless-stopped \
	--network=host \
	-e TZ=Europe/Madrid \
	-e FTLCONF_LOCAL_IPV4=192.168.1.2 \
	-e WEB_PORT=8081 \
	-e WEBPASSWORD=admin \
	-e INTERFACE=eth0 \
	-e DNSMASQ_USER=root \
	-v ./etc-pihole:/etc/pihole \
	-v ./etc-dnsmasq.d:/etc/dnsmasq.d \
	--cap-add=NET_ADMIN \
	pihole/pihole:latest
docker image prune --all

  • Every time that you run the script, it updates the container with the latest Pi-hole version
  • It didn’t work without setting FTLCONF_LOCAL_IPV4 to the local IP
  • I needed to set WEB_PORT so it doesn’t conflict with the nginx running on that machine (for Certbot)
  • Setting WEBPASSWORD is the easiest way to initially setup an admin password
  • I couldn’t make the DHCP server work with port mappings; it needed --network=host
  • There is an image prune at the end to save space by removing old docker images

I also had some problems because Ubuntu’s systemd-resolved includes a DNS server, and I needed to disable it:

https://askubuntu.com/questions/907246/how-to-disable-systemd-resolved-in-ubuntu
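
In my case, roughly the following was enough (a sketch; the answer above covers the details and alternatives, such as only disabling the stub listener):

# stop systemd-resolved from listening on port 53
sudo systemctl disable --now systemd-resolved
# replace the resolv.conf stub symlink with a static file pointing at the local Pi-hole
sudo rm /etc/resolv.conf
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf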

And of course, you also need to disable the DHCP server on the router; it’s a very bad idea to have two DHCP servers working in the same network…

It is now functioning smoothly, and the included ad-blocking feature is a definite plus. Although it doesn’t currently block ads on YouTube and Twitch, it’s still great.

I’m also using it on my phone through a WireGuard VPN (it may be a topic for another post). To make it listen on multiple interfaces, like the local and the VPN interfaces, I needed to create a /usr/local/pihole/etc-dnsmasq.d/99-interfaces.conf file adding there:

interface=lo
interface=wg0

Another similar alternative worth exploring is AdGuard Home, but I haven’t had the time to test it yet:

https://adguard.com/en/adguard-home/overview.html


Climate control with ESPHome and Home Assistant

Two years ago I started to need to control my home heating system while I’m not at home. I could have gone the easy way and bought a couple of Nest thermostats, but I preferred the DIY way.

Connecting the boiler to the ESP32 via the relay module

ESP32 board with ESPHome

I connected the boiler to an ESP32-WROOM-32 board via a relay module. The box and the cables were more expensive than the board (~10 EUR) and the relay module (~5 EUR).

The ESP32 board is running ESPHome: https://esphome.io/. I think it is a very nice project and very easy to set up. All the configuration is done via YAML files. The board is connected to the home WiFi and it has a fallback hotspot.

My home heating system has two radiant floor zones with two independent pumps. I also decided to automate the boiler’s “Winter mode”: in this mode the boiler heats the water for the heating, and I wanted to disable it when the heating is not working.

ESPHome has a nice web UI

In my case I needed to activate the winter mode when any pump is working and keep it working for a period of time after the pump is off.

This is my ESPHome YAML config:

substitutions:
  devicename: heating
  friendly_name: Heating

esphome:
  name: ${devicename}
  friendly_name: ${friendly_name}
  platform: ESP32
  board: nodemcu-32s

logger:

api:
  password: ""

ota:
  password: ""

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  reboot_timeout: 90s

  ap:
    ssid: ${friendly_name} Fallback Hotspot
    password: !secret wifi_password

captive_portal:

web_server:
  port: 80

debug:

time:
  - platform: homeassistant
    id: homeassistant_time

sensor:
  - platform: uptime
    name: Uptime
    filters:
      - lambda: return x / 60.0;
    unit_of_measurement: minutes

  - platform: wifi_signal
    name: Wifi Signal
    update_interval: 60s

script:
  - id: keep_winter_mode_on
    mode: restart
    then:
      - logger.log: "Keep Winter mode start"
      - if:
          condition:
            and:
              - switch.is_off: zone1_pump
              - switch.is_off: zone2_pump
          then:
            - logger.log: "Keep Winter mode will stop"
            - delay: 15min
            - switch.turn_off: winter_mode
            - logger.log: "Keep Winter mode stopped"

  - id: zone1_pump_security
    mode: restart
    then:
      - logger.log: "Zone 1 security start"
      - delay: 60min
      - switch.turn_off: zone1_pump
      - logger.log: "Zone 1 security stop"

  - id: zone2_pump_security
    mode: restart
    then:
      - logger.log: "Zone 2 security start"
      - delay: 60min
      - switch.turn_off: zone2_pump
      - logger.log: "Zone 2 security stop"

switch:
  - platform: gpio
    pin: GPIO16
    name: "Winter mode"
    id: winter_mode
    inverted: true
    restore_mode: ALWAYS_OFF

  - platform: gpio
    pin: GPIO17
    name: "Zone 1 pump"
    id: zone1_pump
    inverted: true
    restore_mode: ALWAYS_OFF
    on_turn_on:
      then:
        - script.stop: keep_winter_mode_on
        - switch.turn_on: winter_mode
        - script.execute: zone1_pump_security
    on_turn_off:
      then:
        - script.stop: zone1_pump_security
        - script.execute: keep_winter_mode_on

  - platform: gpio
    pin: GPIO18
    name: "Zone 2 pump"
    id: zone2_pump
    inverted: true
    restore_mode: ALWAYS_OFF
    on_turn_on:
      then:
        - script.stop: keep_winter_mode_on
        - switch.turn_on: winter_mode
        - script.execute: zone2_pump_security
    on_turn_off:
      then:
        - script.stop: zone2_pump_security
        - script.execute: keep_winter_mode_on

Thermometers

To measure the temperature in the rooms, I used two Xiaomi Mi Home Bluetooth Thermometer 2 (~6 EUR each). They transmit the temperature via BLE (Bluetooth Low Energy) beacons.

Their LCD display is very convenient and, as they are battery powered, you can place them in the best spot of the room. I flashed them with this custom firmware:

https://github.com/pvvx/ATC_MiThermometer

I’m still surprised by these small beasts; there are now firmwares to transform them into Zigbee devices:

https://devbis.github.io/telink-zigbee/

Home Assistant

The control, reading the thermometers and activating the pumps, is done via Home Assistant (HA) running on an old X86 tablet with Ubuntu (it is usually run on a Raspberry Pi or similar…).

I installed HA in a Docker container; this is my script to update and start the container:

#!/bin/bash
cd $(dirname $(readlink -f $0))
docker stop homeassistant
docker rm homeassistant
docker pull ghcr.io/home-assistant/home-assistant:stable
docker run -d \
	--name homeassistant \
	--privileged \
	--restart=unless-stopped \
	-e TZ=Europe/Madrid \
	-v ./config:/config \
	-v /etc/letsencrypt:/etc/letsencrypt \
	--network=host \
	ghcr.io/home-assistant/home-assistant:stable
docker image prune --all

Home Assistant reads the thermometers via the Passive BLE monitor integration: https://github.com/custom-components/ble_monitor, which can be easily installed via HACS (the Home Assistant Community Store). I needed a Bluetooth 5 USB adapter.
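
The integration can be configured from the UI, but it also accepts YAML; this is a sketch of the kind of configuration I mean, with a placeholder MAC address (the exact options may vary between ble_monitor versions):

ble_monitor:
  discovery: false
  devices:
    - mac: "A4:C1:38:AA:BB:CC"
      name: "Living Room Thermometer"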

Then, I needed to set up two thermostats in HA via the config/configuration.yaml file:

climate:
  - platform: generic_thermostat
    name: "Living Room"
    unique_id: zone_1_thermostat
    heater: switch.zone_1_pump
    target_sensor: sensor.ble_temperature_living_room_thermometer
    min_temp: 15
    max_temp: 20
    ac_mode: false
    target_temp: 17
    cold_tolerance: 0.5
    hot_tolerance: 0
    min_cycle_duration:
      minutes: 30
    keep_alive:
      minutes: 5
    initial_hvac_mode: "off"
    away_temp: 15
    precision: 0.1

  - platform: generic_thermostat
    name: "Bedrooms"
    unique_id: zone_2_thermostat
    heater: switch.zone_2_pump
    target_sensor: sensor.ble_temperature_bedrooms_thermometer
    min_temp: 15
    max_temp: 20
    ac_mode: false
    target_temp: 17
    cold_tolerance: 0.5
    hot_tolerance: 0
    min_cycle_duration:
      minutes: 3
    keep_alive:
      minutes: 5
    initial_hvac_mode: "off"
    away_temp: 15
    precision: 0.1

Home Assistant doubles as a temperature and humidity logger, and it’s easy to configure dashboards:
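
For example, a minimal thermostat card for one of the zones (a sketch; the entity id is whatever HA assigned to the generic_thermostat defined above):

type: thermostat
entity: climate.living_room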

And, of course, now I’m using HA to control many other things at home.

Other complex parts were:

  • Making HA accessible via the internet by setting up a couple of port redirections (one for HA and another for certbot) and a dynamic DNS service
  • Setting up nginx and certbot for HTTPS
  • Connecting it to Google Home to allow receiving voice commands from my Nest Minis

But that is another long story…

The final installation of the ESP board, the relay module and the power adapter inside a box

Apple Vision Pro and why it is going to fail

After the initial hype (with Apple showing some nice fake videos…), it seems that things are much quieter now about the Vision Pro. I’m sure it is going to fail, like all the previous attempts at VR “failed”, or at least failed as a mass consumer product.

People do not want immersive experiences, or they do not want these experiences all the time. Books like “Neuromancer” or “Ready Player One” portray a society where all the computers use VR interfaces, but I think that is still not possible with the technology we have in 2024. Even if it becomes possible someday, people may not embrace it.

3D interfaces in computers are not comfortable, and all attempts to develop these kinds of interfaces have failed. Humans have been using paper, a “2D” medium, for thousands of years; that is why it’s so easy for us to interact with 2D screens on mobile phones or tablets.

We have seen past interface design trends, such as skeuomorphism, evolve into clean visual languages characterized by simplified interfaces, as exemplified by Material Design. The natural environment of these interfaces is 2D.

And additionally, there are other factors like the weight or the low resolution of the screens and the cameras (yes, it’s one of the highest on the market, but it still cannot replace a real monitor, and it’s going to guarantee you dizziness when you try to read text).

There are still some niches where VR is a success, mainly those gaming related (and of course porn), but Apple is not good at any of those.

I remember some memorable VR/AR failures; the Vision Pro will soon join the list:


Slot Racing 10th anniversary

This year marks the 10th anniversary of my Slot Racing Android game. I had initially begun working on a 2.0 version, but I couldn’t find the time. A few months ago, I decided to merge the changes back into the previous app and prepare the Slot Racing 10th Anniversary edition:

In this version, I’ve removed the no-longer-functional Facebook integration and replaced it with device IDs for the leaderboard. Users can now choose any username for the times they send from their device.

One significant improvement is not immediately visible. I’ve upgraded from an old JMini3d (OpenGL 1.1) to the latest version using OpenGL 2.0. This enables the use of bitmap fonts, allowing races with more than 5 laps (the old version only had textures for numbers 1 to 5).

Over the last 10 years, I’ve learned a lot about development. The code looked horrible by today’s standards, so I did a major code refactor. It now supports various race types, adding races against the chronometer.

A minor physics adjustment has been made to prevent users from taking very tight turns at full throttle, which has rendered all previous leaderboard times invalid. Additionally, the leaderboards now differentiate between each lane and the number of laps completed.

During last summer’s holidays, I added new circuits and seasons, including tracks from the current F1 season and Le Mans. Today, I’m applying the final touches to the Bluetooth code and I expect to release it soon.


OpenAI’s ChatGPT chatbot

OpenAI is a research institute that focuses on conducting research in the field of artificial intelligence. The organization was founded in 2015 with the goal of promoting and developing friendly AI, which refers to AI that is aligned with human values and that can be used to improve the lives of people. OpenAI conducts research in areas such as machine learning, robotics, and economics in order to advance the understanding and capabilities of AI. The organization is supported by a number of high-profile investors and has made significant contributions to the field of AI.

GPT is an acronym that stands for “Generative Pretrained Transformer.” It is a type of large language model that uses deep learning techniques to generate human-like text. It was developed by the research lab OpenAI, and is designed to be able to generate text that is indistinguishable from text written by a human. GPT models are trained on massive amounts of text data, and can generate responses to questions and prompts in a variety of languages and styles. They are often used for a wide range of applications, including language translation, text summarization, and conversation.

As a large language model trained by OpenAI, this chatbot is a type of artificial intelligence that is designed to generate natural language responses based on the input that it receives. It uses machine learning algorithms to process and analyze the input, and then generates a response based on the information that it has been trained on.

This chatbot does not have access to the internet, so it cannot browse or search for information online. Instead, it relies on the knowledge and information that it has been trained on to generate its responses. It is constantly learning and improving, so its responses may become more accurate and relevant over time.

Users can interact with this chatbot by typing in questions or statements, and the chatbot will generate a response in natural language. The specific capabilities and functionality of this chatbot will depend on its design and training.

And now the most interesting part: ALL THE ABOVE TEXT WAS GENERATED BY CHATGPT. It was done by asking it “What is OpenAI?”, “What is GPT?” and “Tell me how this chatbot works in third person”. Surprised? Me too.

https://chat.openai.com/

Official announcement: https://openai.com/blog/chatgpt/