

Abusing Apple’s Find My network

Some months ago, Apple was running a commercial on Spanish TV about the iPhone’s privacy, mocking Android users.

Most iPhone users probably do not know that it is really easy to physically track them.

Apple’s “Find My” network was launched in 2019 and the AirTag in 2021. An AirTag emits BLE beacons containing a public key; when another Apple device receives one, it sends the beacon along with its own location to Apple’s servers, with the location encrypted using that public key. The information stored on the Apple servers can only be decrypted with the AirTag’s private key.
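
As a rough sketch of what happens on the finder device, this is the scheme described by the OpenHaystack researchers (P-224 ECDH, an X9.63 KDF and AES-GCM); the exact parameters here are my assumption, not an official Apple specification:

```python
"""Sketch of a Find My location report, following the OpenHaystack research.
The curve, KDF and key/IV split are assumptions, not Apple's official spec."""
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.x963kdf import X963KDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Tag side: long-term P-224 key pair; the public key is what the BLE beacon advertises.
tag_key = ec.generate_private_key(ec.SECP224R1())
advertised_pub = tag_key.public_key()

# Finder side (any nearby iPhone): ephemeral ECDH against the advertised public key.
eph_key = ec.generate_private_key(ec.SECP224R1())
shared_secret = eph_key.exchange(ec.ECDH(), advertised_pub)
eph_pub = eph_key.public_key().public_bytes(Encoding.X962, PublicFormat.UncompressedPoint)

# Derive a symmetric key and IV, then encrypt the finder's own location.
derived = X963KDF(algorithm=hashes.SHA256(), length=32, sharedinfo=eph_pub).derive(shared_secret)
aes_key, iv = derived[:16], derived[16:]
report = AESGCM(aes_key).encrypt(iv, b"<encoded latitude/longitude/accuracy>", None)

# The finder uploads (eph_pub, report) to Apple; only the holder of the tag's
# private key can repeat the ECDH and decrypt the location.
```

The nice part of this design is that the finder never learns who owns the tag, and Apple only stores encrypted reports.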

However, when the AirTag location is updated, the owner of the AirTag also knows that there is an Apple device nearby.

In theory you need an iPhone/iPad/iPod to activate an AirTag, but it is very easy to build an AirTag clone with an ESP32.

The Macless Haystack project, based on the original OpenHaystack, provides:

  • A Python utility to generate AirTag key pairs (see the sketch after the list)
  • Firmware to flash an ESP32-WROOM-32 and turn it into an AirTag clone
  • An Android app to check the location of your fake AirTags
  • Two Docker containers needed to retrieve the reports from the Apple servers; they have to be accessible to the Android app
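
To give an idea of what that key-generation step produces, here is a minimal sketch: a P-224 key pair whose base64-encoded halves end up in the ESP32 firmware and in the Android app (the labels and output format are illustrative, not necessarily the exact output of the Macless Haystack utility):

```python
"""Minimal sketch of generating an AirTag-clone key pair; the 28-byte
encodings and labels are illustrative, not the utility's exact output."""
import base64

from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP224R1())

# 28-byte private scalar: the secret the Android app needs to decrypt reports.
private_bytes = key.private_numbers().private_value.to_bytes(28, "big")

# 28-byte x-coordinate of the public point: the "advertisement key" the ESP32 beacons.
advertisement_bytes = key.public_key().public_numbers().x.to_bytes(28, "big")

print("Private key      :", base64.b64encode(private_bytes).decode())
print("Advertisement key:", base64.b64encode(advertisement_bytes).decode())
```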

And, of course, I built my own:

These ESP clones are much bigger than AirTags, but they are fine for tracking cars, suitcases or bags. There is a PR to the Macless Haystack repo that allows using an ESP32-C3 Supermini, which would make for a smaller device.

Google launched the “Find My Device” network in April 2024, and the tags supporting it (e.g. the Chipolo Point or the Pebblebee) are slowly reaching the market. But the default security option only shares the location information if there are other Android devices nearby. That is much better for the privacy of Android device owners, but much worse for the owners of the tags.

So, if you own an Android device, it is still better to use AirTag clones, abusing the lack of privacy of the Apple devices.


Apple Vision Pro and why it is going to fail

After the initial hype (with Apple showing some nice fake videos…), things seem to be much quieter around the Vision Pro now. I’m sure it is going to fail, just like all the previous attempts at VR “failed”, or at least failed as a mass consumer product.

People do not want immersive experiences, or at least they do not want them all the time. Books like “Neuromancer” or “Ready Player One” portray a society where all the computers use VR interfaces, but I think that is still not possible with the technology we have in 2024. And even if it becomes possible someday, people may not embrace it.

3D interfaces in computers are not comfortable, and all attempts to develop these kinds of interfaces have failed. Humans have been using paper, a “2D” medium, for thousands of years; that is why it’s so easy for us to interact with 2D screens on mobile phones or tablets.

We have already seen past interface design trends such as skeuomorphism evolve into clean, simplified visual languages like Material Design. The natural environment of these interfaces is 2D.

And additionally, there are other factors like the weight or the low resolution of the screens and the cameras (yes, it is one of the highest on the market, but it still cannot replace a real monitor, and it is going to guarantee you dizziness when you try to read text).

There are still some niches where VR is a success, mainly gaming-related ones (and of course porn), but Apple is not good at any of those.

I remember some memorable VR/AR failures; the Vision Pro will soon join the list: