

But the big one here is the characteristic word. By adding Fenyx Rising, it could be argued that, in addition to the material differences between the products, there is enough separation to ensure there is no risk of confusion among audiences. There are also multiple existing Immortals trademarks, which could make that word in and of itself less defensible depending on the potential conflict.
That’s basically it right there. The word “immortal” has multiple dictionary definitions tracing back long before any trademark, including its use as the name of a prominent ancient military unit, so any trademark built around that word isn’t strong enough to prevent others from using it as a normal word, or even as part of another trademark when used descriptively.
The strongest trademark protection goes to words that are totally made up for the purpose of the product or company. Something like Hulu or Kodak.
Next up are probably coined words that relate to existing words but are distinct mashups or modifications, like GeForce or Craisins.
Next up are words that have meaning but are completely unrelated to the product itself, like Apple (computers), Snickers (the candy bar), or Tide (the laundry detergent).
Next up are suggestive marks where the trademark relies on the meaning to convey something about the product itself, but still retains some distinctiveness: InSinkErator is a brand of in-sink disposal, Coffee Mate is a non-dairy creamer designed for mixing into coffee, Joy-Con is a controller designed to evoke joy, etc.
Some descriptive words don’t get trademark protection until they enter the public consciousness as a distinct indicator of a product’s origin or manufacturer. Name-based businesses often fall into this category, like a restaurant named after the owner, and don’t get protection until the name is recognizable enough on its own (McDonald’s is the classic example).
It can get complicated, but the basic principle underlying all of it is that the less distinctive the word you choose for your trademark, the less protection you’ll get against others using it.
Can humans actually do it, though? Are we capable of driving a car reasonably well using only visual data, or are we really using an entire suite of sensors in our heads and bodies to understand our speed and orientation, road conditions, and our surroundings? Driving a car by video link is considerably harder than just driving it normally, from within the car.
And even so, computers have a long way to go before they catch up with our visual processing. Our visual cortex does a lot of error correction on visual data, using proprioceptive sensors in our heads to silently and seamlessly delete the visual smudges and smears of motion as our heads move. That error correction also recalibrates quickly when we look at things underwater or through anything with a different refractive index, or at reflections in a mirror.
And we maintain that flow of visual data by correcting for motion and stabilizing our eyes to compensate for external movement. Maybe not as well as chickens, but we’re pretty good at it. We recognize faulty sensor data and correct for it by moving our heads around obstructions, silently ignoring something that’s blocking just one eye, or blinking and rubbing our eyes when tears or water make it hard to focus. We also know when not to trust our eyes (in the dark, in fog, when temporarily blinded by lights) and fall back on other methods of understanding the world around us.
Throw in the sense of balance from our inner ears, our ability to locate sounds by direction, the vibrations we feel through the seat and the tactile feedback through the steering wheel, and the proprioception of forces on our body and limbs, and we have an entire system that uses much more than visual data to make decisions and model the world around us.
There’s no reason why an artificial system needs to use exactly the same types of sensors as humans or other mammals do. And we have preexisting models and memories of what is or was around us, like when we walk around our own homes in the dark. But my point is that we rely on much more than our eyes, processed through an image processing system far more complex than the current state of AI vision. Why hold back from using as much sensor data as possible to build a system with a good, reliable picture of what’s on the road?
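To make that last point concrete, here’s a toy sketch (in Python, with made-up noise numbers) of the kind of sensor fusion I’m gesturing at: a simple Kalman-style blend of a noisy camera speed estimate with a tighter wheel-odometry reading, where each input is weighted by how much you trust it. Real driving stacks are far more elaborate than this; it’s only meant to show why adding a second, independent sensor tightens the estimate instead of muddying it.

```python
import random

# Toy 1-D Kalman-style fusion sketch (hypothetical noise values throughout):
# estimate a car's forward speed by blending a noisy camera-based estimate
# with a less-noisy wheel-odometry reading, weighted by their variances.

def fuse(estimate, est_var, measurement, meas_var):
    """Blend a prior estimate with a new measurement; the lower-variance input wins."""
    gain = est_var / (est_var + meas_var)
    new_estimate = estimate + gain * (measurement - estimate)
    new_var = (1 - gain) * est_var
    return new_estimate, new_var

true_speed = 20.0         # metres per second; the "ground truth" the system can't see
speed, var = 0.0, 1000.0  # start with essentially no idea what the speed is

for step in range(50):
    camera = random.gauss(true_speed, 3.0)    # vision estimate: noisy
    odometry = random.gauss(true_speed, 0.5)  # wheel sensor: much tighter
    speed, var = fuse(speed, var, camera, 3.0 ** 2)
    speed, var = fuse(speed, var, odometry, 0.5 ** 2)
    var += 0.1  # process noise: trust the old estimate a little less each step

print(f"fused speed estimate: {speed:.2f} m/s (true: {true_speed} m/s)")
```

Run it a few times: the fused estimate settles near the true speed even though the camera alone is all over the place, which is the whole argument for not throwing away sensors you could be using.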