Did Elon Musk set up a regulatory boogeyman as a scapegoat for the continuing delays in the promise of autonomous Teslas?

A tweet from Elon Musk this recent Easter weekend has set off a good old Easter egg hunt by media and industry over the status of Tesla's self-driving efforts.

Let's take a look at Tesla's ambitions for autonomous driving first, then move on to Musk's latest tweet.

Wind the clock back about seven months or so.

During a Tesla earnings call in October of last year (that's 2019), Elon Musk said this about the status of the Tesla-coined FSD (Full Self-Driving): "Though it's going to be tight, it still looks like it will be at least in a limited early-access version of the full self-driving feature this year" (the reference to "this year" meant 2019, as the comment was made in 2019).

This hedged promise did not materialize in 2019.

There is also the persistent lack of clarity on what "full functionality" of self-driving means, because there is no clear definition of this rather vague and invented expression (it is not recognizable vernacular or industry language, and certainly not codified by the standards of the self-driving car industry, making it essentially a nebulous enigma of unspecified Musk phraseology).

The only clue might be that Musk also suggested a Tesla could be "capable of being autonomous but sometimes requiring supervision and intervention," as part of the notion of full functionality.

In the context of autonomous driving, it is important to realize that under the levels of autonomy officially defined by the SAE (Society of Automotive Engineers) standard, if a self-driving car needs a human driver, the car is said to be semi-autonomous and not fully autonomous.

Given that Musk indicated that a Tesla embodying a "full" self-driving ability would still require a human driver to supervise and be ready to intervene, it would be more apt to say that the Tesla would be semi-autonomous, so as not to confuse it with the type of self-driving car that is truly autonomous.

This is why the word "autonomous" by itself is not a good way of expressing things, since it creates ambiguity as to whether the vehicle will be semi-autonomous or fully autonomous.

Furthermore, in some respects, using the word "autonomous" alone can be misleading by implying that a car is intended to be fully autonomous (meaning that a human driver is not needed at all, for any reason), since most people would infer that the word "autonomous" carries with it the notion of being completely self-sufficient.

It may sound like splitting hairs and quibbling over word usage, but the wording portends a huge difference (see my coverage on this, via the link here).

What would be clearer to all parties would be for Musk and Tesla to simply adopt the use of the levels of autonomy defined by the official SAE standard, which I will explain in a moment.

By the way, be aware that some critics of the SAE standard argue that it should be replaced by an even more definitive set of levels, but, despite that desire for greater rigor, the existing standard nevertheless provides a means of generally agreeing on what constitutes (for now) a measurable set of levels of autonomy. So anyone who might try to use this criticism to escape any reference to the existing levels is playing a kind of game, as it were, using the standard's critics as a false protective shield for hand-waving.

Some argue that Musk and Tesla's ambiguous wording about the planned capabilities of the FSD is neither accidental nor incidental. Rather, the belief is that the wording is deliberate and intentionally fuzzy, allowing the company not to be pinned down by specifics.

In a sense, this could be likened to political speech that embraces broad platitudes and avoids getting down to brass tacks, offering maximum flexibility and what some call the wink-and-nod of plausible deniability.

Anyway, following Musk's statement on that earnings call, he later indicated that rather than the earlier prediction of things happening by the end of 2019 (which, yes, was hedged by his having said "it still looks like" in his initial remark), the new target would instead be sometime in 2020.

Supporters would likely say that this is fair and square, and that it is obviously difficult to predict when such extremely complex software will be ready, especially for something as life-or-death and monumental as driving a car.

Others might counter that it is a shell game of moving the pea around to distract, and perhaps there is no pea under the shells at all, or that the real date might be 2021 or 2022, but that to pacify people and keep them eager, the ploy of dangling a date and then moving the date is being played.

Musk's latest thoughts on Tesla FSD

Fast forward to the Easter weekend of April 2020.

In response to a tweeted question about the latest FSD status for Tesla, Musk replied with this tweet: "Functionality still looking good for this year. Regulatory approval is the big unknown."

For those who read tea leaves, there is once again a hedge in the "looking good" part of his remark.

"Looking good" could mean that things will happen "this year," or it could be interpreted as suggesting that right now things seem to be on track, while leaving open the chance that they will not pan out, such that later this year it will be easy to say that things took a turn and the date is pushed back, possibly becoming an undetermined date in 2021.

There is also a new ambiguity as to what will even be delivered per se, since the word "functionality" has little definitive meaning and brings things back into the doldrums of what full functionality and FSD actually consist of.

Presumably, it might be possible to roll out something labeled as "functionality" during this year, thereby fulfilling the suggested promise, and yet that functionality could be far afield from, and much less than, what seems implied by self-driving.

This is wording that has plenty of room for ambiguity and loose interpretation.

Supporters would likely applaud his willingness to share the latest status and point out that there is only so much one can say in a short tweet. Don't be so finicky and don't nitpick the text, they might insist.

Many in the media certainly seemed to look past these semantic hurdles, proclaiming that Musk has indicated Tesla's self-driving capability may be ready by the end of this year.

Could, could, could, say some critics.

In any case, there is a subtle aspect to the tweet that few may give due credit to.

Like driving on a long stretch of open highway and seeing a distant object that is not yet quite discernible, the tweet contains a reference to regulatory approval, mentioned almost as an aside.

As part of the ongoing saga of Tesla and Musk, be aware that there have been numerous occasions when the firm and its CEO have made various comments about potential regulatory aspects, including the notion that regulations are likely to intrude upon Tesla's self-driving ambitions (and, by extension, the rest of the self-driving car industry).

Perhaps it is time to take a closer look at this distant object called regulatory approval, as it particularly concerns Tesla, and to see whether, with a figurative telescope, we can bring the subject into focus.

Before doing so, let's take a moment to clarify the levels of autonomous cars and their range.

Autonomous car levels

True self-driving cars are those in which the AI drives the car entirely on its own, with no human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to share the driving effort is usually considered a Level 2 or Level 3. Cars that share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is no true Level 5 self-driving car yet, and we don't even know whether it will be possible to achieve, nor how long it will take to get there.

Meanwhile, Level 4 efforts are gradually trying to gain some traction by undergoing very narrow and selective public roadway trials, although there is controversy over whether such testing should be allowed at all (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars will not be markedly different from driving conventional vehicles, so there is not much new per se to cover about them on this topic (though, as you will see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has emerged recently: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention off the driving task while operating a semi-autonomous car.

You are the party responsible for the driving actions of the vehicle, regardless of how much automation might be thrown into a Level 2 or Level 3.
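To keep the taxonomy straight, the distinction drawn above boils down to a tiny lookup. Here is a minimal sketch (the level names and the human-driver requirement follow the commonly cited SAE J3016 summary; the function name is my own illustration, not part of the standard):

```python
# Rough summary of the SAE J3016 levels of driving automation.
# Illustrative sketch only; the SAE standard itself is the authoritative source.
SAE_LEVELS = {
    0: ("No Driving Automation", True),
    1: ("Driver Assistance", True),
    2: ("Partial Driving Automation", True),
    3: ("Conditional Driving Automation", True),  # human must stand ready to take over
    4: ("High Driving Automation", False),
    5: ("Full Driving Automation", False),
}

def is_semi_autonomous(level: int) -> bool:
    """True when the level still requires a human driver, i.e. the car is semi-autonomous."""
    _name, needs_human_driver = SAE_LEVELS[level]
    return needs_human_driver

# Per the framing above: today's Teslas are rated Level 2, hence semi-autonomous,
# while only Levels 4 and 5 count as truly driverless.
```

The point of the sketch is simply that the human-driver requirement, not the marketing label, is what separates semi-autonomous from fully autonomous.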

Autonomous cars and Musk on regulatory approvals

For real level 4 and level 5 autonomous vehicles, there will be no human driver involved in the driving task.

All occupants will be passengers.

AI does the driving.

The existing Teslas are neither Level 4 nor Level 5.

Most would rate them at Level 2 today.

What difference does it make?

Well, if you have a true self-driving car (Level 4 or Level 5), driven solely by the AI, there is no need for a human driver and indeed no interaction between the AI and a human driver during the driving task.

For a level 2 car, the human driver is always in control.

In addition, the human driver is considered to be the party responsible for driving this car.

The twist that will trip people up is that the AI may appear to be able to drive the Level 2 car, when in fact it cannot, and therefore the human driver must always remain alert and act as if they were driving the car.

With that as crucial backdrop, and given the vagueness about what Tesla will deliver, if a human driver is still needed it means the self-driving on offer will be either an upgraded version of Level 2 or perhaps a Level 3.

But certainly not a Level 4 or Level 5, assuming that Tesla's capability will require the presence of a human driver at the wheel.

Anyway, turn your attention to the other element of this discussion, the regulatory-approval aspect.

Revisit Musk's recent tweet, which said: "Functionality still looking good for this year. Regulatory approval is the big unknown."

Since the tweet was sent over the Easter weekend, we should probably be ready to go down the rabbit hole to pursue its meaning.

Keep in mind that we do this with this handicap:

  1. The nature of the so-called full functionality and FSD is nebulous
  2. The proposed or estimated delivery date is nebulous
  3. The "regulatory approval" aspects are likewise nebulous

It's a veritable trifecta of cloudiness, or maybe even cloudiness cubed.

In any case, here is what some argue.

We generally perceive government and regulations as an obstacle to progress and innovation (I'm not saying it is necessarily true, but only that it is generally perceived as such).

This somewhat anti-regulatory view is especially espoused, or at least touted, by a maverick company like Tesla and a maverick person like Musk, and apparently also embraced by ardent supporters who relish that maverick flavor.

Thus, using a regulatory boogeyman could be a shrewd underlying part of a strategy: if the technical elements continue to be hard to achieve, it will be "easy" to claim that regulation is causing the delays, rather than the actual technical roadblocks.

Basically, set up a scapegoat, one not yet invoked, and have it sitting at the ready, so that when or if the time is right, it can be trotted onto the world stage for all it's worth.

Admittedly, this has a certain logic to it.

How can an entity or person keep pushing back delivery dates without ultimately being taken to task for the continuing series of delays?

Well, it's easy: just blame the matter on those bureaucratic paper pushers.

The beauty of such an excuse is that it resonates in multiple ways, namely that the public is already predisposed to the idea that regulations often stifle, and it seems that many "new" innovations were born only by flouting existing laws and regulations (for example, some would vehemently say that Uber and Lyft did just that, bypassing existing rules that everyday taxi and livery services had to endure).

So, by keeping the regulatory battle cry in your pocket and peddling it from time to time, the stage is set to summon the boogeyman when and if needed.

If that day arrives, you can pull the ripcord, let the parachute of regulatory outrage blossom, trigger rabid fans to bellow "down with the man," buy breathing room, and gracefully create a smokescreen while hiding behind the stirred-up indignation.

Of course, it may be that the scapegoat never proves necessary and thus is never invoked.

Or the scapegoat might become necessary, and just like a "break glass in case of fire" alarm, at the right time and in the right place, the regulatory blame game can begin.

There is also the other side of this coin.

It might be true that regulatory aspects could trip things up, in which case this whole subtle theme is an undercurrent worth keeping warm and ready to go.

Here is part of the rub:

· What exactly do Tesla and Musk believe to be a regulatory barrier or hurdle for their FSD, or "full functionality," or whatever it is?

There do not seem to be any specific details as yet.

Logically, also consider this question:

· If they are able to foresee that there are regulatory issues, what are they doing right now to get ahead of those facets?

In other words, it would not seem commercially prudent, if you know of regulatory qualms, not to be preparing for them, whether by readying your technology to cope with them, designing it to meet the regulatory requirements, or working hand-in-hand with regulators to see whether there are ways to adjust or alleviate the concerns raised.

And here's the clincher:

· Why not lay out directly, for everyone to see, the gaps between what the regulations require and what the anticipated technology will deliver?

It seems that being open and proactive would be pragmatic, and a quick way to surmount any outstanding difficulty or delay, rather than perhaps waiting until things are "ready" and then suddenly proclaiming, oops, we cannot move ahead due to a regulatory mandate.

Indeed, here is what Musk said in July 2017: "AI is a rare case where we need to be proactive in regulation rather than reactive. Because I think by the time we are reactive in AI regulation, it's too late."

Of course, this makes a lot of sense.

It also cuts against the aforementioned specter that regulations are almost always onerous; indeed, Musk appears to be noting that regulations could very well be warranted, at least in relation to AI-based systems (which would include self-driving cars, by the way).

Perhaps there is a solid basis for the needed regulatory approval which, per Musk's recent tweet, will either be a complication or, worse, pose an obstacle as an undue burden.

This brings us further to Musk's own words on the subject of the need for AI-related regulations. In a February 17, 2020 tweet, Musk said, "All organizations developing advanced AI should be regulated, including Tesla."

If this tweet is taken literally, it would seem that rather than fretting about regulatory approval of Tesla's capabilities, such regulation is essentially on the wish list as something that ought to be done, and rightfully so.

Now, you can certainly still argue about whether the regulations themselves are on target or not, but that is an argument that ought already to be taking place, and especially in a public way, given the life-and-death issues involving self-driving cars, both semi-autonomous and fully autonomous.

Many efforts are underway right now regarding the regulatory aspects associated with self-driving cars, including at the federal, state, and municipal levels, along with numerous industry bodies that are shoring up standards which will inevitably be incorporated into some form of regulation.

That being the case, what beef do Tesla or Musk have? It would be useful to know it now, so they could seek to shape or reshape those efforts.

Is it something technical?

Or is it that they believe the process will take too long to undertake? In that case, perhaps they should suggest how the process could be streamlined, while hopefully retaining the safety and due care expected by the public.


Supporters of Tesla and Musk would likely say that regulatory approval aspects are something that should not hinder the achievement of Tesla's autonomous ambitions.

Okay, if that is the case, it would seem useful to have a fully spelled-out list identifying which regulations are unnecessary, or, if the regulations are perhaps overly bloated or superfluous and will thus block or hamper progress, an explanation of how so.

In July 2017, Musk said: "I have exposure to very cutting-edge AI, and I think people should be really concerned about it."

Furthermore, his remarks from July 2017 included this: "I keep sounding the alarm, but until people see robots walking down the street killing people, they don't know how to react, as it seems so ethereal."

Parlaying his stated qualms, some critics worry that we are heading toward self-driving cars that will go down our streets and highways at risk of getting into car crashes and killing people, and thus the eerily predictive nature of Musk's words foreshadows such a possibility.

Well-considered regulations and their enforcement might help alleviate or at least dampen the alarm bells Musk keeps sounding, but you cannot in the same breath allude to regulations as a party crasher that will undermine things, at least not without directly stating how that appears to be the case and taking proactive action accordingly.

I am not making an argument here per se about the nature or necessity of regulations (that is an ongoing debate in its own right); I am simply pointing out that regulations are a consideration for self-driving cars, for society, for our well-being, and so on.

In the case of AI, some believe that we have not adjusted existing laws enough to allow the breakthroughs that AI brings to our daily lives, while others sometimes claim that existing laws will stifle the adoption of AI (for more on these meaty issues, see my FutureLaw 2020 coverage at this link here).

Whichever side you fall on, now is the time to get into the game and start participating in discussions about AI and the law, doing so now, and helping to guide the future of AI in our society.

There are many unanswered questions, including those above regarding real or imagined hobgoblins.