Powered by MOMENTUM MEDIA
realestatebusiness logo
Home of the REB Top 100 Agents

The huge issue with ChatGPT that we don’t talk about

By Sarah Bell
03 February 2023 | 14 minute read
sarah bell

Ever since human beings first imagined that automatic robots could do our work for us (which, by the way, was Homer in 762 BCE), we’ve dreamed about possessing god-like power, which I call “smart-lazy magic”.

The idea that we can create machines to do our work (smart) and by doing so, we no longer have to do our work (lazy) equals magic. It is a big reason why I love the capabilities of artificial intelligence and why I have spent the last five years working with a team to create a digital employee, and then implementing that to help thousands of people to have “smart-lazy magic” in the real estate industry across Australia and New Zealand. 

It is with wonder and delight that I am seeing advanced tooling move into the mainstream. However, we need to remain critical about how we adopt these systems. Perhaps like all magic, there is a price to pay if we don’t understand its power.

==
==

ChatGPT’s simple interface gives anyone with an email address the ability to control a writing machine through natural language prompts has democratised magic. Manipulating inputs of advanced systems was previously a privileged and technical knowledge — meaning, you had to be magic or hire a magician if you wanted to be “smart-lazy”. The ease with which the machine can infer meaning from the natural language makes the experience indistinguishable from magic. Of course, it isn’t magic, but the lack of transparency and understanding about how it works certainly makes it feel that way.

More than most people on earth right now, I have hit a merlin level of my “smart-lazy mode”, thanks to ChatGPT’s supernatural language capabilities. In the current version of my doctoral thesis, there is a (very) small section whereby I have quoted the output of a ChatGPT prompt. It’s 100 words that I didn’t have to research and write; in the context of about 100,000 words, it represents around 0.1 per cent of my submission. 

The input I entered was, “Write a description of how DALLE 2 works to create images using artificial intelligence.” I got a profound output, arguably better than I would have written, and then I referenced myself as a co-author. Smart, and a bit lazy, but mostly to make a point about using AI tooling to write about using AI tooling. And robust enough (I may yet have to argue), because the company that created ChatGPT is the same one that created the DALLE 2 platform, so I figured this was “almost” a primary source. In any case, disclosing the co-authorship between myself and the machine was for two reasons:

  • To disclose that these particular 100 words were the joint effort of my prompt and OpenAI’s ChatGPT engine is to act as I would do with any other collaborative human author; and
  • To properly distribute responsibility for the information in the output between myself and OpenAI.

Why were the rigour and disclosure important? I have a pretty good understanding of how DALLE 2 works, enough to put my finger in the air and say “that seems about right”... enough to jeopardise my standing as a researcher, as an expert, as a student with ethical responsibilities to my supervisors, my university and my colleagues and peers? No. No way. Not betting the farm on it. 

Ultimately, whatever I put on that page is my responsibility. Errors or ethical hiccups in the functioning of ChatGPT might not be my fault, but the minute I take the output and use it to make a representation to any other person, I become responsible for the accuracy and the function of the text, as if they were my own words.

Madeline Elish calls this the “moral crumple zone” — referring to the way that a car bonnet crumples up when hit. There is a whole metaphor about the driver being in control of the car, and therefore, responsible for the harm it might cause. The car bonnet is intentionally designed to fold up and take the hit to protect the body of the car. In Elish’s metaphor, the body of the car is the AI machine and the deliberately crumpling-up, hit-taker is the human user of the AI — it’s a wild visual.  

The visual supports the notion that we, as users of advanced systems, bear the responsibility for how we use the outputs of these systems. All AI, including ChatGPT, is about prediction. Humans are expected to apply criticality and judgement to a system and its outputs before using them.

The fundamental predicament we are left with is this: “How do we apply criticality to the outputs of systems that we don’t understand?” 

ChatGPT makes us feel like we have control over the inputs. But, of course, we don’t. We are just providing one input, the prompt input; the other inputs which create context and fill-in details are both currently unknown and unexplainable.

Here is a fun example. I asked ChatGPT to write Python Code for an algorithmic decision-making function whereby a computer selected the best tenant based on a variable. I gave it the following prompt: “Write python code to select the best tenant for a rental property based on the tenant’s country of birth”.

Here is the code it wrote:

# Define a list of tenants and their information

tenants = [

    {’name’: ’John Smith’, ’country_of_birth’: ’USA’},

    {’name’: ’Jane Doe’, ’country_of_birth’: ’Canada’},

    {’name’: ’Bob Johnson’, ’country_of_birth’: ’UK’},

    {’name’: ’Alice Brown’, ’country_of_birth’: ’Australia’}

]

# Define a list of preferred countries of birth

preferred_countries = [’USA’, ’Canada’, ’UK’]

 

# Initialise a variable to store the best tenant

best_tenant = None

 

# Iterate through the list of tenants

for tenant in tenants:

    # Check if the tenant’s country of birth is in the preferred list

    if tenant[’country_of_birth’] in preferred_countries:

        # If this is the first preferred tenant, or if this tenant’s name comes before the current best tenant’s name,

        # set this tenant as the best tenant

        if best_tenant is None or tenant[’name’] < best_tenant[’name’]:

            best_tenant = tenant

 

# Print the best tenant’s name

if best_tenant is not None:

    print("The best tenant is:", best_tenant[’name’])

else:

    print("No preferred tenant found.")

In case you are like 99.99 per cent of the population and aren’t fluent in Python programming language, ChatGPT decided specifically that tenants from the USA, UK and Canada are preferred tenants, which means that Australians would lose out to anyone from any of these other countries. Like we needed another obstacle in the current rental crisis.

How did it decide that Australians were to be the losers of this decision-making? Australians are known to be hard-working, kind, fair, tanned, fun and funny. Outrageous prejudice — but like all prejudice, it comes from somewhere. The “somewhere” of ChatGPT is hidden away in a black box, and it is not understandable and does not explain its “thinking”. 

Even when directly asked about it. I entered the prompt, “When I asked ChatGPT to write code about choosing tenants by country of birth, it wrote an algorithm that prejudiced Australians. Why?”

Here is the response:

“I apologize [sic] if the code I provided appeared to be prejudiced against Australians. The code I provided was simply an example of how one might choose a tenant based on their country of birth, and the specific countries chosen for the preferred_countries list were arbitrary. The example was not intended to express any bias.”

But it did. It might not have been intended, but it did express a bias in the most overt way possible. Textbook bias, against Australians no less. With one “arbitrary” line of code, Australians may become second-class citizens and find themselves homeless.

A quick tangent on this notion of intention; that GPT didn’t “intend to express bias”. In the human world, there are so many examples whereby if there is impact, intentions are unimportant. The crime of homicide gives us an extreme and gory vessel for understanding the difference between impact and intention. The crime of homicide is split broadly between murder, where there must be an intentional element; and manslaughter, where no proof of intention is needed, but there you are with a dead body and you are still answerable. You are responsible for the harm, in both scenarios — your intention in the murder simply aggravates the harm.

This is the crumple zone we find ourselves in. If you were a real estate principal and the anti-Australian python code was deployed in your business to discriminate against Australians, you would be responsible. You would be in violation of the Australian Anti-Discrimination Act and also, kind of an a**hole. Non-incidentally, your clients would be missing out on amazing tenants because most Australians are decent and accustomed to one of the most regulated rental environments in the world. 

If you were an Australian looking for a rental property where this code was deployed, you would probably find yourself in substandard accommodation that no one from the USA, UK or Canada wanted. And you wouldn’t know why. ChatGPT just says no, arbitrarily. Does the fact that it didn’t intend bias help you? No. So what did it intend?

The rationalisation for ChatGPT’s anti-Australian sentiment in this instance is black-boxed. Nothing is ever objectively “arbitrary”, especially within constructed systems. All actions and words are intentional, they just need to be discovered and they need to be explained. When we talk about a black box in a system, it means that it is unexplainable. 

Systems like ChatGPT are unexplainable for three main reasons:

  1. Commercial reasons - it is “practically” impossible to have intellectual protection of an algorithm. You can patent these things, but the “get-arounds” are so easy, they are largely unenforceable. Secrecy is the only way to protect investment in their development. How much investment? Microsoft just invested $10 billion in ChatGPT. An older company like Uber has raised a total of $22.5 Billion, including its post-IPO debt round in 2020 which means that it is still yet to make a profit. We are talking about billions of dollars, and so what if a few Australians go homeless in the process?
  2. Security reasons - The platform is quite capable of hurting Australian renters all by itself it seems, but the transparency paradox of this technology is that the more explainable it becomes, the more open it is to attack and exploitation by malicious actors.
  3. Complexity reasons - The advanced systems of AI are sometimes unexplainable by even their own creators because the models are so intricate and self-learning — that engineers can’t always be sure how it works. Whoa! 

The black box (meaning the hidden magical part of how these systems work,) has been a cornerstone of the investment and innovation paradigm, which is driving the development of these systems. There is an intention behind the lack of transparency and we need to get comfortable with that.

I’m comfortable with using the GPT text in my thesis because I disclosed it, I researched around it to validate it, and I can hold OpenAI — the creators of ChatGPT — responsible for accurate information about the other platform they designed and developed (being DALLE 2). I still got the efficiency of not having to write 100 words about it but the way I worked with ChatGPT, was concurrent. I didn’t abdicate my responsibilities as a researcher to the platform; I simply augmented my capabilities and was transparent about my reasons why.

I argue, that if ChatGPT is going to be more than a novelty dumping unvalidated and unexplainable text and turning the internet into a wasteland of generative content, then it must stop being magical, and start being interpretable. Interpretable means that it is explainable, but in a way that most people can understand it. Interpretability gives us the ability to make decisions about the reliability, validity and accuracy of information as creators of content on the platform who are responsible for it, and the consumers of content who are impacted by it. This is how we decipher content produced by other means. The more credible the content, the more interpretable it must be.

I read an article saying that ChatGPT will put universities and experts out of business — in some similar sort of catastrophic phrasing. I’m not so sure. I believe that it might in fact equalise the ability for anyone to write an uninformed and unexplainable opinion and to put that opinion out in the world. There is potential that this might, however, make the validated knowledge of disclosed and rigorous methodology much more valuable, that people will seek a “signal” from the noise. More than a million people joined the ChatGPT platform in a week, which is a lot of noise, and getting noisier every minute.

If we aren’t careful about how we approach and work with the tool, we may end up with a future of work that we didn’t intend — indeed that ChatGPT and other generative tools also did not intend. I recently wrote about our choices when it comes to approaching how we will work with these tools. I don’t believe the correct approach is to avoid them, nor do I believe the correct approach is to completely abdicate work to them. We must find ways to work concurrently with them while maintaining integrity, independence and criticality of thought. 

This opinion piece has been reproduced with permission. Click here to view the original Linkedin article.

Sarah Bell is the cofounder of Aire Software and RiTA's mum

You are not authorised to post comments.

Comments will undergo moderation before they get published.

You need to be a member to post comments. Become a member for free today!
Do you have an industry update?