Conversational Programming
Parrots, Pandas, and Pythons.

It’s always good to have a knowledgeable human at hand to ask all sorts of programming questions. Why does this weird bug happen? Do you have any idea why computer says no? The human does not even have to be able to answer that particular question; they just need to sit there and listen. Enunciating questions always helps - suddenly, knots in the brain untie, solutions appear, the bug is found. In other words, that’s human-based rubber duck debugging.

Alas, humans are not always at hand. My computer has not yet replied to my conversational attempts.

There is a small rubber duck on my desk. It was given to me by a dear friend and has accompanied me through many exams. It’s great for debugging, but can only squeak.

As with every hype, I have been skeptical of the AI craze. It’s a stochastic parrot - but yes, a good one. However, I do see its benefits and how it can change our work for the better. And AI has finally made programming conversational.

Parrots and PowerShell

At work, I had to write my first PowerShell scripts, having previously ducked out in favor of Linux. Where to start? I had a clear problem to solve, but only foggy notions about PowerShell. So I asked questions. ChatGPT, how do I solve X? Show me how this cmdlet is used. How do I nicely print data on screen? All the basic questions that would otherwise have cost me so much googling and deep-diving into PowerShell’s documentation.

Aware that I was talking to a stochastic parrot, not a duck, I split the task into very small subtasks, and thus sub-questions, and slowly progressed in my conversation with ChatGPT to solve each one. I checked every reply and cross-checked it against the documentation, but given the AI’s answer, I at least knew where to look. With my programming experience, I could also tell when the AI was clearly wrong and find my own solution.

ChatGPT served as my parrot-duck, the virtual assistant who never tires of my questions and cheerily provides (wrong) answers. I remember my early programming days when I despaired: back then, no one could answer why my beginner Java code broke and could not exponentiate my numbers. I was young, there was no Google and no knowledgeable human around. No ChatGPT. For such a beginner problem, the AI could have provided a correct answer in seconds and even been the go-to tech human I missed for such a long time.

Pandas and Python

To introduce another species to the zoo, I recently met Pandas for the first time. Not the animal, the Python library for data science. We have a manual workflow at my job that I was desperate to automate, and Pandas was the perfect library for some data crunching. I also finally wanted to do a deep dive and learn more about the Excel replacement (guess which one I love more). Pandas is a big one - a Django-sized package in a field that I’m not too familiar with - but I wanted to learn more.

I quickly got lost in the documentation and did not understand some existing code written by a co-worker.

So I asked.

ChatGPT answered, as always cheerily and confidently. Based on my experience, I corrected the parrot, refined my questions and asked again and again, slowly gaining a better feel for the library and a deeper understanding of how its concepts work. ChatGPT now thinks I sell fruit based on my made-up context, and some answers eerily parroted content from StackOverflow. Nevertheless, I persisted and now have a nice script.
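
To give a flavor of the result - not my actual work script, just a minimal made-up sketch in the spirit of the fruit-selling cover story I fed ChatGPT - the kind of Excel-style crunching Pandas does looks roughly like this:

    # Made-up fruit "sales" data, in keeping with the cover story ChatGPT now believes.
    import pandas as pd

    sales = pd.DataFrame({
        "fruit": ["apple", "banana", "apple", "cherry", "banana"],
        "region": ["north", "north", "south", "south", "north"],
        "kg": [12.0, 7.5, 3.2, 1.1, 4.4],
    })

    # The sort of thing one would otherwise do by hand in Excel:
    # total kilograms per fruit, biggest seller first.
    totals = sales.groupby("fruit")["kg"].sum().sort_values(ascending=False)
    print(totals)

Column names, numbers and the fruit theme are invented; it’s the groupby pattern that matters, and that’s what I kept asking the parrot about.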

I love the process. It feels like a human patiently replying to all my questions, one who never tires and never ridicules my ignorance. Besides generating illustrations for blog entries, this is the second AI use case that I have found applicable to my daily work - provided you know how to spot and sort out the bad, off-track answers. And don’t forget about the inherent bias in the data.

ChatGPT reminds me of some encounters in China: if you ask for directions, a finger will always point you somewhere. There is always an answer. The person saved face by providing it - whether it’s right or wrong is up to you to decide, and you will realize it after you’ve wandered in the wrong direction for an hour. We should treat the stochastic parrots the same way: before wandering off, cross-reference the answer with your map.

The term stochastic parrot was coined by Prof. Emily Bender. ChatGPT says it was Alex Irpan. Guess who is wrong, and what that says about inherent data (and gender) bias.

Image source: “a python, a parrot and a panda meet in a bar and have a good conversation”, Nightcafe/SDXL. The AI thinks pythons are also pandas, just with scales?


Last modified on 2023-11-18
