> Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML),[1] is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI.[2] It contrasts with the "black box" concept in machine learning, where even its designers cannot explain why an AI arrived at a specific decision.[3][4] By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively.[5] XAI may be an implementation of the social right to explanation.[6] XAI is relevant even if there is no legal right or regulatory requirement. For example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, the aim of XAI is to explain what has been done, what is being done right now, what will be done next, and to unveil the information the actions are based on.[7] These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.[8]
https://en.wikipedia.org/wiki/Explainable_artificial_intelligence

- How does [ChatGPT] work?

- Can ChatGPT explain the criteria it uses to filter candidate responses (out of the power set of all possible responses)?

- What sorts of rejectable hypotheses and experimental designs is ChatGPT capable of synthesizing?

...

From https://en.wikipedia.org/wiki/Prompt_engineering :

> Prompts that include a train of thought in few-shot learning examples show better indication of reasoning in language models.[7] In zero-shot learning, prepending text to the prompt that encourages a chain of thought (e.g. "Let's think step by step") may improve the performance of a language model in multi-step reasoning problems.
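
As a minimal sketch of the zero-shot "Let's think step by step" idea (assuming the openai Python package with its pre-1.0 Completion endpoint and an illustrative model name; the arithmetic question is only an example):

```python
# Sketch: compare a plain prompt with a zero-shot chain-of-thought prompt.
# Assumes the openai package (pre-1.0 Completion API) and an illustrative
# model name; only the "Let's think step by step" suffix comes from the
# quoted technique.
import openai  # requires openai.api_key to be set

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

for prompt in (question, question + "\nLet's think step by step."):
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model name
        prompt=prompt,
        max_tokens=256,
        temperature=0,             # reduce sampling randomness for comparison
    )
    print(prompt)
    print(response["choices"][0]["text"])
    print("---")
```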


From https://github.com/oblivia-simplex/ghidra_tools/blob/ac1fdf33d313314cde0eef0f1002a66dd95d59b4/g3po/g3po.py#L165-L179 (MODEL, MAXTOKENS, LANGUAGE, and EXTRA are module-level constants defined earlier in g3po.py):

``````python
def generate_comment(c_code, temperature=0.19, program_info=None, prompt=None, model=MODEL, max_tokens=MAXTOKENS):
    intro = "Below is some C code that Ghidra decompiled from a binary that I'm trying to reverse engineer."
    #program_info = get_program_info()
    #if program_info:
    # intro = intro.replace("a binary", f'a {program_info["language_id"]} binary')
    if prompt is None:
        prompt = """{intro}

```
{c_code}
```

Please provide a detailed explanation of what this code does, in {style}, that might be useful to a reverse engineer. Explain your reasoning as much as possible. Finally, suggest a suitable name for this function and for each variable bearing a default name, offer a more informative name, if the purpose of that variable is unambiguous. {extra}

""".format(intro=intro, c_code=c_code, style=LANGUAGE, extra=EXTRA)
``````

ChatGPT can help explain code and codebases, but can it explain each response?

Similarly, "Is there any way to get the step-by-step solution in SymPy?" [like, e.g., paid WolframAlpha and PhotoMath do]
https://stackoverflow.com/questions/39359220/is-there-any-way-to-get-the-step-by-step-solution-in-sympy
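
SymPy itself does not print textbook-style worked solutions, but for integrals its manualintegrate module exposes the tree of rules it applies, which comes close. A minimal sketch (assuming a recent SymPy; the rule names vary between versions):

```python
# Sketch: the closest thing to "step-by-step" built into SymPy itself.
# integral_steps returns the tree of textbook rules (e.g. integration by
# parts) that manualintegrate would apply; rule names vary by version.
import sympy as sp
from sympy.integrals.manualintegrate import integral_steps, manualintegrate

x = sp.symbols("x")
expr = x * sp.sin(x)

print(manualintegrate(expr, x))  # -x*cos(x) + sin(x)
print(integral_steps(expr, x))   # e.g. a PartsRule wrapping simpler rules
```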



On Wed, Jan 4, 2023, 5:13 PM Wes Turner <wes.turner@gmail.com> wrote:
Should we expect the output of [ChatGPT] to be stable or deterministic given the same prompt? 

- Does [ChatGPT] "converge" on the same solutions, given the same inputs? Where is there additional entropy in the algorithm or implementation? (See the Python sketch after this list.)
  - random seed(s)
  - Hash randomization
  - distributed system failure
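
As a small illustration of the first two entropy sources on the ordinary Python side (interpreter behavior, not ChatGPT internals):

```python
# Sketch: two ordinary sources of run-to-run nondeterminism in Python.
import random

# 1. Random seeds: unseeded draws differ between runs; seeded draws repeat.
print(random.random())  # different on every run
random.seed(42)
print(random.random())  # identical on every run

# 2. Hash randomization: str hashes are salted per process, so iteration
#    order over a set of strings can differ between runs unless the
#    interpreter is started with a fixed PYTHONHASHSEED.
print(list({"alpha", "beta", "gamma"}))
```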

- "Which data series predict recession, and with what confidence?"
  - Known good: Bond Yield-Curve Inversion (see the FRED sketch after this list)

- "Which economic interventions are appropriate for the current conditions?"

#EvidenceBasedPolicy

On Wed, Jan 4, 2023, 4:34 PM Wes Turner <wes.turner@gmail.com> wrote:
What are the expected limitations of [ChatGPT]?

What is "Prompt Engineering"?
[Prompt engineering - Wikipedia]( https://en.wikipedia.org/wiki/Prompt_engineering )

What lessons about technology reliance could you teach with regard to Clippy?

- "What is ChatGPT? Wrong answers only"
  - Human_n: EDGES WITH REASONING

- "Tell me IDK ("I don't know") when you don't know"

- "How certain are you that that is the correct answer?"

- "Are static analysis code metrics sufficient for Safety Critical code?"

- "Whose code is this based on?"

- "Where and when did you learn this?"

- "Why would a US President abstain from using ChatGPT or similar to fill speeches 'just like what I said before'?"

#Burgundy

A GPT or similar model trained only on formally verified code with associated tests,
and/or on e.g. Lean Mathlib, or e.g. the Principia in SymPy & Cirq; that could probably eliminate my job, but maybe still not teaching.

On Wed, Jan 4, 2023, 6:28 AM Christian Mascher <christian.mascher@gmx.de> wrote:
Hi,

a student of mine was aware of this chatbot and, of his own accord, asked it
about a class assignment. We program in Java with an extra homemade library
class used by some schools in our region.

The bot came up with a "solution" that was flawed in several respects:
1. It used some other (unimported) classes, so the solution doesn't work and
doesn't fit the assignment.
2. It put all the code into the constructor, a typical (design and style)
error for students beginning with Java.

When confronted with problem number one above, it acknowledged the
fault and produced a different, unrelated solution.

Sooo....

I was impressed by how well the chatbot simulated a typical clueless human
who even thinks he is smart, while his code is basically bullshit.
(Probably a result of googling forums, where other learners posted their
solutions to assignments with the given school library classes.) The bot
clearly passed the Turing test ;-)

But...

I don't think the interaction was helpful for somebody who is learning
to program. It is probably less helpful than conversing with other students
who are also not very knowledgeable, as they are at least reasoning humans.

Talking to the bot might be fun to do in the last lesson before
Christmas or so. Entertaining until you realise the software is
"simulating" intelligent conversation, not really talking with insight.
And that could turn out to be a waste of time.

Happy new year

Christian

On 03.01.2023 at 04:06, Jurgis Pralgauskis wrote:
> Hi, happy NY!
>
> ChatGPT can create, fix and explain code
> https://openai.com/blog/chatgpt/#samples
>
> Anyone tried to incorporate it into the teaching process?
> Or have ideas/doubts about how it could help?
>
_______________________________________________
Edu-sig mailing list -- edu-sig@python.org
To unsubscribe send an email to edu-sig-leave@python.org
https://mail.python.org/mailman3/lists/edu-sig.python.org/
Member address: wes.turner@gmail.com