med-mastodon.com is one of the many independent Mastodon servers you can use to participate in the fediverse.
Medical community on Mastodon


#promptengineering


Here is an #AI engine hack you probably didn't even know you needed.

I've just discovered it and I would like to share.

I run a "pre-prompt" on #ChatGPT:
a kind of dashboard for my sessions, and recently I felt compelled to enhance it.
Previously it lived in Settings as a "persistent prompt" or some such.
Sammy renamed the field to "Tell us more about yourself", but you can still use it as a prompt.
E.g. "Count the words in my prompt."

That's Hack 1.

Hack 2 is the goodie.
The field is limited to 1,500 characters, and if you want more, you're screwed, even if you make it super concise.

So, because an #LLM doesn't care what language it works in, I asked it to use an Asian language, and it came up with a #Chinese / #Japanese hybrid (or so it says).

It still formats the response in English, because I instructed it to. But the instructions are super dense and well below the 1,500 characters, so I can add more should I want to.

I've set it to update the session status every 10 prompts.
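The compression trick above is easy to sanity-check with a few lines of Python. This is just a sketch: the 1,500-character limit is the one claimed in the post, and both instruction strings are made-up placeholders, not the author's actual dashboard prompt.

```python
# Sketch: check that a compressed custom-instruction block fits the
# (assumed) 1,500-character limit of the "Tell us more about yourself" field.
LIMIT = 1500

# Hypothetical example: the same instruction in English vs. a dense CJK rendering.
english = ("Count the words in every prompt I send and report the total "
           "at the top of your reply.")
dense = "统计我每条提示的字数，并在回复开头报告总数。"  # CJK packs more meaning per character

def fits(instructions: str, limit: int = LIMIT) -> bool:
    """Return True if the instruction block fits within the character limit."""
    return len(instructions) <= limit

print(len(english), len(dense))   # the dense form uses far fewer characters
print(fits(english), fits(dense))
```

Counting characters rather than tokens is a rough proxy, but it matches how the settings field itself enforces the limit.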

"This course is intended to provide you with a comprehensive step-by-step understanding of how to engineer optimal prompts within Claude.

After completing this course, you will be able to:

- Master the basic structure of a good prompt
- Recognize common failure modes and learn the '80/20' techniques to address them
- Understand Claude's strengths and weaknesses
- Build strong prompts from scratch for common use cases

Course structure and content

This course is structured to allow you many chances to practice writing and troubleshooting prompts yourself. The course is broken up into 9 chapters with accompanying exercises, as well as an appendix of even more advanced methods. It is intended for you to work through the course in chapter order.

Each lesson has an "Example Playground" area at the bottom where you are free to experiment with the examples in the lesson and see for yourself how changing prompts can change Claude's responses. There is also an answer key.

Note: This tutorial uses our smallest, fastest, and cheapest model, Claude 3 Haiku. Anthropic has two other models, Claude 3 Sonnet and Claude 3 Opus, which are more intelligent than Haiku, with Opus being the most intelligent.

This tutorial also exists on Google Sheets using Anthropic's Claude for Sheets extension. We recommend using that version as it is more user friendly."

github.com/anthropics/courses/

GitHub — courses/prompt_engineering_interactive_tutorial at master · anthropics/courses: Anthropic's educational courses.

"When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don’t need to be a data scientist or a machine learning engineer – everyone can write a prompt. However, crafting the most effective prompt can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model’s training data, the model configurations, your word-choice, style and tone, structure, and context all matters. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model’s ability to provide meaningful output.

When you chat with the Gemini chatbot, you basically write prompts, however this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you will have access to the configuration such as temperature etc.

This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you getting started and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts."
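The whitepaper's point about having "access to the configuration such as temperature" can be illustrated with a small, self-contained sketch of temperature-scaled sampling. This is not any vendor's API; the logits are toy numbers, purely to show what the temperature knob does to the output distribution.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; lower temperature sharpens the
    distribution (more deterministic), higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.2)  # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # much closer to uniform

print(cold)
print(hot)
```

Chat interfaces usually pin this knob for you; prompting the model through an API exposes it, which is exactly the distinction the whitepaper is drawing.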

kaggle.com/whitepaper-prompt-e

kaggle.com — Prompt Engineering (whitepaper)

Vibey (worker) comparison
between #o4 #ChatGPT and #Claude 3.7 Sonnet

So recently I got a new credit card and had difficulty getting it accepted by #Anthropic. Because I have grown reliant on the pro model in my daily work, I paid the #AI tax to #OpenAI.

Here is my experience.

1. I'll restate this because it needs restating: the free models are dumber. The only meaningful assessment can come from the paid models.

2. AI moves at breakneck speed; a month in AI is worth at least six elsewhere. Would you believe there are still six-finger jokes floating around, even though current pro image generators have had that solved for a year-plus?

3. The new ChatGPT model definitely seems smarter.
It seems to burn compute unnecessarily, though, offering multiple solutions to issues.
I liked how quickly it adapted its persona to my work style.

4. I like the new "vibe coding" refactoring, where it goes line by line through the code, changing it. Very sci-fi.

5. The new Pro sub for OpenAI comes with image-generation subs (value+), so you can create images (Anthropic doesn't have that).
Also a #Sora sub, so you can make 10-second videos; if you have seen Sora videos, they are mind-blowing.

6. It has another model called "Monday", which just works like an asshole prompt. More proof that most users still have a lot of ground to cover in #promptengineering

Overall, I think Pro ChatGPT is slightly better than Claude, though I have gotten used to Claude.

"Prompt Engineering" for AI is this today's version of "Don't hold it that way" for the iPhone 4.

Users get the blame for fundamental flaws in the technology and are instructed to adopt behavioural workarounds. These improvised habits lack the causal power to fix the underlying problems in the tech, but they serve to reinforce the notion that the new tech is superior to the tech it's trying to replace or "disrupt". Furthermore, users are taught, "Just keep trying and you'll get it right," without being encouraged to question whether the new tech is the problem, or whether it can ever deliver on its promises.

A crucial difference between early smartphones and the wish that LLMs are a route to "Thinking Machines": later models of phones successfully matured antenna engineering and improved mobile reception, but LLMs are a dead end that can never lead to real Artificial Intelligence.

This can be summarised by the AM/FM Principle: Actual Machines, in contrast to Fucking Magic.

A Prompt Pattern Catalog to Enhance #PromptEngineering with #ChatGPT arxiv.org/abs/2302.11382

arXiv.org — A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT

Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.
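As a toy illustration of the paper's core idea, that prompts can be documented and reused like software patterns, here is a sketch of a "persona"-style pattern as a fill-in template. The pattern name and template wording are my own, for illustration only; they are not taken from the catalog.

```python
# Sketch: a reusable prompt pattern as a parameterised template, in the
# spirit of treating prompts like software patterns. Wording is illustrative.
from string import Template

PERSONA_PATTERN = Template(
    "From now on, act as $persona. "
    "When I ask for $task, respond the way $persona would, "
    "and keep answers under $limit words."
)

# Instantiate the pattern for one concrete use case.
prompt = PERSONA_PATTERN.substitute(
    persona="a senior code reviewer",
    task="feedback on a diff",
    limit=150,
)
print(prompt)
```

The point of the catalog is exactly this kind of reuse: once a pattern is written down, it can be re-instantiated and combined with other patterns instead of being rewritten from scratch each time.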