What if it was built right into a stack?
Generate code, content, whatever …
So I have been using the free version of ChatGPT for a couple of weeks. I have found it is very smart but has no wisdom. It’s like a college graduate with no street sense. It often does not know how topics fit together in a real-life application. A lot of theory but very little experience. Some things it does well, but mostly it is like holding a new hire’s hand on his or her first job.
What have you been using it for? I find it rather good at most things I’ve asked it about: its knowledge of languages and literatures, for instance, is astonishing. And whilst it shouldn’t have any wisdom, I often find its answers contain really quite a lot of wisdom. Of course, that is because it has been trained on materials that include much of the wisdom of our species. For this reason I find myself keeping many of the things it has written for me, because they are worth returning to. I have a similar reaction to Midjourney: while the whole genre of computer rendered imagery, from games design to Pixar animations, had an uncomfortable feeling of shallowness for me, like a film set, I find Midjourney’s productions (across its incredibly wide variety of genres) draw me in, as if everything has intriguing depths to it. I certainly wasn’t expecting this of AI, but I am enjoying it enormously.
And here’s a nice little example.
What is the difference between wisdom and knowledge?
Wisdom and knowledge are related concepts, but they are not interchangeable. Knowledge refers to the information, facts, and understanding that one acquires through learning and experience. It can be acquired through education, training, research, and personal experience.
On the other hand, wisdom is the ability to apply knowledge in a practical and meaningful way. It involves a deep understanding of human nature, the world, and oneself, as well as the ability to make sound judgments and decisions based on that understanding. Wisdom often requires experience, reflection, and insight into the complexities of life.
In short, knowledge is what we learn, while wisdom is how we apply what we learn in the real world. While knowledge is important, wisdom is often considered to be a higher level of understanding that allows one to navigate the complexities of life and make better decisions.
Mostly Christian research projects. It’s been great for “keyword” research or for “category” research. However, it sticks in things that have nothing to do with the topic, or worse, are the exact opposite of traditional Christian beliefs. It also likes to stick in a bunch of hot-button topics, often jamming them in where they simply don’t belong.
I have used it on other things. Same result, but not as obviously contradictory; rather, it is just out of place.
My point is, it’s obvious to me that this is just fancy programming with propaganda mixed in.
It’s still useful, but I have to edit at least half of every 800 words.
One of the themes Chat GPT constantly reiterates is that it is there as an assistant, and indeed this is what it does best. I find it makes excellent summaries of just the right length, which are information-rich but also accessible and easily readable. But I don’t want it to write for me — that would be taking my voice away from me. Reworking what it comes up with is just perfect.
But there is much more that it can do. I asked it to translate a mediaeval Arabic poem the other day: we cycled through different versions, each of which had its merits. I asked it to parse each word of the original text — to identify what part of speech it is, and what it means. This is a huge boon for translating and learning. You could do the same with Hebrew, Aramaic and Greek: now we don’t need to know the original languages to be able to get to grips with the text. That’s an amazing thing!
Once we had a translation I was happy with, I asked it to translate the original in the same way into… I pulled three languages out at random: Italian, Tagalog and Japanese. Ten seconds. When I pasted these into Google Translate to check how well it had done, the re-translations back into English were flawless.
As for propaganda, what you have to bear in mind is that the data sets that these LLMs are trained on are so vast no human being could possibly ‘game’ them. If it seems like Chat GPT has a point of view — and to my mind, that point of view is a refreshing openness — it is because that point of view is implicit in the collective body of human knowledge. Those who have contributed to that body of knowledge, across the ages and around the world, have tended to be rigorous but open minded in the search for truth, pro-human, and concerned with the welfare of human beings. And those are the biases Chat GPT reflects.
For my part, I’m just immensely happy to have lived to see the advent of this technology. And I’m having a lot of fun with it.
Mark, you might get (a lot) better results with writing better prompts. There’s a ton of videos on Youtube on ChatGPT prompting and how to improve the results you get.
In addition to what James wrote: aside from wisdom and knowledge, there are data and information. ChatGPT is a language model; it uses data. It doesn’t know anything: it draws on that bulk of data and, depending on the prompt it gets, spits out an often coherent string of sentences. The best example of this is when it stops halfway and you prompt “continue”. You could also paste part of your own writing, add a blank line, then write “continue”, and it will finish your writing in the most (kind of) predictable way. I oversimplify, as what’s happening in the background is very complicated, and even the creators don’t know exactly how it does what it does. OK, so back to the bulk of data: because its output is based on that bulk of data, it is also skewed by what data it has been fed and restricted in its replies. There are creative ways around that, by the way.
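That “most predictable continuation” idea can be sketched in a few lines of plain JavaScript. This is only a toy (a hypothetical bigram counter, nothing like ChatGPT’s actual neural network), but it shows the principle of continuing text by picking a likely next word:

```javascript
// Toy next-word predictor: count which word follows which in a tiny corpus,
// then "continue" a prompt by repeatedly emitting the most frequent follower.
function buildBigramCounts(corpus) {
  const counts = {};
  const words = corpus.toLowerCase().split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i], next = words[i + 1];
    counts[cur] = counts[cur] || {};
    counts[cur][next] = (counts[cur][next] || 0) + 1;
  }
  return counts;
}

function continueText(counts, lastWord, steps) {
  const out = [];
  let cur = lastWord.toLowerCase();
  for (let i = 0; i < steps; i++) {
    const followers = counts[cur];
    if (!followers) break; // nothing was ever seen after this word
    // pick the most frequent follower
    cur = Object.keys(followers).reduce((a, b) =>
      followers[a] >= followers[b] ? a : b);
    out.push(cur);
  }
  return out.join(' ');
}

const counts = buildBigramCounts('the cat sat on the mat and the cat ran');
console.log(continueText(counts, 'the', 2)); // "cat sat" ("cat" follows "the" twice)
```

Real models work over learned representations of tokens rather than raw word counts, but the “spit out the most probable continuation” behaviour is the same in spirit.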
And then there’s this… (untried so far)
Chat GPT, code me a plain JavaScript function which takes as input a text, the width of the text container, and the styles which have been applied to that text, and then adjusts the font size of each line of that text separately so that the line exactly fills the width of the text container.
Here’s an implementation of the function you described:
function adjustTextWidth(text, containerWidth, styles) {
  // Off-screen canvas, used only for measuring text widths
  const canvas = document.createElement('canvas');
  const context = canvas.getContext('2d');
  const baseFontSize = parseFloat(styles.fontSize || '16px'); // default 16px
  const fontFamily = styles.fontFamily || 'Arial'; // default font family
  const lines = text.split('\n');
  let result = '';
  for (const line of lines) {
    // Start each line at the base size, then shrink in 0.5px steps until it fits;
    // the lower bound of 1px prevents an endless loop on very narrow containers
    let lineFontSize = baseFontSize;
    context.font = `${lineFontSize}px ${fontFamily}`;
    while (lineFontSize > 1 && context.measureText(line).width > containerWidth) {
      lineFontSize -= 0.5;
      context.font = `${lineFontSize}px ${fontFamily}`;
    }
    result += `<div style="font-size: ${lineFontSize}px">${line}</div>`;
  }
  return result;
}
Here’s an explanation of how the function works:
1. Read the font size and font family from the styles parameter (or use defaults).
2. Split the text parameter into an array of lines.
3. For each line, measure its width on an off-screen canvas and reduce the font size until the line fits the container, then wrap the line in a div element with the appropriate font size.
Note that this function assumes that the text container has a fixed width and that the text parameter contains line breaks (\n) to separate the text into lines. If you need to adjust for different container widths or text formats, you may need to modify the function accordingly.
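As an aside, since rendered text width scales linearly with font size, the shrink-by-0.5px loop could in principle be replaced by a single proportional calculation. Here is a minimal DOM-free sketch of that arithmetic, where measuredWidth stands in for what context.measureText would report at the base font size (an assumption for illustration):

```javascript
// Rendered width is proportional to font size, so the size that makes a line
// exactly fill the container is baseFontSize * (containerWidth / measuredWidth).
function fittedFontSize(baseFontSize, measuredWidth, containerWidth) {
  if (measuredWidth === 0) return baseFontSize; // empty line: leave unchanged
  return baseFontSize * (containerWidth / measuredWidth);
}

// A line measuring 250px wide at 16px would exactly fill a 200px container at:
console.log(fittedFontSize(16, 250, 200)); // 12.8
```

Whether you then snap the result to a 0.5px grid, as the loop effectively does, is a design choice; the proportional version also grows lines to fill the container, which is closer to the “exactly fills” wording of the original prompt.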
On this I disagree, but it is what it is.
And therein lies the issue.
While Christianity is of course “pro-human”, it is centered around the One whom we believe to be our Creator and His love for us. There are more writings and “raw data” on this subject than on any other subject in all of history. It is just surprising to me how “filtered” it is. If not filtered, then injected with non-Christian beliefs.
I will check into this, thanks!
With respect, Mark, I think you may be complaining that it is not gamed enough. When it comes to matters of belief, an LLM has to accurately describe the beliefs of a group of people without being seen either to validate or to discredit those beliefs. That is a thankless task when a person is asked to do it, but — to my mind, anyway — Chat GPT does it gently and gracefully. But there are exceptions. A Facebook friend who believes in conspiracies was very unhappy that Chat GPT had shown little sympathy with the idea of ‘contrails’, and had pointed out that this was an unscientific and unevidenced belief. When I heard that, I closed my eyes and said a silent ‘Thank You!’ to the universe.
There’s quite a discussion about its neutrality. When it was first launched, some used it (deliberately or for fun) to produce extreme output. It turned out ChatGPT was “biased” towards, let’s call it, the general view of the world of about 15 years ago (so think pre Black Lives Matter, pre LGBTQ+). That was “corrected”, so now it can write lovely poems about Biden but not about Trump, and won’t criticize Bill Gates or Biden (though you can get around this through, among other things, role playing). In general it’s quite liberal and not so conservative.
I think it’s because of this that you aren’t getting the results you’re looking for with a generic prompt, but I also think that the more specific you are, the better the results. I get better results when I keep instructions short and build them on top of one another.
I am doing research on how to better prompt. I thank you all for your advice.
FYI: I never expected the computer to understand things of the heart and soul. Having said that, it is just surprising to me how contradictory/tainted the information can be.
Hi
I think that it is important to understand that AI language models are not conscious or sentient beings; they are advanced computer programs designed to process and generate text. Their operation consists of several stages:
1. Learning how words relate to each other: this helps them understand the structure and meaning of sentences.
2. Reading lots of text: by trying to guess the next word in a sentence, they learn grammar, facts, and patterns in human thought.
3. Fine-tuning: they are adjusted and refined based on specific examples to help them generate more accurate and relevant responses.
Through these steps, AI language models can understand and respond to human input in a way that appears intelligent and natural. However, it’s important to remember that they are ultimately just computer programs analyzing patterns in data.
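Steps 2 and 3 can be illustrated with a toy counter-based “model” (purely illustrative; real systems adjust millions of neural-network weights, not word counts): training teaches it the most common next word, and a small weighted “fine-tuning” pass shifts its guess.

```javascript
// Toy model: a table of next-word counts. "Training" adds counts from text;
// "fine-tuning" adds extra-weighted counts from curated examples.
function train(model, text, weight = 1) {
  const words = text.toLowerCase().split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i], next = words[i + 1];
    model[cur] = model[cur] || {};
    model[cur][next] = (model[cur][next] || 0) + weight;
  }
  return model;
}

// Guess the word most often seen after the given word
function guessNext(model, word) {
  const followers = model[word.toLowerCase()] || {};
  let best = null;
  for (const w of Object.keys(followers)) {
    if (best === null || followers[w] > followers[best]) best = w;
  }
  return best;
}

const model = train({}, 'good morning good night good morning');
console.log(guessNext(model, 'good')); // "morning" (seen twice, vs "night" once)
train(model, 'good night good night good night', 3); // fine-tune with heavy weight
console.log(guessNext(model, 'good')); // now "night"
```

The point of the sketch is only that fine-tuning does not add knowledge; it re-weights what the model already tends to say, which is why step 3 changes behaviour so effectively.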
It’s great fun playing with Chat GPT,
“Could you tell me the principles of quantum physics in 250 words in the style of Norman Wisdom?”
But what happens when AI becomes so popular that it is eventually the norm (no Norman Wisdom pun intended)? Where will it derive its references from if virtually everything it is gleaning information from is also AI-generated?
It’ll be like digital inter-breeding; it will start to mutate and develop deformities. Now that will be exciting!
It’s worth noting that nothing we’re seeing so far is AI generated. These are very sophisticated and intelligent responses to human queries, accessing a body of work (whether in images or writing) that humans have produced. It would be an even bigger step for AI systems to start generating their own materials, musing on the meaning of silicon life, or creating images from their own imaginations. Personally, I’m not convinced that will ever be possible.
At this stage, I’m much more interested in how we can use what we have here. For instance, if you fancy enlivening your website with, say, quirky flat-color illustrations, you can get Midjourney to produce some really stunning pieces. Illustrators will be, quite reasonably, upset. But I suspect most of us don’t have the budget to pay an illustrator $2000-3000 for, say, twelve illustrations. And just as RW/Stacks made it possible for us to do many of the things that corporations with big budgets were doing on the web, so AI can dramatically up our game again. Multi-language? Fluent pieces without Google Translate glitches are at our fingertips. In the next 12 months, while I’m waiting for StacksPro (let alone an AI powered website builder) there is a whole bunch of really exciting new stuff I can be doing on my sites. Even generating interesting new blog content has been supercharged.
Absolutely to all. I think it’s marvellous. I’m finding that it is the ingredients that produce the tastiest results: the more you craft your questions or criteria, the more amazing the results, with both Chat GPT and the image makers. I’ve found that my requests become almost literary pieces of their own.
I think the idea of ChatGPT has been around a long, long, long time, and even the Lord himself made a declaration of what would result.
“The Lord said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them.” (Genesis 11:6)
I think there will come a time when one language will exist, and that time may be now, where the one language is essentially what creates the Internet and all things related: html/css/markdown/php/code/coding/etc – and now ChatGPT – with its trend toward non-Christian beliefs/positions.
When the time comes that all the people who lived before the Internet have died, I think many if not most of the people who never lived before the Internet will come to believe the Internet is God. Already I can ask my Amazon Echo, “What’s the weather?”, and I receive an instantaneous answer.
Will there come a time when this is believed to be an answer to prayer?
Like all of you, I’m writing this from a device that has a logo (and “logo” comes from logos, the Word) with a bite taken out of a piece of fruit.
“but you must not eat from the tree of the knowledge of good and evil, for when you eat of it you will surely die.” (Genesis 2:17)
Without taking the subject too far off topic — I understand that the topic of religion is subjective to most people. Obviously, to me, it’s a black-and-white issue.
The root of my comment about it not having a heart and soul is the same difference between humanity and the rest of creation. One was created in the image of God; all the rest were and are not, including the Internet and all of the technology that is available on it.
I really don’t want this topic to get so far off track that we lose sight of the common grace of ChatGPT. It is just a tool. My comment was that it’s a limited tool in regard to what I need/would like it to do.
But I have a lot of room to learn about how to use this tool. And for that, I thank you all for responding on my comments.

😂🤣