#hypertext
Replied to Kent Pitman

[Explicit and implicit hyperlinks]

@kentpitman @amoroso

> links not just being navigational for the purpose of visible reading order, but hidden sometimes, and needing to be teased out. Deep structure, as it were

Some paper books offer a form of that, where end notes (even nontrivial ones) are not marked in the text.

"The world will end when all words in Wikipedia turn blue."
(Attribution unknown)

These late 1980s and early 1990s papers reviewed the state of hypertext research and applications, covering systems such as NoteCards, GNU Info, Intermedia, CREF by @kentpitman, HyperCard, and more. They capture the intense activity and exploration around a still young and rapidly evolving field.

A Survey of Hypertext
csis.pace.edu/~marchese/CS835/

State of the Art Review on Hypermedia Issues And Applications
academia.edu/download/11333046

Continued thread

@screwtape

That's distinguished from the CREF editor that I wrote in 1984, while on leave from my work on the Programmer's Apprentice to do a summer's work at the Open University.

CREF (the Cross-Referenced Editing Facility) was basically made out of spare parts from the Zwei/Zmacs substrate but did not use the editor buffer structure of Zmacs per se. If you were in Zmacs you could not see any of CREF's structure, for example. And the structure that CREF used was not arranged linearly, but existed as a bunch of disconnected text fragments that were dynamically assembled into something that looked like an editor buffer and could be operated on using the same kinds of command sets as Zmacs for things like cursor motion, but not for arbitrary actions.

It was, in sum, a hypertext editor, though I did not know that when I made it. The term hypertext was something I ran into as I tried to write up my work upon returning to MIT from that summer. I researched similar efforts, and the term seemed to describe what I had made, so I wrote it up that way.

In the context of the summer, it was just "that editor substrate Kent cobbled together that seemed to do something useful for the work we were doing". So hypertext captured its spirit in a way that was properly descriptive.

This was easy to throw together quickly in a summer because other applications already existed that did this same thing. I drew a lot from Converse ("CON-verse"), the conversational tool that offered a back-and-forth of linearly chunked segments like you'd get in any chat program (even including MOO): you type at the bottom, the text above is a record of prior actions, but within the part where you type you have a set of Emacs-like operations that can edit the not-yet-sent text.

In CREF, you could edit any of the already-sent texts, so it was different in that way, and in CREF the text was only instantaneously linear: as you edited a series of chunks, some commands would rearrange the chunks, giving a new linearization that could again be edited. While no tool on the LispM did that specific kind of trick, it was close enough to what other tools did that I was able to bend things without rewriting the Zwei substrate. I just had to be careful about communicating the bounds of the region that could be edited, and about maintaining the markers that separated the chunks as un-editable, so that I could at any moment turn the seamed-together text back into chunks.
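To make that seam-and-split idea concrete, here is a minimal sketch in Python, not anything resembling the original Zwei-based code; the names and the marker format are invented:

```python
# Hypothetical sketch of the seam-and-split trick: chunks are joined into
# one editable string with protected separator markers, edited as if they
# were a single buffer, then split back into chunks at the markers.

SEPARATOR = "\n----8<---- chunk boundary (not editable) ----8<----\n"

def assemble(chunks):
    """Linearize a list of text chunks into a single buffer-like string."""
    return SEPARATOR.join(chunks)

def disassemble(buffer_text):
    """Split an edited buffer back into chunks at the protected markers."""
    return buffer_text.split(SEPARATOR)

chunks = ["First fragment.", "Second fragment.", "Third fragment."]
buffer_text = assemble(chunks)
# ... the user edits any text between the markers ...
edited = buffer_text.replace("Second", "A revised second")
assert disassemble(edited) == [
    "First fragment.", "A revised second fragment.", "Third fragment."]
```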

Inside CREF, the fundamental pieces were segments, not whole editor buffers. Their appearance as a buffer was a momentary illusion. A segment consisted of a block of text represented in a way that was natural to Zwei, and a set of other annotations, which included specifically a set of keywords (to make the segments easier to find than just searching them all for text matches) and some typed links that allowed them to be connected together.

Regarding links: For example, you could have a SUMMARIZES link from one segment to a list of 3 other segments, and then a SUMMARIZED-BY link back from each of those segments to the summary segment. Or if the segments contained code, you could have a link that established a requirement that one segment be executed before another in some for-execution/evaluation ordering that might need to be conjured out of such partial-order information. And that linkage could be distinct from any of several possible reading orders that might be represented as links or might just be called up dynamically for editing.
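Purely as an illustration (the class and the EVAL-BEFORE link name are invented here; this is not CREF's actual representation), a segment-with-typed-links structure, and the conjuring of one execution order out of partial-order links, might look like this:

```python
# Hypothetical sketch of CREF-style segments: a text block plus keywords
# and typed links to other segments. Deriving one valid execution order
# from EVAL-BEFORE partial-order links is a topological sort.
from graphlib import TopologicalSorter

class Segment:
    def __init__(self, name, text, keywords=()):
        self.name = name
        self.text = text
        self.keywords = set(keywords)
        self.links = {}  # link type -> list of target Segments

    def link(self, link_type, *targets):
        self.links.setdefault(link_type, []).extend(targets)

a = Segment("a", "(setq x 1)", keywords=("setup",))
b = Segment("b", "(print x)", keywords=("output",))
s = Segment("s", "Summary of segments a and b.")

s.link("SUMMARIZES", a, b)   # one link out to several segments...
a.link("SUMMARIZED-BY", s)   # ...and inverse links back to the summary
b.link("SUMMARIZED-BY", s)
a.link("EVAL-BEFORE", b)     # a must be evaluated before b

# Conjure a total order out of the EVAL-BEFORE partial order.
# TopologicalSorter takes predecessor sets, so invert the forward links.
preds = {seg.name: set() for seg in (a, b, s)}
for seg in (a, b, s):
    for target in seg.links.get("EVAL-BEFORE", []):
        preds[target.name].add(seg.name)

print(list(TopologicalSorter(preds).static_order()))  # e.g. ['a', 's', 'b']
```

The SUMMARIZES/SUMMARIZED-BY pair mirrors the bidirectional linkage described above, while the execution-order edges stay distinct from whatever reading-order links might also exist.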

In both cases, the code continued to be used by the research teams I developed it for after I left. So I can't speak to that in detail other than to say it happened. In neither case did the tool end up being used more broadly.

I probably still have the code for CREF from the time I worked on it, though it's been a long time since I tried to boot my MacIvory, so who knows if it still loads. Such magnetic media was never expected to have this kind of lifetime, I think.

But I also have a demo of CREF where I took screenshots at intervals, hardcopied them, and saved the hardcopy, which I much later scanned. That is not yet publicly available, though I have it in Google Slides. I'll hopefully make a video of it sometime, just for the historical record.

3/n

I found the note I left myself from just waking up:

Dreamed I was back in the '80s, writing a Gopher/Markdown-like system on the Atari 8-bit, with the line-oriented editor I wrote on TRS-80, not MEDIT. I was explaining heading levels to someone, and drawing ATASCII banner art for h1, inverse for h2, etc.

This wouldn't be too hard to really do (but really, use MEDIT). Atari DOS is fast enough to load pages on demand.
#dream #atari #retrocomputing #hypertext

👉 "Real-Time History: Engaging with #LivingArchives and Temporal Multiplicities." After a great start yesterday, the conference resumes with a workshop on web archives and the Archives Research Compute Hub by Karl Blumenthal.
Thrilled that I will be able to present my paper "Weaving Time: Hypertextual Historiography in the Age of Living Archives" later in the afternoon.

ℹ️ Check out the conference program: ghi-dc.org/events/event/date/r
#DigitalHistory #DH #Hypertext


The NoteCards hypermedia system was developed in Interlisp at Xerox PARC by @fghalasz Frank Halasz, Tom Moran, and Randy Trigg. In this 1985 videotape Moran introduced the main concepts of NoteCards, and Halasz demonstrated how to use the system to organize notes and sources for writing a research paper.

archive.org/details/Xerox_PARC

Douglas Engelbart
Born 100 Years Ago Today
30 Jan 1925

“By ‘augmenting human intellect’ we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.” — Douglas Engelbart (1962)

An essay on Engelbart’s motivation, with links to video and other historical sources on his life and work.

#Engelbart #hypertext #botd
tractionsoftware.com/traction/

One of the yet-unexplored features of XTXT is its potential to multiplex multiple HTTP connections over a single serial link—without relying on TCP/IP.

In the past, we had solutions like PPP (and later PPPoE), which allowed network-layer communication over point-to-point connections. But those were fundamentally data link layer protocols, operating at a lower level in the OSI model.

XTXT, on the other hand, can be thought of as an application-layer alternative for achieving similar multiplexing, but with much lighter networking demands. It’s designed to work seamlessly on smaller systems by simply exchanging text streams—no need for OSI compliance or the overhead of traditional networking stacks.

With XTXT, all you need is the ability to send and receive structured text. And most computers, even very old ones, can do that. The simplicity is the beauty.
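The post doesn't specify XTXT's actual framing, so here is a purely hypothetical sketch of the general technique: interleaving several logical streams over one text channel by tagging each line with a channel ID (the "@channel payload" format below is invented for illustration, not XTXT's format):

```python
# Hypothetical illustration of application-layer multiplexing over a
# single text stream: each logical channel's lines are tagged with a
# channel ID, interleaved on the wire, and demultiplexed on the far
# side. The "@<channel> <payload>" framing is invented, not XTXT's.

def mux(messages):
    """Interleave (channel, line) pairs into one tagged text stream."""
    return "".join(f"@{channel} {line}\n" for channel, line in messages)

def demux(stream):
    """Split a tagged text stream back into per-channel line lists."""
    channels = {}
    for raw in stream.splitlines():
        tag, _, payload = raw.partition(" ")
        channels.setdefault(tag.lstrip("@"), []).append(payload)
    return channels

# Two logical HTTP exchanges sharing one serial link:
wire = mux([
    ("1", "GET /index.html HTTP/1.0"),
    ("2", "GET /style.css HTTP/1.0"),
    ("1", "Host: example.org"),
    ("2", "Host: example.org"),
])
print(demux(wire))
# {'1': ['GET /index.html HTTP/1.0', 'Host: example.org'],
#  '2': ['GET /style.css HTTP/1.0', 'Host: example.org']}
```

Since the framing is plain text, any machine that can read and write lines over a serial port could take part, which matches the lighter-demands claim above.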