Author: Jude Nelson
Date:
To: Hendrik Boom
CC: dng@lists.dyne.org
Subject: Re: [Dng] OT: separate GUI from commands (was: Re: The more things change, the more they remain the same)
Hi Hendrik,

On Wed, May 27, 2015 at 6:12 AM, Hendrik Boom <hendrik@???>
wrote:

> On Wed, May 27, 2015 at 03:32:00PM +0200, Laurent Bercot wrote:
>
> > As a rule of thumb, developers should always use the smallest possible
> > amount of dependencies for a project, and clearly separate layers - e.g.
> > if a project comes with an engine and a GUI, then the engine should come
> > as a command-line tool in its own package with no dependencies on
> graphical
> > stuff, which would be the role of the GUI package. But I'm a dinosaur AND
> > a fan of minimalism, so I don't expect many people to think like me.
>
> I mean this as a serious question, not as a piece of sarcasm,
> though it could easily be misconstrued as such.
>
> I am testing the limits of the advice to separate interaction from
> the command line, though. Sometimes limit testing leads to discoveries.
>
> I'm in the process of writing (yet) a(nother) editor and output formatter,
> and on reading this, I started to wonder -- just how could one separate
> a command-line version from the UI? I can see that the output
> formatter can be so separated (and very usefully), but the actual
> editing?
>
> Brainstorming welcome.
>


I've been thinking about the general form of this problem (and a general
solution) for months, and I think I'm almost ready to provide a formal
design document.

[The Problem]
Many programs couple their business logic to their presentation logic. The
code that drives the UI is too intermingled with the code that provides the
application's core functionality to use them separately, or to develop them
independently. We would like to be able to do both, so that users can apply
existing tools and skills in one domain without affecting the other, and so
that developers can innovate in one domain without affecting the other.
This should be possible, since rendering a UI is usually (but not always) a
separate, orthogonal concern from performing the desired functionality.

[Motivation]
I was watching my SO create a (long) PowerPoint presentation the other day,
and she wanted to add a "fade in" animation to each bullet point on each
slide. PowerPoint (and every office suite I'm familiar with) makes you do
this by hand, bullet point by bullet point, with a mouse. The UNIX hacker
in me saw this and wondered why it's not possible to simply open a shell
and run something like "for ppt_bullet in $(ppt-ls "bullets"); do
ppt-animate "fade-in" "$ppt_bullet"; done". This would have saved an hour
of tedious, repetitive effort.

The reason this isn't possible is that PowerPoint (and every other office
suite) does not offer a command-oriented UI. What could PowerPoint have
done? One option is to add a program-specific command language (as vim and
emacs do). However, I don't think this is the answer either, for two
reasons:
(1) The UI (the command language in this case) is still tightly coupled to
the program, and has rules and semantics that are only applicable to that
program.
(2) Without careful design, the user can't bring existing knowledge or
existing tools to bear on problems. This is a general gripe I have against
vim and emacs too--if you're going to offer a Turing-complete command
language, why not offer one that is already widely used and lets people use
external tools?

What I would like most is for PowerPoint to be designed so that its runtime
state is exposed to the system shell, letting me operate on it directly.
PowerPoint wouldn't need to ship its own custom language, and I could use
whatever tools I wanted.

[Related Work]
This is something that Smalltalk/SQUEAK got right, as did SecondLife (to a
lesser extent). In these environments, a UI element is fully scriptable by
the user, and can access functionality in other UI elements and store
persistent state within them. As a user, I can easily add my own
functional overlays on top of the application, as well as my own custom UIs
that extend the application's functionality on the fly. If PowerPoint were
written as a SQUEAK application, I might have been able to write and attach
a button on the fly that iterated through each bullet point and added an
animation.

This is something that RESTful Web services sometimes get right as well.
The business logic in a RESTful Web service is meant to be decoupled from
the CSS/HTML/JavaScript UI, so you should be able to use any
HTTP-speaking client to interact with the business logic on the remote
servers. For example, this lets Web front-end developers create mash-ups
of multiple services, and it lets devops and back-end developers write
scripts with curl to interact with services. The downside is that you're
limited by the availability and expressiveness of the RESTful API (which
you do not control).
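
As a quick illustration: if PowerPoint's bullet points sat behind a RESTful
API, the animation chore from the motivation above would reduce to a few
lines of shell. (The endpoints, the JSON field name, and the "effect"
parameter are all invented here for illustration--this is not a real API.)

$ base="https://slides.example.com/api/presentations/42"
$ for b in $(curl -s "$base/bullets" | jq -r '.[].id'); do
>     curl -s -X POST "$base/bullets/$b/animate" -d 'effect=fade-in'
> done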

AppleScript, ActiveScript, and nowadays PowerShell try to do something like
this, but they fall short architecturally: they can't access and operate on
all of the application's state (the application is like a server that
exports an API, akin to REST), and they sometimes require the application
to do some (potentially side-effect-producing) computation on access. I
think this has more to do with the fact that these systems were designed
after the fact--the applications they interact with were never designed
with scriptability in mind.

[Design Notes]
The idea I'm working on can be thought of as a re-telling of the SQUEAK
story for the UNIX environment:
* The shell is the binding language between different aspects of the
application, as well as between the application and external tools (this
replaces Smalltalk).
* Persistent state is encoded as files.
* Applications are encoded as directory hierarchies, where:
-- a directory represents either one aspect of the program (e.g. a dialog
box, a text area, a file explorer), or an aggregate of multiple aspects;
-- leaf files are either persistent state for the aspect that their parent
directory represents, or executables that should be run in response to an
external event (e.g. from the user, from the passage of time, or from
another program).

For example, a simple network manager might look like:

$ cd network-manager/ && find .
.
./window-main
./window-main/panel-buttons
./window-main/panel-buttons/button-Cancel
./window-main/panel-buttons/button-Cancel/data
./window-main/panel-buttons/button-Cancel/on_click.sh
./window-main/panel-buttons/button-Connect
./window-main/panel-buttons/button-Connect/data
./window-main/panel-buttons/button-Connect/on_click.sh
./window-main/selectbrowser-networks
./window-main/selectbrowser-networks/on_timer.sh
./window-main/selectbrowser-networks/data
./window-main/selectbrowser-networks/on_select.sh
./window-main/menubar-main
./window-main/menubar-main/help
./window-main/menubar-main/help/002-Credits
./window-main/menubar-main/help/001-About
./window-main/menubar-main/file
./window-main/menubar-main/file/001-Quit

What I was thinking of doing is creating a general-purpose interpreter that
could read this directory tree and present it as a UI (i.e. the equivalent
of SQUEAK). There would be an interpreter implementation that rendered the
application with whatever UI flavor you wanted--GTK, Qt, FLTK, ncurses, an
interactive command prompt, etc.--but would do so independently of the
business logic (i.e. listing and connecting to networks). Instead of
trying to provide its own business logic, the interpreter would simply run
the application's executable files in response to well-defined events (e.g.
a click). As a user, I would be able to add or rearrange UI elements or
add new functionality simply by editing the appropriate files and
directories on the fly, and I could extend the business logic (e.g. make it
better at finding and connecting to networks) without affecting the UI or
other aspects of the program.
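
To make the event-dispatch half concrete, here is a minimal sketch of how
such an interpreter might drive the timer handlers (rendering is omitted,
and the 5-second period is an arbitrary assumption on my part):

#!/bin/sh
# dispatch-timers.sh -- sketch of the interpreter's timer loop.
# Periodically run every on_timer.sh in the application tree, from the
# aspect's own directory so that handlers can use relative paths like
# "./data".
app="${1:?usage: dispatch-timers.sh /path/to/app}"
while :; do
    find "$app" -name on_timer.sh | while read -r handler; do
        ( cd "$(dirname "$handler")" && sh ./on_timer.sh )
    done
    sleep 5
done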

The system would know what kind of UI elements to render and how to arrange
them based on hints from the directory hierarchy. In this example, an
X11-oriented interpreter would create a main window with a bar of panel
buttons for connecting or cancelling, a menu bar with "file" and "help"
submenus, and a listing of available networks. It would run
"selectbrowser-networks/on_timer.sh" to repopulate the network listing
every so often, and would run "selectbrowser-networks/on_select.sh" when
the user picked one. "selectbrowser-networks/on_select.sh" would write the
selection data to "selectbrowser-networks/data". It would run
"button-Connect/on_click.sh" when the user pressed the "connect" button,
which in turn would pull the selection from the
"selectbrowser-networks/data" file and run ifconfig, iwconfig, ip, iw, etc.
to set up the connection to that network.
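
Sketched out, those two handlers might look something like the following
(that the interpreter passes the selected entry as "$1", and the interface
name wlan0, are illustrative guesses rather than a spec):

#!/bin/sh
# selectbrowser-networks/on_select.sh -- record the user's selection as
# persistent state, where other aspects can read it.
printf '%s\n' "$1" > data

#!/bin/sh
# button-Connect/on_click.sh -- read the recorded selection and connect.
# (Obtaining an address, e.g. via DHCP, is left out of this sketch.)
essid="$(cat ../selectbrowser-networks/data)" || exit 1
ifconfig wlan0 up
iwconfig wlan0 essid "$essid"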

Of course, you wouldn't be limited to using an X11-oriented UI, either. An
ncurses implementation could render the above in a shell window, and a
Wayland implementation could render the above in a Wayland session, for
example.

An interesting consequence of structuring applications this way is that you
can trivially solve both network transparency and application distribution:
simply put the application on an NFS mount (or some other network
filesystem), and run the interpreter on the local client. The application
stores its runtime state as files within its directory hierarchy, so that
state is automatically written back to the NFS server. Users could run
remote applications locally simply by mounting the remote volume and
running the interpreter on them in place.
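
Running a shared application from a client might then be as simple as the
following ("ui-run" stands in for whatever the interpreter binary ends up
being called, and the mount details are likewise invented):

$ mount -t nfs appserver:/export/apps /mnt/apps
$ ui-run /mnt/apps/network-manager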

I was considering building such an interpreter, as well as a suite of
applications for Devuan, at some point after finishing vdev. I was
thinking of calling it "shui"--both from the concept of "feng shui" (the
idea of harmonizing everyone with their environment), and as an acronym for
"SHell-oriented User Interface".

-Jude