From: Didier Kryn
Date:
To: dng
Subject: Re: [DNG] kernel drivers [WAS: How long should I expect to wait for openrc to be ready in devuan ascii]
Sorry if this is becoming a very specialized discussion. I think
this thread will not need to continue much longer.

On 05/07/2017 at 22:53, Enrico Weigelt, metux IT consult wrote:
> On 05.07.2017 06:38, Didier Kryn wrote:
>
>>      Also, the API of their driver appeared to contradict that of 
>> Gabriel Paubert, who had been developing a (now discontinued) suite 
>> of free VME drivers for Debian. I talked with Gabriel Paubert more 
>> than a decade ago because I had been using his driver; it turned out 
>> that we had completely different views on what VME is made for. 

>
> hmm, never had any VME device on the table... isn't this just yet
> another memory/peripheral bus? (After a quick look, it seems it could
> even be driven by gpmc.)
>
> I'd imagine it wouldn't take much more than the usual bus bridges do
> (pci, gpmc, ...). I don't recall how it was in 2.6 times, but nowadays
> this isn't really a big deal. Usually those bridges just map memory
> regions to the platform/cpu bus (IOW: the cpu and potentially other
> busmaster-capable neighbors can directly access the memory/registers,
> once the bridge is set up). Anyway, VME support seems to have been
> mainlined for quite a long time.

     It looks, prima facie, like memory mapping, but what sits behind 
can be other things than memory. And there is a whole menagerie of 
addressing and transfer protocols, including chained DMA transfers. In 
our case it is waveform digitizers, and the interaction is so complex 
that it cannot be done in a kernel module; therefore we invoke the 
"raw" VME driver directly from user space. Some others consider that 
there should be one driver per VME device you want to talk to, on top 
of the raw VME driver. These people have only ever had to deal with 
simplistic devices. Gabriel Paubert has never seen an application 
transferring more than a few bytes, while we are transferring data at 
320 MB/s.
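     To give an idea, the user-space side of such a raw driver boils
down to mapping a master window and poking the board's registers
directly. Below is a minimal sketch; the device node name is a
placeholder (mainline's vme_user, for instance, has its own nodes and
ioctls), a real driver would need an ioctl to configure the VME
address, address modifier and data width before the mmap(), and the
chained DMA transfers go through driver-specific calls not shown here.

    /* Sketch of user-space access through a "raw" VME driver.
     * /dev/vme_m0 is a hypothetical device node; the real name
     * and setup ioctls depend on the driver in use. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/vme_m0", O_RDWR);   /* hypothetical node */
        if (fd < 0) { perror("open"); return 1; }

        /* Map a master window onto the digitizer's address range. */
        volatile uint32_t *regs =
            mmap(NULL, 0x10000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        uint32_t status = regs[0];              /* read a board register */
        printf("status = 0x%08x\n", (unsigned)status);

        munmap((void *)regs, 0x10000);
        close(fd);
        return 0;
    }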


>
> If anybody has some real VME machines, I'd be curious to have a closer
> look at them.
>
>> Finally, a few years ago, a group devised a VME API for Linux that
>> every driver has to comply with, and Emerson followed the
>> prescription. The API was very similar to Paubert's, but it also
>> contained some absurdities,
>
> Which absurdities, exactly ?

     Bus errors were not reported to the application but instead caused 
a message to be written to syslog! This makes the API hard to use: your 
application would have to monitor the log files!
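     What an application really wants is for the error to come back to
the calling process, for instance as a SIGBUS it can catch. A sketch of
that, with plain POSIX and nothing driver-specific:

    /* Catch a bus error delivered as SIGBUS on a faulting access
     * to an mmap()ed window, and report it to the caller. */
    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf recover;

    static void on_bus_error(int sig)
    {
        (void)sig;
        siglongjmp(recover, 1);   /* unwind back to the access site */
    }

    int try_read(volatile unsigned *reg, unsigned *out)
    {
        struct sigaction sa = { .sa_handler = on_bus_error };
        sigaction(SIGBUS, &sa, NULL);

        if (sigsetjmp(recover, 1))
            return -1;            /* bus error: caller is told directly */
        *out = *reg;              /* may fault if the board doesn't answer */
        return 0;
    }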


>
>> and, since my project was reaching its end, I just stopped upgrading
>> the kernel and stayed with version 2.6.27. The machines are still
>> running Debian Wheezy with that old kernel.
>
> Well, if there's really no reason to touch the machine at all, it might
> be a good idea to leave it as it is. OTOH, there can be many reasons for
> upgrading at least some parts of the OS, and then you could get into
> trouble w/ that ancient kernel.


     Yes, Udev refused to install. I first used static devices; later I 
started to experiment with Mdev :-)
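     Static devices are less painful than they sound: a node is just a
mknod away. A sketch, using the classic serial-port numbers as an
example:

    /* Create a static device node, as one would on a system
     * without udev.  Major 4, minor 64 is the traditional ttyS0
     * (see Documentation/admin-guide/devices.txt). */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    int main(void)
    {
        if (mknod("/dev/ttyS0", S_IFCHR | 0660, makedev(4, 64)) != 0) {
            perror("mknod");
            return 1;
        }
        return 0;
    }

(Mdev then automates exactly this: "mdev -s" scans /sys at boot and
creates the nodes.)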


>
> I currently have such a case w/ much newer devices (Duagon Ionia -
> industrial control and daq machines, primarily for railway systems).
> Their factory kernel is pretty old and hastily built (even built on
> top of a hackish TI vendor kernel), and the whole driver stack for
> the control/DAQ cards lives entirely in userland (directly polling
> mmap()ed register spaces!), w/ a horribly complex and practically
> unmaintainable library (which seems to have a really long history and
> wasn't even made for Linux/Unix). As we have to support different HW
> and configurations (some customers even have their own boards), we
> need a consistent interface -> IIO. And we need newer kernel features,
> e.g. containers, rt scheduling, etc. So, here I am, reverse
> engineering the old stuff and writing a new driver stack on 4.12 ...
>
>>      VME started in the 70's, and it took until ~5 years ago to 
>> have a VME API on Linux. In the meantime, users lived with vendors' 
>> BSPs. As a client, I didn't question the reasons for all this; I did 
>> what I needed to get the job done.

>
> Well, especially in those niches, those vendor BSPs tend to be difficult
> (I never use them). Usually they add proprietary APIs, so you have
> vendor lock-in, their patches tend to be hard to port, and so you're
> dependent on their support.


     This isn't the case here. AFAIU, their BSP contains a VME driver 
and a device tree. I think it's all GPL. Their business is selling 
hardware, and they make sure their clients can use it.

>
>
> My default approach is moving to mainline kernels and only patching up
> what's needed - using the proper kernel infrastructures (e.g. IIO for
> DAQ devices, instead of proprietary interfaces).
>
> In the long run, that's (in my experience) way cheaper than coping w/
> any vendor BSPs.
>
>

     These are SBCs with everything soldered on: VME, flash memory, 
UARTs, a USB hub, SATA controllers, Ethernet ports, etc. You just plug 
one into a VME crate. It makes no sense to develop a board like this 
just to use a dozen of them. We made the digitizers ourselves, and that 
was already a big deal for our small team.


     Could you expand on what IIO is for DAQ devices? You have excited 
some hidden neurons in my brain :-) Is it something that allows writing 
drivers in user space?
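     From a very quick look on my side, IIO devices seem to appear
under /sys/bus/iio/devices/, and a single sample can apparently be read
like a file, as in the sketch below (the device and channel names are
examples and depend on the actual driver). But is that all there is to
it, or is there a buffered interface for high rates?

    /* Reading one sample from an IIO channel through sysfs.
     * iio:device0 and in_voltage0_raw are example names. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen(
            "/sys/bus/iio/devices/iio:device0/in_voltage0_raw", "r");
        if (!f) { perror("fopen"); return 1; }

        long raw;
        if (fscanf(f, "%ld", &raw) == 1)
            printf("raw sample: %ld\n", raw);
        fclose(f);
        return 0;
    }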


     Cheers.
                             Didier