I have been looking forward to the Raspberry Pi Pico 2 for a while. The increased speed will be welcome for one project I want to use it for.
However, according to what I've read there seem to be some missed opportunities. Corrections would be welcome but AIUI:
* There are still no analogue /outputs/ - which I would have thought
would have been cheap and easy to provide.
* Although there are now 3 PIO units, each one still has only 32 words of
program memory (which was very limiting on the Pico). I assume that is
because the RP2350 is supposed to be software compatible, meaning, among
other things, that it should run RP2040 PIO code.
Of course, it might be possible to run code in more PIO memory than 32
words, as long as the requisite jumps were back into the initial 32-word
space, but that would be awkward to use, and limiting. (A loading sketch
illustrating the 32-word constraint follows below.)
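A minimal loading sketch of that 32-word constraint, using the pico-sdk
hardware_pio calls (pio_can_add_program() / pio_add_program()); the two
instruction words and the try_load() wrapper are placeholders, not code from
any real project.

#include "hardware/pio.h"

/* Two placeholder instructions (valid encodings for "set pindirs, 1" and
   "jmp 0"), standing in for a real pioasm-generated program. */
static const uint16_t demo_instructions[] = {
    0xe081,
    0x0000,
};

static const pio_program_t demo_program = {
    .instructions = demo_instructions,
    .length = 2,
    .origin = -1,           /* relocatable: may land anywhere in the 32 slots */
};

/* Returns true only if the program still fits in this PIO block's shared
   32-word instruction memory alongside whatever is already loaded. */
bool try_load(PIO pio) {
    if (!pio_can_add_program(pio, &demo_program))
        return false;
    uint offset = pio_add_program(pio, &demo_program);
    (void)offset;           /* where it was placed; used when configuring a state machine */
    return true;
}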
* There are still no analogue /outputs/ - which I would have thought
would have been cheap and easy to provide.
I suspect the analogue drive circuit is not that simple - would take
area and may compromise power, especially idle power.
I would have thought that such a circuit could be gated to reduce/remove power consumption. A DAC needs far less circuitry (and no iteration) than an ADC.
I was hoping for analogue output pins initially in order to drive a VGA display.
Would PWM work for that, do you think? My guess is that it
would be too slow or jittery.
I have seen Pico set up to drive VGA through five resistors per colour.
It may be a Pico reference design but there's a great version of it at
https://vanhunteradams.com/Pico/VGA/VGA.html
The problem is that for three colour guns it takes fifteen pins!
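A small sketch in C of how that resistor-ladder arrangement produces the
analogue level: each of the five pins per colour drives a roughly
binary-weighted resistor into the monitor's 75 ohm termination, and the node
voltage follows from Millman's theorem. The resistor values below are
illustrative guesses, not the ones from the linked design.

#include <stdio.h>

static const double r_bit[5] = { 8000.0, 4000.0, 2000.0, 1000.0, 500.0 }; /* LSB first */
#define V_HIGH 3.3      /* GPIO high level, volts */
#define R_LOAD 75.0     /* VGA termination inside the monitor, ohms */

int main(void)
{
    for (int code = 0; code < 32; code++) {
        /* Millman: V = (sum of Vi/Ri) / (1/Rload + sum of 1/Ri), all bits included */
        double i_sum = 0.0, g_sum = 1.0 / R_LOAD;
        for (int bit = 0; bit < 5; bit++) {
            g_sum += 1.0 / r_bit[bit];
            if (code & (1 << bit))
                i_sum += V_HIGH / r_bit[bit];
        }
        printf("code %2d -> %.3f V\n", code, i_sum / g_sum);
    }
    return 0;
}

With these values the full-scale output works out at roughly 0.74 V, close to
the 0.7 V a VGA input expects.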
Or, you could bring back the old PDP-8 trick, of having the jump address
be in the *current page*. That means the upper bits are taken from the current program counter.
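A sketch in C of that addressing trick, with PDP-8-like parameters (12-bit
addresses, 128-word pages); the function and argument names are just
illustrative.

#include <stdint.h>

/* The instruction carries only a 7-bit offset plus a "current page" flag;
   when the flag is set the upper 5 address bits are borrowed from the PC,
   otherwise the reference is to page zero. */
static uint16_t effective_address(uint16_t pc, uint16_t offset7, int current_page)
{
    uint16_t page_bits = current_page ? (pc & 07600) : 0;  /* top 5 of the 12 bits */
    return page_bits | (offset7 & 0177);                   /* low 7 bits from the instruction */
}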
That must be damn annoying to program in assembler.
Probably slows compilers down a fair bit too.
IIRC 8086 was the same for conditional jumps.
In fact the whole small model thing was coding for a 64k page.
There were a few instructions that worked across pages...but my
memory is dim.
FAR and NEAR specifiers used with JMP distinguished inter-segment from
intra-segment jumps and calls. Data could also be accessed with the same
specifiers, hence there were five different memory models - tiny, small,
medium, large and huge - with different code and data accesses.
The obscenity was these qualifiers making their way into C source code - try
writing (or even reading) the declaration for a near pointer to a function
returning a far pointer to an array of functions returning near pointers to
integers.
Then realise that you *also* wanted this source code to be portable.
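An attempt at that declaration, as a sketch: it uses the old
Microsoft/Borland __near/__far extensions (not standard C), reads "array of
functions" loosely as "array of function pointers" since C has no arrays of
actual functions, and picks an arbitrary array size of 4.

int __near *(*(__far *(__near *entry)(void))[4])(void);

/* The same thing untangled with typedefs, which is how most people survived: */
typedef int __near *(*leaf_fn)(void);        /* pointer to function returning a near pointer to int */
typedef leaf_fn (__far *table_ptr)[4];       /* far pointer to an array of 4 such pointers */
typedef table_ptr (__near *entry_fn)(void);  /* near pointer to a function returning that far pointer */
entry_fn entry_again;                        /* same type as 'entry' above */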
Which was why one used macros to conditionally compile code on MSDOG
and other platforms. And there were macros for far pointers, MK_FP()
for example. I think I have the remains of a hex disk editor
somewhere in my archives written in C with some assembler using
Zortech C.
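A sketch of that sort of conditional compilation; MK_FP() really was a
<dos.h> macro on the 16-bit DOS compilers (Borland, Zortech and friends),
while the MSDOS test and the VIDEO_TEXT name here are just illustrative.

#ifdef MSDOS
#include <dos.h>
typedef unsigned char far *byteptr;
#define VIDEO_TEXT ((byteptr)MK_FP(0xB800, 0x0000))  /* colour text-mode video memory */
#else
typedef unsigned char *byteptr;
#define VIDEO_TEXT ((byteptr)0)                      /* nothing sensible to poke elsewhere */
#endif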
So I've had a rummage around, and the original diskedit sources aren't with
us anymore, but it looks like I ported it to Windows 95 and threw out all
the MSDOGisms.
I do wish I still had the original sources though, but they're lost in the
mists of time.
What happened to backups?
A bloody pain in the ass, all of it. Forget the 640K barrier -
I was much more concerned with the 64K barrier. I wound up
writing pointer normalization routines and all sorts of other
hacks to handle large tables - and still keep it compatible
with *n*x. I only recently stripped out all that crap.
Good riddance.
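A sketch of the kind of pointer-normalisation hack being described, for
real-mode x86 where the linear address is segment*16 + offset: fold the
offset down so pointer arithmetic cannot wrap within a 64K segment. Real
code went through compiler-specific FP_SEG/FP_OFF/MK_FP macros; this is just
the bare arithmetic.

void normalise(unsigned int *seg, unsigned int *off)
{
    *seg += *off >> 4;   /* move whole 16-byte paragraphs into the segment part */
    *off &= 0x000F;      /* keep only the remainder, so the offset stays below 16 */
}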
I don't recall them ever appearing in C source. They are not part of the
C language.
AIR you could compile for 'small model' or 'large model'
And with the early compilers I used, no attempt was made to think about whether a jump was near or far.
I think you got an assembler or linker error if a target was 'out of
range'
No they are not - but they did appear as extensions in every
8086/80286 compiler I ever used.
There were several models available (tiny had one segment shared for code
and data, small had separate code and data, then there were mixed small code
large data and vice versa). These set the memory mapping and the default
sizes of pointers, but the near and far keywords could override those
defaults in some models.
It was *horrible*, the 80386 was a breath of fresh air.
Not in any compiler I used.
It was in 16bit MS C Compiler, Watcom C, Borland C, Zorland/Zortech
C/C++.
Not if you were stuck programming for MS-DOS or Windows, unfortunately.
Sure it was. Except it was only giving the Intel market what the users of other CPUs (Motorola, NatSemi) had been enjoying for years.
I briefly programmed for MS-DOS at the same time as XENIX but I have managed
to avoid ever writing anything for Windows - I've almost managed to avoid
ever having to even use Windows, but Dell scuppered that record by buying a
PPOE, putting the B in org and making us all use Dells running Windows - so
I left.
Sadly my employers have customers (mainly in Asia) that
insist we provide the same software for both Linux and Windows. Quite
often we have issues making stuff as reliable on Windows as on Linux and
then we find that the customers ran up the Windows version once to check
it worked and then use the Linux versions. But they *must* have a
Windows version.
Sure there's always plenty of fresh air in the fields but this was
in the sewer.
Windows programs still use “Win32”.
Win32 is the name of the API.
Why is it not “Win64”?
The 64bit version is the same API compiled for 64bit instead of 32bit.
That’s the trouble. It hasn’t really adapted to the availability as standard
of 64-bit integers, for example.
Compare the POSIX APIs, where they were careful to use generic types like
“size_t” and “time_t”, so that the same code could be compiled, unchanged,
to work on both 32-bit and 64-bit architectures. Not something Windows code
can manage.
You can do this on Windows too, but they had to bastardise their C compiler
for people that hadn't. It's the only compiler that, on a 64-bit platform,
has long as 32 bits.
Windows:-
int=32 bits, long=32 bits, long long=64 bits
Everyone else in the bloody world:-
int=32 bits, long=64 bits, long long=64 bits
---druck
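A quick way to see which convention you are on (LLP64 on 64-bit Windows,
LP64 on most other 64-bit systems) - compile and run this on each platform:

#include <stdio.h>

int main(void)
{
    printf("int       : %zu bits\n", sizeof(int) * 8);
    printf("long      : %zu bits\n", sizeof(long) * 8);        /* the one that differs */
    printf("long long : %zu bits\n", sizeof(long long) * 8);
    printf("void *    : %zu bits\n", sizeof(void *) * 8);
    return 0;
}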
On Thu, 29 Aug 2024 09:32:49 +0100
Richard Kettlewell <invalid@invalid.invalid> wrote:
I don’t think I’d fault either decision though the fact that we’ve ended
up with two conventions does make writing/maintaining portable code a
bit more annoying,
Portable code should only rely on the standards not
implementations, some very weird possibilities are legal within the
standard.
There are always the int<n>_t types for when size matters.
druck <news@druck.org.uk> writes:
You can do this on Windows too, but they had to bastardise their C
compiler for people that hadn't. It's the only one that on a 64 bit
platform that has long as 32 bits.
Windows:-
int=32 bits, long=32 bits, long long=64 bits
Everyone else in the bloody world:-
(Almost everyone; Cray had 64-bit int.)
int=32 bits, long=64 bits, long long=64 bits
The Windows approach is well within what the C language spec allows, and
simplified the adaptation of existing Windows application code to 64-bit
platforms. The equivalent exercise in Linux needed attention to anything
that made (sometimes invisible) assumptions about the definition of long.
I don’t think I’d fault either decision though the fact that we’ve ended
up with two conventions does make writing/maintaining portable code a
bit more annoying, though not really any more so than the slightly
different set of things compilers warn about or the lack of GCC
compatibility from MSVC. I think MS should bow to the inevitable and
replace cl with Clang.
Ahem A Rivet's Shot <steveo@eircom.net> writes:
There are always the int<n>_t types for when size matters.
Life is not always that simple and declaring how things ‘should’ be does not fix a single line of code.
One of the public APIs we support largely uses ‘long’ and ‘unsigned long’ for integral values, which causes occasional issues with our cross-platform code. For example ‘unsigned long’ has the same size as ‘size_t’ on Linux, but not on 64-bit Windows.
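A sketch of the sort of mismatch described; api_write() stands in for a
hypothetical third-party call that traffics in unsigned long. On LP64 Linux
that is the same width as size_t, on 64-bit Windows it is only 32 bits, so a
large size_t would be silently truncated without the check.

#include <limits.h>
#include <stddef.h>

extern int api_write(const void *buf, unsigned long len);   /* hypothetical API */

int safe_write(const void *buf, size_t len)
{
    if (len > ULONG_MAX)          /* only ever true where size_t is wider, e.g. Win64 */
        return -1;                /* caller must split the request instead of truncating */
    return api_write(buf, (unsigned long)len);
}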
Very true - horse, water, drink.
Which is why assigning the value of a size_t to an unsigned long or
vice-versa is wrong.
No, it’s not necessarily wrong. If the value fits in the destination
type there’s nothing wrong with it. The results are well-defined and do
not change the value. You can look up the rules in the C standard.
Heh, yes. I worked for several years on a machine where a null pointer
wasn't all bits zero, and where char* was a different size to any other
pointer.
That rings vague bells, what was it ?
Richard Kettlewell <invalid@invalid.invalid> wrote:
No, it’s not necessarily wrong. If the value fits in the destination
type there’s nothing wrong with it. The results are well-defined and do
not change the value. You can look up the rules in the C standard.
What is wrong is making assumptions about the relative size of long
and size_t - AFAIK the standard makes no guarantees about that.
Note that it's only "wrong" if you care about portability - long
experience suggests that not caring about portability is a good way to
get bitten on the arse.
24 bit pointers were I think quite common, but isn't the 'null pointer'
*defined* to be (char *)0? Otherwise how could you test for it?
Most code that cares seems to use things like int_32 or long_64 where it
matters and macro-expand that on a per-target-hardware basis.
What a mess - Much simpler on Fortran, you just need to remember
which variable name spellings are floats.
The null pointer you get from (char *)0 (or similar constructions)
doesn’t have to be all bits 0.
https://c-faq.com/null/index.html
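A sketch of the distinction that FAQ entry makes: the constant 0 in a
pointer context always yields the null pointer, whatever bit pattern the
machine uses for it, so assignment and testing work everywhere; what is not
portable is assuming that all-bits-zero storage reads back as a null
pointer.

#include <stddef.h>
#include <string.h>

struct node { struct node *next; int value; };

void init_portable(struct node *n)
{
    n->next = NULL;            /* the compiler emits the machine's real null representation */
    n->value = 0;
}

void init_dubious(struct node *n)
{
    memset(n, 0, sizeof *n);   /* all-bits-zero: not guaranteed to read back as null */
}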
That rings vague bells, what was it ?
Prime. It was word, not byte, addressed, so a char* had to be bigger.
I used a Prime750 at Uni. But only undergrad tasks in Prime BASIC and some
Fortran. It seemed quite fast at the time in timeshare mode with plenty of
undergrads using it. But the CPU was only as fast as an 8MHz 68000!
That is the staggering thing. CPU performance in the mini era wasn't that
hot at all.
I see someone has made a Pi Pico emulate a range of 6502 based computers -
Apple II etc.
I am fairly sure a Pi Zero could outperform a 386 running SCO Unix... and
that was pretty comparable with - if not better than - a PDP-11.
The CPUs may not have had stunning performance but were generally quite
a bit quicker than the Z80/6502s of the day. The real performance came
from having disks and ISTR hardware assisted IO. i.e. the CPU didn't
have to poll or handle IRQs from each UART but there was something
helping. It's all so long ago now I forget the details. What I do
remember was it was around 1985 when someone lit the blue touch paper
and the performance of micros started rocketing. Though if you started
10 years before me there will have been something that was when
performance took off for you. I think everyone has some point in their
memory when things started to go whoosh!
In 1989 I was writing Z80 assembler to control medical gear. All the
code took about 45mins to cross assemble and link on a Unix system
running on a Vax 11/730. In 1990 we got a 25MHz 80386 running DOS and
the same source took under 3mins to cross assemble and link. The
bottleneck went from the time to build the code to the time to erase, download and burn the EPROMS.
Yes. I was writing C and assembler for a 6809, cross-compiled on a PDP-11.
We had PCs as serial terminals and text editors.
Compile was very slow compared to on a PC.
The thing was that until the 386, Intel CPUs didn't have the big boy
features. After that they did.
Even an old IBM mainframe could be emulated under AIX on a PC.
I did some work on a Vax running Unix too. Better, but still pretty awful.
Vaxen were much better running VMS!
C was shit, for not making types explicit ...
On Thu, 29 Aug 2024 21:33:28 +0100, druck wrote:
Yes stdint.h is your friend
Unless you have an elderly code base that still hasn’t caught up with
C99 ...
Or you were programming in C on an Analog Devices SHARC where char was 32 bits.
I'll bet that broke a lot of bad code :)
Still, even in that environment a compliant compiler should provide
int<n>_t types. They'd probably have to have horrendously inefficient
implementations, not dissimilar to bitfields in structs, but they should
exist. Woe betide anyone who thought they could put a char into an int16_t
safely though.
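A related sketch for modern compilers: where char is 32 bits there may be no
padding-free 16-bit type for int16_t to name, but C99's least-width types
must exist everywhere, so they are the safer fallback for portable code.
(The gcc 1.x-derived SHARC toolchain mentioned in this thread predates
<stdint.h> entirely, so this is an observation about current toolchains, not
that one.)

#include <stdint.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT              = %d\n", CHAR_BIT);
    printf("sizeof(int_least16_t) = %zu byte(s), %zu bits of storage\n",
           sizeof(int_least16_t), sizeof(int_least16_t) * (size_t)CHAR_BIT);
    return 0;
}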
ISTR the compiler was a custom version of gcc 1.xx. 26 years ago, so the
exact version has evaporated from my memory.
Your dates are slightly off. These gcc 1.x versions were between 1987 -
1993. I do remember using 2.7.2.3 with Linux 2.0 in 1997.
However, the SHARC stuff was definitely derived from gcc 1.x in 1998; it
struck me as very old, as 2.x had been out for a while by then.
I'd agree that gcc main line release would be around 2.7 in 1998. I
can remember I started writing software for a Strongarm based video
security system in 2001. By then ARM kernels were compiled/cross-
compiled with gcc 2.95.x. Then gcc 3.0 / 3.1 came out which was
better for userland code but kernels compiled with it would not run
so we stayed with 2.95 for some time.
However, the SHARC stuff was definitely derived from gcc 1.x in 1998
as it struck me as very old as 2.x had been out for a while by then.
This was the version of the compiler that ran on Windows. ISTR I used
a PII 200MHz probably running WinNT to develop on. The Unix systems
were reserved for ADA, Occam and other esoteric stuff ;-)
The 386 slaughtered most of the Unix minis of the time.
The PDP-11 was already a legacy predecessor of the VAX - did they even have
demand paging?