Bookworm: Pi 4B
Last night in an idle moment I decided to perform a routine upgrade.
This involved the kernel.
Unfortunately, after ten minutes the upgrade had not installed, and after terminating it the whole machine would no longer boot.
I downloaded the latest image of Raspios, and that booted, but the
kernel panicked halfway through.
I went back to the original image, installed that and it finally booted again.
When I went to upgrade, a lot of packages were 'held back'. I guess
someone discovered the upgrade was a bricker.
Bastards. I was up all night rebuilding that Pi..
The Natural Philosopher <tnp@invalid.invalid> wrote:
Bastards. I was up all night rebuilding that Pi..

I'm running Bookworm on two Pi4s without any issues.
On 19/12/2023 13:35, Chris Green wrote:
I'm running Bookworm on two Pi4s without any issues.

The point is that about 4 hrs after all this happened, the upgrade on the
new install held back the kernel upgrade...
So I just got caught. Between a bad upgrade and the powers that be
blocking it.
And the latest Bookworm image didn't boot either.
So something rotten in the state of Denmark, to be sure. So take care. I
am sure it will all be fixed soon.
Bookworm: Pi 4B
I don't know about the latest one, but this is staying on all my Pis
until I find out otherwise. Turning power saving off greatly reduces the latency on all my monitoring tasks.
Create /etc/udev/rules.d/71-wifi_power_save_off.rules containing the following single line:-
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="brcmfmac", KERNEL=="wlan0", RUN="/sbin/iw dev wlan0 set power_save off"
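A quick way to confirm the rule has taken effect after a reboot, assuming
the interface really is wlan0 as in the rule above:

  iw dev wlan0 get power_save

which should report that power save is off.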
---druck
The Natural Philosopher <tnp@invalid.invalid> writes:
When I went to upgrade, a lot of packages were 'held back'. I guess
someone discovered the upgrade was a bricker.
Packages being _held_ back means the local administrator has blocked
their upgrade with ‘apt-mark hold’ (or equivalent).
Packages being _kept_ back means that a package can’t be upgraded due to
a dependency issue (e.g. because the administrator only asked for a
partial upgrade, but the new version of the package depends on a package
that isn’t installed).
Neither reflects any kind of upstream decision to block upgrades.
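For what it's worth, a hold is something you set and clear yourself with
apt-mark; a minimal sketch (the package name is just an example):

  sudo apt-mark hold raspberrypi-kernel     # block upgrades of this package
  apt-mark showhold                         # list everything currently held
  sudo apt-mark unhold raspberrypi-kernel   # allow upgrades again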
The Natural Philosopher wrote:
The point is that about 4 hrs after all this happened, the upgrade on the
new install held back the kernel upgrade...
So I just got caught. Between a bad upgrade and the powers that be
blocking it.
Not sure if you are aware of the upgrade process.
Usually an upgrade should be done as update, then upgrade, followed by
dist-upgrade (or just update + full-upgrade). This way the base packages
get upgraded too.
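In apt terms that routine is roughly the following (the usual sequence,
run with sudo or as root):

  sudo apt update         # refresh the package lists
  sudo apt upgrade        # upgrade installed packages without removing anything
  sudo apt full-upgrade   # also allow new packages and removals (dist-upgrade in apt-get terms)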
The Natural Philosopher wrote:
??? Sorry...English pliz!
But with induction, one never knows the truth. It is always speculation.
No speculation in the following:
1. you never break an upgrade (problems pre-programmed)
2. when you break an upgrade you never reboot (even more problems
pre-programmed);
you should have solved the issues beforehand and then reboot
I am surprised you expected any other outcome. Better think of a rescue
strategy to avoid such outcomes - i.e. backup/restore, or clone to
another SD and try there first. I do it with borg and for the Pi it is
diskless anyway.
On 20 Dec 2023 at 08:55:14 GMT, "Deloptes" <deloptes@gmail.com> wrote:
you should have solved the issues beforehand and then reboot
There were no problems to be solved. Later, there were. But it was too late by
then.
I am surprised you expected any other outcome.
You mean, whenever an update/upgrade is done, you expect the computer to be broken afterwards?
On 20/12/2023 12:30, TimS wrote:
You mean, whenever an update/upgrade is done, you expect the computer to
be broken afterwards?

LOL! The reason why I didn't is because it is only the second time it's
ever happened with Linux, and the first time wasn't critical. The wifi
stopped working, that's all. Simply went back to the previous kernel..
Linux is generally really really good about upgrades. Way better than
Apple or Microsoft.
Richard Kettlewell <invalid@invalid.invalid> wrote:
Packages being _held_ back means the local administrator has blocked
their upgrade with ‘apt-mark hold’ (or equivalent).
Neither reflects any kind of upstream decision to block upgrades.
Ubuntu is using 'held back' for phased updates: https://ubuntu.com/server/docs/about-apt-upgrade-and-phased-updates
Are Debian or Raspberry Pi OS also using that mechanism?
Linux is generally really really good about upgrades. Way better than
Apple or Microsoft.

Microsoft, perhaps, given the posts I've seen here about needing to take
a backup first, or of Windows erasing the disk first before doing the
upgrade.
On 20/12/2023 13:46, TimS wrote:
Microsoft, perhaps, given the posts I've seen here about needing to take
a backup first, or of Windows erasing the disk first before doing the
upgrade.

As usual, you hear the posts about the problems, nothing from the very
many users for whom updates are without issue. Backing up regularly is
good practice with any OS.
On 19/12/2023 12:52, The Natural Philosopher wrote:
Bookworm: Pi 4B
Is this just 64bit Raspios stuff? I have 32bit Raspios Lite on a
PiZeroW and have not had any of these problems. Currently Bookworm 12.1
and Kernel 6.1.21+
On 20 Dec 2023 at 15:11:23 GMT, "David Taylor" <david-taylor@blueyonder.co.uk.invalid> wrote:
As usual, you hear the posts about the problems, nothing from the very
many users for whom updates are without issue. Backing up regularly is
good practice with any OS.

I didn't say I didn't back up regularly. I said I didn't do a special
backup just because I'm updating the machine.
On 20/12/2023 16:08, TimS wrote:
I didn't say I didn't back up regularly. I said I didn't do a special
backup just because I'm updating the machine.
It's a very good idea to do one before updating, as it involves an awful
lot of writing to the SD card, and if it's getting close to its wear
level limits, this could push it over into failure.
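If you want a belt-and-braces copy before a big upgrade, one rough
approach is to image the card from another Linux box with the SD card in
a USB reader (the device name is only an example - check with lsblk
first):

  sudo dd if=/dev/sda of=pi-backup.img bs=4M status=progress conv=fsync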
On 20 Dec 2023 at 21:08:31 GMT, "druck" <news@druck.org.uk> wrote:
It's a very good idea to do one before updating, as it involves an awful
lot of writing to the SD card, and if it's getting close to its wear
level limits, this could push it over into failure.
I was responding to a comment that macOS upgrades were sub-optimal, not talking about Pi upgrades.
"Mac-OS" is, underneath, BSD. As such it gains, or loses,
from the same issues as mainstream Linux/Unix insofar as
'packages/upgrades' go.
In article <uluoh7$ibt0$7@dont-email.me>, The Natural Philosopher <tnp@invalid.invalid> wrote:
Linux is generally really really good about upgrades. Way better than
Apple or Microsoft.
Do you have experience with this type of material?
Theo <theom+news@chiark.greenend.org.uk> writes:
Ubuntu is using 'held back' for phased updates:
https://ubuntu.com/server/docs/about-apt-upgrade-and-phased-updates
Are Debian or Raspberry Pi OS also using that mechanism?
Oh, that’s new. Confusing that they’ve used the same diagnostic. They
could in principle use it, but none of the Packages files I can see
contain the header for it, which suggests they aren’t. But I don’t know
where TNP is getting his packages from.
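One rough way to check locally, assuming apt is keeping uncompressed
index files under /var/lib/apt/lists (they may be compressed depending
on configuration):

  grep -l Phased-Update-Percentage /var/lib/apt/lists/*Packages

No output suggests none of the configured repositories publish the field.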
I think if you stay on the One True Apple path then things are probably
OK, just don't install any third party apps....
On Thu, 21 Dec 2023 10:15:53 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
I think if you stay on the One True Apple path then things are probably
OK, just don't install any third party apps....
I believe you are correct in this, I have never seen any issues
when following the OTAP but third party stuff is only tolerated not
supported and it shows.
On 20 Dec 2023 at 21:08:31 GMT, "druck" <news@druck.org.uk> wrote:
On 20/12/2023 16:08, TimS wrote:
I didn't say I didn't back up regularly. I said I didn't do a special backup
just because I'm updating the machine.
It's a very good idea to do one before updating, as it involves an awful
lot of writing to the SD card, and if it's getting close to its wear
level limits, this could push it over into failure.
I was responding to a comment that macOS upgrades were sub-optimal, not talking about Pi upgrades.
On Thu, 21 Dec 2023 02:22:31 -0500
"56g.1183" <56g.1183@ztq4.net> wrote:
"Mac-OS" is, underneath, BSD. As such it gains, or loses,
from the same issues as mainstream Linux/Unix insofar as
'packages/upgrades' go.
Not really. MacOS is the Mach kernel, a POSIX userland derived
mostly from FreeBSD, a proprietary GUI and package system.
There is a lot of variety among package systems in the unix world.
They are not all equal.
Some package systems such as MacOS and FreeBSD use a central build system that ensures consistency across the entire package set. Others take contributed builds that do not.
There are rumors that Winders is headed in the same direction
because what-is has become just a total Gordian Knot and they
can't maintain it or predict interactions.
Part of the
problem is the programmers - they link their code to
VERY VERY specific versions of libs, alas OTHER ones
also do so - and their needs/requirements are not going
to be the same all the time. If you don't have lib-XYZ
version 8.33.04 then they freak.
On 2023-12-22 05:49, 56g.1183 wrote:
If you don't have lib-XYZ version 8.33.04 then they freak.
How do you do that in practice?
When I link with, say, PostgreSQL's libpq, which is the native lib for
accessing the database as a client, I add '-lpq' to the link command.
That is it. How do I tie it to a specific version of the lib?
On 2023-12-22 05:49, 56g.1183 wrote:
There are rumors that Winders is headed in the same direction
because what-is has become just a total Gordian Knot and they
can't maintain it or predict interactions.
Any links to confirm that?
Björn Lundin <bnl@nowhere.com> wrote:
How do you do that in practice?
When I link with say PostgreSQL's libpq
which is the native lib for accessing the database as a client,
I add '-lpq' to the link command.
That is it. How do I tie it to a specific version of the lib?
As with all the best options, I can't find it documented in the
GCC manual, but I think it's something like "-l:libxyz.so.1.2.3"
to specify an explicit shared object file to link against.
Ah OK, it's mentioned in the ld(1) man page, but not in the GCC
manual. But gcc will also pass it through to ld, I think.
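A rough sketch of the difference, with library names that are only
illustrative:

  gcc main.c -lpq            # link against whatever libpq.so the linker finds
  gcc main.c -l:libpq.so.5   # ask ld for a file with exactly that name

Even then the dynamic linker resolves by soname at run time, so the full
x.y.z version isn't really pinned by the link step alone.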
The complaint probably wasn't really about that though, rather
about new versions of libraries changing their ABI and thereby
forcing users to match library versions to the program they want
to use. Really it's the fault of the library developers more than
the application developers, although it might sometimes be fair to
blame the latter for choosing to use unstable libraries in the
first place. It's hardly a rare problem though, with OpenSSL being
a prime example.
Also the Debian package manager can be unnecessarily picky about
library versions sometimes, which I think is the trouble that a
previous incarnation of 56g.1183 complained about at length once
before.
You're mostly correct here. The prob is that anytime you apt install
anything the source of the desired pgm is compiled on YOUR box.
The dependencies list is part of the info in what you're installing.
On 12/22/23 8:21 AM, Björn Lundin wrote:
On 2023-12-22 05:49, 56g.1183 wrote:
There are rumors that Winders is headed in the same direction
because what-is has become just a total Gordian Knot and they can't
maintain it or predict interactions.

Any links to confirm that?
Not as such ... I just "hear things", 'rumors', as said.
It'd be a big Top Secret at M$ regardless.
However I think it's their ONLY sane path. Apple knew
this some time ago. M$ has been stubborn - but it's
still an increasing functional/security nightmare and
SOMETHING needs to be done.
"56g.1183" <56g.1183@ztq4.net> wrote:
The crux of the proverbial biscuit lies with the practice of
re-compiling apps/updates when you go to install them.
I'll say it again - this does NOT happen. You have thoroughly
misunderstood the root cause of the problem, which is that the available
packages do not form a coherent set in most (all?) Linux
distros. Instead it is easy to find yourself installing package A
which depends on library L < 1.0 and package B which depends on
library L > 2.0.
This does not happen with FreeBSD packages because they are all built
together as a consistent set. The downside of this approach is that the
package build takes several days and so the rate of package updates is
limited.
Windows is not immune to this - in that world it is known as DLL Hell.
"56g.1183" <56g.1183@ztq4.net> writes:
However I think it's their ONLY sane path. Apple knew
this some time ago. M$ has been stubborn - but it's
still an increasing functional/security nightmare and
SOMETHING needs to be done.
This idea is detached from reality. Microsoft are committed to backward compatibility with existing application software, and maintaining that
in such a fundamental rewrite is not possible.
https://www.hyrumslaw.com/ is relevant here.
Apple have never had the same compatibility goals, and have regularly
killed application software backward compatibility.
On 23/12/2023 09:50, Ahem A Rivet's Shot wrote:
I'll say it again - this does NOT happen.
It sort of does with kernels ... AIUI the modules are linked at least.
Though not necessarily compiled, as such.
The world is changing, and what looks to be relevant is that ARM-based
hardware is where consumers are going. And you can't have legacy WINTEL
It is not beyond the bounds of credibility for Microsoft to create an
API with exactly the same calls into it as in Windows, and call that Microsoft X Windows or whatever, and *sell* that to run on
Linux. Programs would need relinking with that library in order to
create linux executables, but that wouldn't be a huge issue for most
vendors.
The Natural Philosopher <tnp@invalid.invalid> writes:
It is not beyond the bounds of credibility for Microsoft to create an
API with exactly the same calls into it as in Windows, and call that Microsoft X Windows or whatever, and *sell* that to run on
Linux. Programs would need relinking with that library in order to
create linux executables, but that wouldn't be a huge issue for most vendors.
It’d be a crazy thing to do. Huge amounts of effort to maintain compatibility, compared to the much easier strategy of just keeping the existing Windows codebase trundling along. Which is, demonstrably, what they’re actually doing.
And for what? The end result would be a Windows API implementation on
top of a kernel designed for a completely different application layer.
It is not beyond the bounds of credibility for Microsoft to create an
API with exactly the same calls into it as in Windows, and call that Microsoft X Windows or whatever, and *sell* that to run on Linux.
Programs would need relinking with that library in order to create linux executables, but that wouldn't be a huge issue for most vendors.
And if that relieves Microsoft of a layer of development they really
don't want to do, and allows them to concentrate on what brings in the
money, I don't see why they wouldn't.
What would be the worst issue is hardware drivers which would inevitably
have to change as I don't think a compatibility shim would really work.
Full backwards compatibility for .exes is simply down to providing
e.g. Virtualbox as standard equipped with whatever legacy version of
windows people wanted. Until Intel chips become obsolete and the whole
world goes ARM...
The more interesting question is where industrial computing is going.
Apple still hold sway in the graphics and sound processing areas, but
when it comes to CAD/CAM I don't think Windows really has a rival.
The real issue for me if I was in charge of Microsoft, after the failure
to crack mobile phones or fondleslabs is 'where are we going? What do we
have that is a unique selling point? And how do we leverage that into
future sales?'
Basically if existing applications could be rapidly re-linked, or
recompiled to run on 'Windows on Linux on Intel, with legacy Virtual
box' or 'Windows on Linux, on ARM' I think the apps guys would be happy.
The hardware guys would be happy too, as all of their kit would be
obsolete, so they would just bring out Linux drivers for *new* kit.
On 23 Dec 2023 at 12:06:57 GMT, "The Natural Philosopher" <tnp@invalid.invalid> wrote:
The world is changing, and what looks to be relevant is that ARM based
hardware is where consumers are going. And you cant have legacy WINTEL
apps on that anyway.
Sure you can - and do. SWMBO has an M1 Mini - which of course has an ARM
CPU. All her old Intel-based software runs with no issues. Of course
that means not just having Rosetta, but also all the libs and frameworks
have to be supplied in both Intel and ARM binary formats. That means
twice as much testing at the dev stage for new OS versions too. Which is
why at some point a future OS version will only come with the ARM
versions, and all your old apps will become history. The alternative
would have been to keep frameworks/libs available in 32/64-bit versions,
and in PowerPC/Intel/ARM versions. So, 5 or 6 versions of all libs and
frameworks.
Windows is going to face something similar too, if they really want to push moving to ARM.
"56g.1183" <56g.1183@ztq4.net> writes:
On 12/22/23 8:21 AM, Björn Lundin wrote:
On 2023-12-22 05:49, 56g.1183 wrote:
There are rumors that Winders is headed in the same directionAny links to confirm that ?
because what-is has become just a total Gordian Knot and they can't
maintain it or predict interactions.
Not as such ... I just "hear things", 'rumors', as said.
It'd be a big Top Secret at M$ regardless.
However I think it's their ONLY sane path. Apple knew
this some time ago. M$ has been stubborn - but it's
still an increasing functional/security nightmare and
SOMETHING needs to be done.
This idea is detached from reality. Microsoft are committed to backward compatibility with existing application software, and maintaining that
in such a fundamental rewrite is not possible.
https://www.hyrumslaw.com/ is relevant here.
Apple have never had the same compatibility goals, and have regularly
killed application software backward compatibility.
Time to legally NEGATE those "I Accept" legal protections
in Winders. Joe Average, indeed Corp Average, does NOT know
what it's agreeing to. Individuals/Corps/Govt deserve
umpteen BILLIONS in compensation for its CRAP system.
In any case, I just don't think M$ really has a realistic CHOICE in
the short/medium term. Winders is a TOTAL KLUDGE at this point and
there's NO closing all the security and functional gaps.
Winders has constantly proven itself highly vulnerable to a variety of attacks. Govt/health, even large corp interests, have been seriously compromised. It's almost become "normal" at this point - though it
SHOULDN'T be. M$ keeps greasing its politicians ... but I'm not sure
how long that strategy can continue to protect them as this crap
escalates out of control. The new, fucked, world situation is likely
to push all this over the edge as more State-funded players get
involved.
On Mon, 25 Dec 2023 20:02:07 GMT
Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
back door to me - but then I've always maintained that automatic
updates are one of the biggest potential security holes.
It was, now there's SaaS.
On 2023-12-25, Ahem A Rivet's Shot <steveo@eircom.net> wrote:
It was, now there's SaaS.
Yup. See my .sig below.
On Wed, 27 Dec 2023 04:06:05 GMT
Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
Yup. See my .sig below.
Hmm - sooner or later they're going to realise that ...
a) Data centres are expensive and unpopular
b) Users have oodles of unused compute and store resources
c) Many of them have high bandwidth internet connections
d) Distributed architectures can be very robust
... and start parking VMs in customers' computers and lacing them
together in VPNs to provide the services they're selling. Your
downloaded application will in fact be a remotely administered
hypervisor.
"Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
Your downloaded application will in fact be a remotely administered
hypervisor.
How do they propose to install that on my machine without anyone
noticing?
On 27/12/2023 06:11, Ahem A Rivet's Shot wrote:
On Wed, 27 Dec 2023 04:06:05 GMT
... and start parking VMs in customers computers and lacing them
together in VPNs to provide the services they're selling. Your
downloaded application will in fact be a remotely administered
hypervisor.
aka a botnet
On Wed, 27 Dec 2023 11:11:56 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
aka a botnet
Yes but hopefully one with rules that you choose to join rather than one that just takes root in your systems.
On Sat, 23 Dec 2023 09:50:46 +0000, Ahem A Rivet's Shot
<steveo@eircom.net> wrote:
This does not happen with FreeBSD packages because they are all
built together as a consistent set. The downside of this approach is
that the package build takes several days and so the rate of package updates is limited.
Although presumably you can get into the same situation with external /
third party / private packages.
And with any good package management system - I include dpkg (even
though that is typically binary based), portage, and FreeBSD's ports -
it's not very common at all.
On 27 Dec 2023 at 16:26:44 GMT, "Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
You install the cloud application suite
No I don't. I've not completed iCloud setup on my iPhone or any of my
Macs. End of.
On 27 Dec 2023 at 06:11:40 GMT, "Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
Your downloaded application will in fact be a remotely administered
hypervisor.
From an IT management viewpoint, it is easier to manage centralised
servers and smart terminals. I assumed that was the way things had gone
over the last decade? Cheap, low cost PCs, with serious computations
done on a server or the cloud..
On 2023-12-27, TimS <tim@streater.me.uk> wrote:
How do they propose to install that on my machine without anyone
noticing?
https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
for instance...
On 27 Dec 2023 11:30:52 GMT
TimS <tim@streater.me.uk> wrote:
How do they propose to install that on my machine without anyone noticing?
You install the cloud application suite; it consists of the hypervisor,
which phones home, adds your resources to the pile and offers you all
the cloud applications you just signed up for.
"56g.1183" <56g.1183@ztq4.net> writes:
In any case, I just don't think M$ really has a realistic CHOICE in
the short/medium term. Winders is a TOTAL KLUDGE at this point and
there's NO closing all the security and functional gaps.
Linux and macOS also have a steady stream of vulnerabilities, so did the other Unix platforms when they were still relevant. Your hypothetical
rewrite of Windows on a Unix kernel would not solve anyone’s security issues, it would just introduce an extra layer of complexity for vulnerabilities and other defects to nest in.
Widely-discussed estimates in 2019/2020 were that about 70% of vulnerabilities were memory safety issues, so a more realistic option
for improving security is to rewrite key components into memory-safe languages. I don’t know what Apple or Microsoft’s (OS-level) response to this is but Linux is experimenting with support for Rust in the kernel,
which is pretty promising.
There’s a lot of other OS components than kernels, though, and a lot of vulnerabilities in applications; so don’t expect that 70% figure to fall rapidly even when the kernel situation does start to improve.
If you're REALLY paranoid
then a native ADA compiler, even though it's just a HATEFUL
language to use (no wonder Defense projects go 10X over
budget and are 10X later than promised ......)
Winders is just FULL of bad code that allows overflows to do
their evil. And no, it won't/can't "just be fixed-up" because
nobody even knows how it all WORKS anymore. Kludges on top
of kludges on top of kludges going back into the 80s.
On 2023-12-28 07:51, 56g.1183 wrote:
If you're REALLY paranoid
then a native ADA compiler, even though it's a just HATEFUL
language to use (no wonder Defense projects go 10X over
budget and are 10X later than promised ......)
That is hardly because of the language.
NVIDIA decided to leave C/C++ behind and use Ada/SPARK instead (where
SPARK is a subset of Ada with proofs - that is, mathematical proof that
the code meets some criteria, such as no runtime errors, given that the
hardware does not break).
<https://www.adacore.com/uploads/techPapers/222559-adacore-nvidia-case-study-v5.pdf>
Some nice quotes from the paper
"Evaluating return on Investment (ROI) based on their
results, the POC team concluded that the engineering
costs associated with SPARK ramp-up (training,
experimentation, discovery of new tools, etc.) were
offset by gains in application security and verification
efficiency and thus offered an attractive trade-off."
I think that is enough to counter your statement above.
Besides, the Ada mandate was removed in the late 1990s.
The cost explosion you refer to is with C/C++. Not with Ada.
And this is a general feeling you get when using Ada/SPARK:
“It’s very nice to know that once you’re done writing
an app in SPARK—even without doing a lot of testing
or line-by-line review—things like memory errors,
off-by-one errors, type mismatches, overflows,
underflows and stuff like that simply aren’t there,” Xu
said. “It’s also very nice to see that when we list our
tables of common errors, like those in MITRE’s CWE
list, large swaths of them are just crossed out. They’re
not possible to make using this language.”
IBM reinvented itself as a software and services house when the
integrated circuit meant any damned fool could build a mainframe or minicomputer.
IBM's mainframes today are PCs or blades running Linux... but they can
still run RPG and COBOL. And all those legacy apps.
I think Microsoft needs to do similar.
Winders is like that. They need to simply go around and *write down*
every single call into windows, what it is supposed to do AND what it actually *does*, which is probably in itself a dirty man year of work
for an intern, and then hand that specification over to a team of
developers to recreate the API.
IF Windows wants to stay in the commercial desktop arena.
In terms of computing for the numpties, Apple is the ultimate dumbed
down consumer product, and they can't compete.
On 28 Dec 2023 at 11:34:22 GMT, "The Natural Philosopher" <tnp@invalid.invalid> wrote:
I think Microsoft needs to do similar.
But they had that with Windows NT; perhaps they still have. Written by Dave Cutler of VAX/VMS fame, IIRC. But then they had to cripple it with the drive letter shit and a file system that doesn't allow an open file to be moved or deleted.
So even if they gave it Linux underpinnings, the user experience
would still be dreadful. After all, it's their mindset. Look at the ribbon in Office apps, whose contents simply vanish if you make a window narrower. And you do have to make a window narrower if, like most actual users, you only have one screen.
On Thu, 28 Dec 2023 11:34:22 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
Winders is like that. They need to simply go around and *write down*
every single call into windows, what it is supposed to do AND what it
actually *does*, which is probably in itself a dirty man year of work
for an intern, and then hand that specification over to a team of
developers to recreate the API.
In order to do that it would have to stand still for a year, I
suspect that's not a possibility. AIUI the API changes to support
application development. The organisational challenges would be minimising the API freeze period and a smooth cut over.
IF Windows wants to stay in the commercial desktop arena.
In terms of computing for the numpties, Apple is the ultimate dumbed
down consumer product, and they cant compete.
Yet Apple is also the choice of many software developers when
corporate compliance/commercial compatibility is on the requirements list.
On 28/12/2023 12:24, Ahem A Rivet's Shot wrote:
Yet Apple is also the choice of many software developers when
corporate compliance/commercial compatibility is on the requirements
list.
Yes, because it's a reasonably supported system, not because it has a
great look and feel.
Remember that what an app developer wants is a stable supported program launcher ONLY.
Writing in some 'foolproof' language is no substitute for a proper
testing and modification feedback loop to consistently test and improve
the product, and the danger is that people will think that it is.
On 28/12/2023 12:07, The Natural Philosopher wrote:
Writing in some 'foolproof' language is no substitute for a proper
testing and modification feedback loop to consistently test and
improve the product, and the danger is that people will think that it is.
Absolutely no Ada programmer imagines for a nanosecond that it is "foolproof", or any kind of panacea.
It is, however, a superb tool for writing reliable software,
as is attested by a great deal of practical experience.
BTW, my KDF9 emulator, *available for the RPi*, is written in Ada 2012.
See <http://www.findlayw.plus.com/KDF9/emulation/>.
The problem with Microsoft is that there is no incentive to meet
a functional spec. Or be tested and upgraded and have bugs fixed.
All it has to do is *sell*.
Compared with - say - the avionics industry there is zero quality
control, in the ISO 9000 meaning of the word.
Even if they pay lip service to it.
Now what that means is that Microsoft is only really suitable for a
consumer market.
However I digress. In that my thesis is that its not so much the
language that you use, as the testing and quality control and feedback
into the design revisions that you implement, that in the end gets the
bugs down and the quality up, and that is something the Linux community
is pretty *good* at. Although it has no formal quality control, more a
culture of 'if it's *demonstrably* broken, fix it, as a matter of pride
and principle'.
If you look at - say aircraft - as the pinnacle of quality control, with every single incident subject to a 'what conditions, what pilots, what training, what maintenance (or lack of them) what design flaw,
ultimately contributed to this situation' then you can see the
ultimate goal - that this situation never happen again. Ever.
They know there is no 'perfect' but by hammering away at the
demonstrably *imperfect* - be it design, implementation, operational
procedures, maintenance, pilot training, air traffic training -
aircraft performance and safety just keep getting better.
I see this as a dichotomy between 'we will sell lots of kit, because
it's bloody well designed' and 'we will sell lots of kit, because we will rush
it to market, and spend all the money on chrome and tailfins, and other marketing, and the numpties will just buy it, because they are numpties'.
It's perfectly *possible* to write bug free programs in C. As the help I
got here with my daemon's memory leak demonstrated. Provided you test
the code and analyse the problems that result.
On 28/12/2023 12:12, TimS wrote:
But they had that with Windows NT; perhaps they still have. Written by
Dave Cutler of VAX/VMS fame, IIRC. But then they had to cripple it with
the drive letter shit and a file system that doesn't allow an open file
to be moved or deleted.
Those seem to be characteristics of the file system. I think the main crippling was done in NT 4.0 when they moved third party graphics
drivers into the Kernel layer (whatever that protected level is called
in Windows NT).
Although looking at support issues for the Arm Mali 610 on Linux, there
is talk about userland (userspace?) hacks, which suggests the debate
about where graphics drivers should live exists in Linux too.
So even if they gave it Linux underpinnings, the user experience would
still be dreadful. After all, it's their mindset. Look at the ribbon in
Office apps, whose contents simply vanish if you make a window narrower.
And you do have to make a window narrower if, like most actual users,
you only have one screen.
I thought Windows NT was pretty good. You seem to be picking on GUI ergonomics as if that is a measure of OS quality. The noticeable
difference between Windows NT and Linux for me was always driver
support, in particular GPU drivers.
FWIW, Gnome Desktop + Pi OS on the rPi5 feels pretty solid, reminiscent
of Windows quality.
On 28/12/2023 16:13, moi wrote:
Absolutely no Ada programmer imagines for a nanosecond that it is
"foolproof", or any kind of panacea.
It is, however, a superb tool for writing reliable software,
as is attested by a great deal of practical experience.

Or is it that the people who use it also care deeply about reliable
software, and have quality control systems in place and actually test
their software?
On 28/12/2023 13:41, Pancho wrote:
FWIW, Gnome Desktop + Pi OS on the rPi5 feels pretty solid, reminiscent
of Windows quality.
"Windows" and "quality" in the same sentence?
Automating programming, delegating complexity to the programming
tools, memory management etc, always seemed to me the best way to
avoid errors. Maybe also isolating complexity in standard components, standard patterns. Why people still use C is a mystery to me.
Richard Kettlewell wrote:
Pancho <Pancho.Jones@proton.me> writes:
Automating programming, delegating complexity to the programming
tools, memory management etc, always seemed to me the best way to
avoid errors. Maybe also isolating complexity in standard components,
standard patterns. Why people still use C is a mystery to me.
Multiple reasons:
- practical difficulty of migrating a large codebase to a new language
- commercial difficulty of migrating to a new language
- resistance to change
- ignorance of C’s deficiencies
Some people, like Linus Torvalds, do not like object oriented
programming. I am pretty sure he will never change his mind on that. On
top of this there is historically a theoretical divide between the
functional and object oriented approaches. Especially for hardware the
functional approach seems to be justified; for GUI stuff etc., however,
it is a masochistic thing (IMO).
Regarding "C's deficiencies" I do not agree. It is rather the lack of
programmer experience and the complexity of the code that leads to such
deficiencies.
To people like me who migrated from assembler to C, it was heaven. All
of the power of assembler but with a neat logical way to express the
more usual constructs. And local variables on the stack! Wow! clever
stuff. Obviously you had to worry about not overwriting the stack return addresses though.
We didn't expect to have our bottoms wiped for us by the language.
On 29/12/2023 17:15, The Natural Philosopher wrote:
To people like me who migrated from assembler to C, it was heaven. All
of the power of assembler but with a neat logical way to express the
more usual constructs. And local variables on the stack! Wow! clever
stuff. Obviously you had to worry about not overwriting the stack
return addresses though.
We didn't expect to have our bottoms wiped for us by the language.
Gosh, how macho! I bet you are really butch.
On 29/12/2023 17:34, moi wrote:
Gosh, how macho! I bet you are really butch.

Are you all right?
You sound a little other planetary.
Perhaps an aspirin and some warm Ovaltine?
On 29/12/2023 17:55, The Natural Philosopher wrote:
Are you all right?
You sound a little other planetary.
Perhaps an aspirin and some warm Ovaltine?

Is that the best a big strong guy who wipes his own bottom can do?
Yes, but in fact a whole swathe of NEW errors become possible, instead :-)
I am very dubious about languages that claim to solve coding problems.
I am sure they will find ways
to fuck anything up.
The Natural Philosopher wrote:
To people like me who migrated from assembler to C, it was heaven.
And the intention of the language. To be portable assembler.
Björn Lundin <bnl@nowhere.com> writes:
The Natural Philosopher wrote:
To people like me who migrated from assembler to C, it was heaven.
And the intention of the language. To be portable assembler.
I’m not sure that’s borne out by the history (e.g. [1]).
On 2023-12-30 21:19, Richard Kettlewell wrote:
Björn Lundin <bnl@nowhere.com> writes:
The Natural Philosopher wrote:
To people like me who migrated from assembler to C, it was heaven.
And the intention of the language. To be portable assembler.
I’m not sure that’s born out by the history (e.g. [1]).
Hmm, me surprised, and some googling later, I see that.
I've heard a lot of times that C was created because they (K&R) wanted
to write a portable OS for one of the early PDPs, instead of rewriting
it in assembler for that particular CPU. But it seems to be an urban
legend.
I stand corrected.
The problem was really that C was *so* good, that people did start to
write hugely complex stuff in it, and using people who wouldn't know a register or a stack pointer if it poked them in the eye or how DMA worked...to write them.
I think stdlib was all we got. String handling mainly. And possibly
floating point but on a 6809? Seriously?
The Natural Philosopher <tnp@invalid.invalid> writes:
I think stdlib was all we got. String handling mainly. And possibly
floating point but on a 6809? Seriously?
Sure, why not, it was routine on 8-bit micros. The Dragon 32 is an
example that used the 6809 specifically.
On 31/12/2023 09:59, Ahem A Rivet's Shot wrote:
On Sun, 31 Dec 2023 08:28:28 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
The problem was really that C was *so* good, that people did start to
write hugely complex stuff in it, and using people who wouldn't know a
register or a stack pointer if it poked them in the eye or how DMA
worked...to write them.
There were two other factors in the rise of C. You could get a C
compiler for just about anything, importantly there were several for
CP/M.
There weren't many decent languages that were that widely available. Also
almost every university CS course used it from very early on (Cambridge
being the notable exception because Martin Richards was there) so from
around 1980 there were a *lot* of people trained in C.
I thought university CS courses of the era avoided C and preferred more academic, pedagogical languages: Pascal, Prolog, Smalltalk, ML, Lisp.
The benefit of C was that it was closer to assembler and suited the low
power CPUs of the time, when programmers needed to think close to the
metal in order to achieve acceptable performance.
On the job, C was easy to learn and the 'C Programming Language' was a
very good manual.
I was taught both OO and functional programming before I ever met C at
work, which may be why I was positive about OO-Design, C++ when it came along.
To this day I still prefer my brackets (C, C++, C#) in Pascal style
rather than K&R, which I begrudgingly use with Java.
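To put the 'close to the metal' point in concrete terms - this is an illustrative sketch, not something from the thread - the classic K&R-style string copy shows the sort of C that maps more or less one-for-one onto the load, store, test and branch instructions of the small CPUs being discussed:

#include <stdio.h>

/* The idiom that made C feel like portable assembler: the loop body
 * is essentially "load, store, test, branch", roughly one machine
 * instruction per C operator on the 8-bit CPUs of the day. */
static void copy(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;                      /* empty body: the work is in the condition */
}

int main(void)
{
    char buf[32];

    copy(buf, "close to the metal");
    puts(buf);
    return 0;
}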
On 31/12/2023 12:09, The Natural Philosopher wrote:
On 31/12/2023 11:35, Pancho wrote:
On 31/12/2023 09:59, Ahem A Rivet's Shot wrote:
On Sun, 31 Dec 2023 08:28:28 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
The problem was really that C was *so* good that people did start to write hugely complex stuff in it, using people who wouldn't know a register or a stack pointer if it poked them in the eye, or how DMA worked, to write it.
There were two other factors in the rise of C. You could get a C compiler for just about anything, importantly there were several for CP/M. There weren't many decent languages that were that widely available. Also almost every university CS course used it from very early on (Cambridge being the notable exception because Martin Richards was there) so from around 1980 there were a *lot* of people trained in C.
I thought university CS courses of the era avoided C and preferred more academic, pedagogical languages: Pascal, Prolog, Smalltalk, ML, Lisp.
Compscis had their head in the clouds and their noses stuck up their arses. We learnt how to code without any 'courses'.
The benefit of C was that it was closer to assembler and suited the low power CPUs of the time, when programmers needed to think close to the metal in order to achieve acceptable performance.
On the job, C was easy to learn and the 'C Programming Language' was a very good manual.
All that.
I was taught both OO and functional programming before I ever met C at work, which may be why I was positive about OO-Design, C++ when it came along.
To this day I still prefer my brackets (C, C++, C#) in Pascal style rather than K&R, which I begrudgingly use with Java.
I think I do too.
Did Pascal have curlies?
TimS used the indentation style name, "Whitesmith", which I'd never
heard before, so I looked it up. When I look back to then, compared to
now, the biggest difference for me is that I can just look stuff up. I
had no idea what Whitesmith meant, but a minute later I know. Back then,
I would have to spend ages trying to find out, scour multiple books, or
live in ignorance.
Apparently, my “Pascal Style” is called Allman.
<https://en.wikipedia.org/wiki/Indentation_style#Allman_style>
Pascal didn't have curlies; it had begin/end, but when I started programming C I indented my curlies the same as begin/end in Pascal.
I had a friend with an Apple II and he said he couldn't code in C
because it had no curlies... on the kee bored.
Apple is Apple, I've never used any of their stuff.
To this day I still prefer my brackets (C, C++, C#) in Pascal style
rather than K&R, which I begrudgingly use with Java.
Whitesmith's for me.
I discovered I use Whitesmith style too.
To me it makes a block look like a block.
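For anyone who hasn't met the names, here is the same trivial block written in the three styles under discussion - K&R, Allman (the 'Pascal style' mentioned above) and Whitesmiths. The do_thing() function is just a made-up placeholder so the sketch compiles:

#include <stdio.h>

static void do_thing(void) { puts("block entered"); }

int main(void)
{
    int x = 1;

    /* K&R: opening brace on the same line as the control statement */
    if (x > 0) {
        do_thing();
    }

    /* Allman ("Pascal style"): braces on their own lines, flush with the if */
    if (x > 0)
    {
        do_thing();
    }

    /* Whitesmiths: braces indented to the same level as the block contents */
    if (x > 0)
        {
        do_thing();
        }

    return 0;
}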
On 31 Dec 2023 at 11:35:35 GMT, "Pancho" <Pancho.Jones@proton.me> wrote:
I thought university CS courses of the era avoided C and preferred more
academic, pedagogical languages: Pascal, Prolog, Smalltalk, ML, Lisp.
My postgrad CS course was 1967/68 and we had a small (but ample) exposure to Lisp, and also some flavour of Algol on the department's IBM 7094. There was some clumsiness about using the Algol implementation that is now lost in the mists of time - a character set limitation, perhaps.
I remember that. It had something to do with enclosing all keywords
in apostrophes in place of the bold-faced type in the reference books.
It was nasty both in appearance and typing.
On 31 Dec 2023 at 21:36:25 GMT, "Charlie Gibbs" <cgibbs@kltpzyxm.invalid> wrote:
I remember that. It had something to do with enclosing all keywords
in apostrophes in place of the bold-faced type in the reference books.
It was nasty both in appearance and typing.
Yes, yes !! That was it. Quite why we had to do that was a mystery.
On 31 Dec 2023 22:50:01 GMT
TimS <tim@streater.me.uk> wrote:
On 31 Dec 2023 at 21:36:25 GMT, "Charlie Gibbs" <cgibbs@kltpzyxm.invalid>
wrote:
I remember that. It had something to do with enclosing all keywords
in apostrophes in place of the bold-faced type in the reference books.
It was nasty both in appearance and typing.
Yes, yes !! That was it. Quite why we had to do that was a mystery.
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables. The
idea was that keywords were picked out by "stropping" them either by CASE
or with 'quotes' or by typeface (bold usually) instead of there being a set of keywords that could not be used as variable names.
On 01 Jan 2024 at 05:45:08 GMT, "Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables. The idea was that keywords were picked out by "stropping" them either by
CASE or with 'quotes' or by typeface (bold usually) instead of there
being a set of keywords that could not be used as variable names.
Yebbut that was never going to work, given the technology of the day. I
think I wrote one program in it and didn't bother after that.
On 1 Jan 2024 10:31:05 GMT
TimS <tim@streater.me.uk> wrote:
On 01 Jan 2024 at 05:45:08 GMT, "Ahem A Rivet's Shot" <steveo@eircom.net>
wrote:
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables. The
idea was that keywords were picked out by "stropping" them either by
CASE or with 'quotes' or by typeface (bold usually) instead of there
being a set of keywords that could not be used as variable names.
Yebbut that was never going to work, given the technology of the day. I
think I wrote one program in it and didn't bother after that.
But it *did* work - you could even type the code stropped with
quotes or UPPER CASE and print it stropped with bold face or italic.
On 01 Jan 2024 at 12:06:22 GMT, "Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
On 1 Jan 2024 10:31:05 GMT
TimS <tim@streater.me.uk> wrote:
On 01 Jan 2024 at 05:45:08 GMT, "Ahem A Rivet's Shot"
<steveo@eircom.net> wrote:
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables.
The idea was that keywords were picked out by "stropping" them either
by CASE or with 'quotes' or by typeface (bold usually) instead of
there being a set of keywords that could not be used as variable
names.
Yebbut that was never going to work, given the technology of the day. I
think I wrote one program in it and didn't bother after that.
But it *did* work - you could even type the code stropped with
quotes or UPPER CASE and print it stropped with bold face or italic.
Not on standard line printers of the day, which IIRC didn't even have lower case. And the quotes were just a pain.
On 2024-01-01, Ahem A Rivet's Shot <steveo@eircom.net> wrote:
On 31 Dec 2023 22:50:01 GMT
TimS <tim@streater.me.uk> wrote:
On 31 Dec 2023 at 21:36:25 GMT, "Charlie Gibbs" <cgibbs@kltpzyxm.invalid> wrote:
I remember that. It had something to do with enclosing all keywords
in apostrophes in place of the bold-faced type in the reference books.
It was nasty both in appearance and typing.
Yes, yes !! That was it. Quite why we had to do that was a mystery.
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables. The
idea was that keywords were picked out by "stropping" them either by CASE
or with 'quotes' or by typeface (bold usually) instead of there being a set
of keywords that could not be used as variable names.
That makes sense. Remember COBOL reserved words?
On 01 Jan 2024 at 19:04:13 GMT, "Charlie Gibbs" <cgibbs@kltpzyxm.invalid> wrote:
On 2024-01-01, Ahem A Rivet's Shot <steveo@eircom.net> wrote:
On 31 Dec 2023 22:50:01 GMT
TimS <tim@streater.me.uk> wrote:
On 31 Dec 2023 at 21:36:25 GMT, "Charlie Gibbs"
<cgibbs@kltpzyxm.invalid> wrote:
I remember that. It had something to do with enclosing all keywords
in apostrophes in place of the bold-faced type in the reference books.
It was nasty both in appearance and typing.
Yes, yes !! That was it. Quite why we had to do that was a mystery.
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables.
The idea was that keywords were picked out by "stropping" them either
by CASE or with 'quotes' or by typeface (bold usually) instead of
there being a set of keywords that could not be used as variable names.
That makes sense. Remember COBOL reserved words?
What SQLite does is sort of the opposite. If you want to define a column
or table with what is in fact a reserved word, then you have to put double-quotes around it in your definition. If you avoid reserved words
then nothing (except strings) needs quoting. Much better than what Algol
did. Really off-putting, it was.
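A minimal sketch of what that looks like in practice - the table and column names are invented for the example, with ORDER as the reserved word being quoted:

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed\n");
        return 1;
    }

    /* "order" is a reserved word (ORDER BY), so it has to be
     * double-quoted to serve as a column name; unquoted, SQLite
     * should reject the statement with a syntax error.          */
    if (sqlite3_exec(db,
                     "CREATE TABLE widgets ("
                     "  id INTEGER PRIMARY KEY,"
                     "  \"order\" INTEGER"
                     ");",
                     NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "create failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}

With the SQLite development headers installed this should build with something like cc demo.c -lsqlite3.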
On 1 Jan 2024 19:08:40 GMT
TimS <tim@streater.me.uk> wrote:
On 01 Jan 2024 at 19:04:13 GMT, "Charlie Gibbs" <cgibbs@kltpzyxm.invalid>
wrote:
On 2024-01-01, Ahem A Rivet's Shot <steveo@eircom.net> wrote:
On 31 Dec 2023 22:50:01 GMT
TimS <tim@streater.me.uk> wrote:
On 31 Dec 2023 at 21:36:25 GMT, "Charlie Gibbs"
<cgibbs@kltpzyxm.invalid> wrote:
I remember that. It had something to do with enclosing all keywords
in apostrophes in place of the bold-faced type in the reference
books. It was nasty both in appearance and typing.
Yes, yes !! That was it. Quite why we had to do that was a mystery.
It was so that the set of keywords in the language could be
extended without any risk of them ever being mistaken for variables.
The idea was that keywords were picked out by "stropping" them either
by CASE or with 'quotes' or by typeface (bold usually) instead of
there being a set of keywords that could not be used as variable names.
That makes sense. Remember COBOL reserved words?
What SQLite does is sort of the opposite. If you want to define a column
or table with what is in fact a reserved word, then you have to put
double-quotes around it in your definition. If you avoid reserved words
then nothing (except strings) needs quoting. Much better than what Algol
did. Really off-putting, it was.
That does not achieve what the Algol stropping achieves, which is
to ensure that code does not need to be changed when the language is extended. If a new keyword is added to Algol it doesn't matter if code uses that word as a variable, it will still compile correctly and do what it
ever did.
TimS <tim@streater.me.uk> writes:
"Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
That does not achieve what the Algol stropping achieves, which is to
ensure that code does not need to be changed when the language is
extended. If a new keyword is added to Algol it doesn't matter if
code uses that word as a variable, it will still compile correctly
and do what it ever did.
Funny how no other language I've come across in the intervening 55 or
more years has found it necessary to do that.
The problem still exists, other languages just don’t attempt to solve
it. All the impacted users get to modify their code to work around the
damage instead.
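A concrete C illustration of that cost - not taken from the thread: when C99 promoted 'restrict' to a keyword, previously legal C90 code that used it as an ordinary identifier stopped compiling and had to be edited, exactly the breakage that Algol-style stropping was designed to avoid.

/* Builds cleanly as C90:   gcc -std=c90 strop.c
 * Fails to build as C99:   gcc -std=c99 strop.c
 * because 'restrict' became a keyword in C99.                     */
#include <stdio.h>

int main(void)
{
    int restrict = 42;   /* an ordinary identifier in C90 */

    printf("%d\n", restrict);
    return 0;
}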
"Ahem A Rivet's Shot" <steveo@eircom.net> wrote:
That does not achieve what the Algol stropping achieves, which is to
ensure that code does not need to be changed when the language is
extended. If a new keyword is added to Algol it doesn't matter if
code uses that word as a variable, it will still compile correctly
and do what it ever did.
Funny how no other language I've come across in the intervening 55 or
more years has found it necessary to do that.
Did pascal have curlies?