8 years ago / 112 comments: https://news.ycombinator.com/item?id=6156238
6 years ago / 22 comments: https://news.ycombinator.com/item?id=9584172
Otherwise, if you're in doubt after posting a story, you can click on the website name next to the title to see other stories from the same website, which is another way of spotting duplicates. Of course, this is easier with websites that don't get 20 stories a day.
I thought maybe post deduplication "expires", and links are let through for discussion again x weeks later. I also noticed dang posts these "previously discussed" links often, if not preempted by other users. I don't know whether to upvote or downvote those, because they contribute to the convo, but indirectly and with low effort. Therefore I figure an automation might be worth the hassle.
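If anyone wants to script it, HN's public Algolia search API makes the duplicate lookup trivial. A minimal sketch in Python (the example URL at the bottom is a placeholder):

    import requests

    # Sketch of a duplicate finder using the public Algolia HN Search API.
    # Restricting the search to the "url" attribute matches earlier
    # submissions of the same link.
    def previously_discussed(url):
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search",
            params={
                "query": url,
                "restrictSearchableAttributes": "url",
                "tags": "story",
            },
        )
        for hit in resp.json()["hits"]:
            print(f"{hit['created_at'][:10]} / {hit['num_comments']} comments: "
                  f"https://news.ycombinator.com/item?id={hit['objectID']}")

    previously_discussed("https://example.com/some-article")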
Previously discussed:
https://news.ycombinator.com/item?id=25227906
https://news.ycombinator.com/item?id=18865276
https://news.ycombinator.com/item?id=29024525
https://news.ycombinator.com/item?id=26308945
https://news.ycombinator.com/item?id=19157839
https://news.ycombinator.com/item?id=28355861
https://news.ycombinator.com/item?id=24278472
https://news.ycombinator.com/item?id=21897355
Personally, I think of him more like Palmer Eldritch, with three stigmata of a robotic right hand, artificial eyes, and steel teeth, who has returned from an expedition to the Prox system, in possession of a new alien hallucinogen Chew-Z to compete with Can-D.
https://en.wikipedia.org/wiki/The_Three_Stigmata_of_Palmer_Eldritch
Once you hear Jean-Michel Jarre's and Laurie Anderson's secret lyrics, you can’t unhear them:
https://www.youtube.com/watch?v=_x-v8KamefA
Heed the Android Sisters’ dire warning:
(One of which is by the dang robot itself ;-)
This comment has been a service of the dredmorbious bot, a/k/a Robby.
This is how our robot overlords begin the takeover: hiding in plain sight, pretending to be one of us, while secretly gaining more and more power over us.
Do they pay him with silicon upgrades? Does he dream of electric sheep?
Are we the electric sheep?
'They Live' was a warning...when we put on the google glasses will we see their real metal skin?
https://www.youtube.com/watch?v=NP8bOqTAco0
Album: Songs Of Electronic Despair (1984)
THE AWESOME FUTURISTIC KITSCH OF THE ANDROID SISTERS:
>Since 1982, the ZBS Foundation (ZBS= “Zero Bullshit”) has been producing a sci-fi/detective hybrid radio drama called Ruby, the ongoing adventures of Ruby the Galactic Gumshoe. The show is a fun listen, and since its history is documented elsewhere, we shan’t dwell on it here, as the series itself doesn’t concern us so much as does a pair of its supporting characters: The Android Sisters.
>Exactly like it says on the box, the Android Sisters are robotic “siblings”—conceit and name both lifted from Philip K. Dick’s Do Androids Dream of Electric Sheep?—whose role in the show is to deliver pointedly satiric songs, rendered in a unison speak-sing intonation by actresses Ruth Maleczech (sometimes credited as “Breuer”—married name?) and Valaria Wasilewski. Though it’s a pretty one-dimensional schtick, their acutely ‘80s synth songs were sufficiently listenable to merit an album in 1984. Released on the typically more rootsy Vanguard label (it was the home of Joan Baez and Buddy Guy, among others), Songs of Electronic Despair contained eleven goofy examples of what people in the ‘80s thought the future sounded like, and many of the songs directly address the themes of mechanization and alienation with which a lot of the synth musicians of the era seemed obsessed. Really, much of this stuff is in the same zone as the work Laurie Anderson was up to back then—and Anderson was once an artist in residence at the ZBS Foundation.
>A second album, Pull No Punches, was issued in 2003, but the most accessible music available from the Android Sisters is their 2004 best-of, which is still in print on CD.
Android Sisters Playlist:
https://www.youtube.com/watch?v=ab0vApijEBM&list=PLqc7dhQXRchKAZobMiT8pLjRTQxQC1ko5
It’s borderline gaslighting at this point. I imagine it’s causing some people to do some extra deep digging while they try to reassure themselves that they do actually know a thing or two.
Only 6 comments, though numerous updoots. I usually consider < 10 comments minor. HN does permit reposts after a seemly interval.
In my post I just picked the first mainstream media outlet (CBC). Today’s post is much better.
1. https://gigazine.net/gsc_news/en/20200720-cia-spies-xerox-machine/
He also has a talk about mining data from German Railway (Deutsche Bahn) which is good.
My comment was a regrettably rude way of asking if an English translation is available elsewhere, and fortunately it worked :)
The YouTube channel only exists because people didn't respect the unwritten rules and put the content on YouTube anyway. But we prefer to always distribute the content from its original source, also with privacy in mind.
The “unwritten” rules end up hugely restricting the dissemination of this, and other, awesome content on ideological grounds that not everyone feels as strongly about as you and the CCC.
So do upload it to YouTube and anywhere else, spread the knowledge and link to whatever you please.
I have a habit of scanning invoices/statements/manuals etc to pdf and found the above quite alarming. Though I don't use Xerox (my scanner is Epson Perfection V39), I never considered the possibility that a scan of a document can result in altered numbers. How do I know if my scanner could also be affected in case it uses similar underlying software/setup?
> In the next section, there is a short manual for you to reproduce the error and see if you are affected.
In my opinion, the only reasonable mode for JBIG2 is lossless, as the lossy mode of JBIG2 is, in general, prone to these character mangling issues. File sizes for lossless JBIG2 compression are already very low, so I would claim that using lossless is almost always worth it, unless character mangling is explicitly not a problem.
If however it uses lossy JBIG2 compression, it is quite possible that it suffers from the same problem, due to how lossy JBIG2 compression works.
Which brings up another point: How did Xerox's SW development process take place? Was it done in-house with frequent code reviews or was it outsourced to a low bidder and then thrown back over the wall with no real quality assurance process?
Codes reviews and unit tests aren't a substitute for QA or acceptance testing. It's great when code reviews do catch stuff like this, but that's not going to catch everything. To me this bug points to deficiencies in another part of the process.
PDF is far more common, and there are a million different ways you can write one, including with JBIG2 images.
I guess if the scanner offers a JBIG2 format option, it at least needs a test.
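For a first pass you don't even need to decode anything: JBIG2 streams inside a PDF are tagged with the /JBIG2Decode filter name, so a raw byte search flags candidates. A crude sketch in Python (a false positive is theoretically possible if the PDF merely contains that string somewhere, but it's good enough as a triage step):

    # Flag PDFs that contain JBIG2-compressed images; those are the ones
    # worth testing for character substitution. /JBIG2Decode is the PDF
    # filter name used for such streams.
    with open("scan.pdf", "rb") as f:
        data = f.read()

    if b"/JBIG2Decode" in data:
        print("Contains JBIG2 streams: test this scanner for mangling.")
    else:
        print("No JBIG2 streams found.")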
Who else used to use scanners?
Apps like Office Lens will undistort the pic and it just looks good enough. (App knows you are taking a picture of A4 page, so it knows how to make it look straight + white.)
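For the curious, the "make it look straight" part is just a four-point perspective warp once the page corners are found. A sketch with OpenCV, where the corner coordinates are made-up placeholders (a real app finds them via edge/contour detection):

    import cv2
    import numpy as np

    # Placeholder corner coordinates of the page in the photo, in the order
    # top-left, top-right, bottom-right, bottom-left.
    corners = np.float32([[120, 80], [980, 110], [1010, 1450], [90, 1420]])

    # Target: a flat A4 page at 150 dpi (1240 x 1754 pixels).
    w, h = 1240, 1754
    target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

    img = cv2.imread("photo.jpg")
    M = cv2.getPerspectiveTransform(corners, target)
    flat = cv2.warpPerspective(img, M, (w, h))
    cv2.imwrite("page_flat.jpg", flat)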
> CamScanner
And then it got bought and malware was added: https://news.ycombinator.com/item?id=20818177
These days on Android, Google Drive's scan functionality works just as well as CamScanner used to. I use it for a lot of cases where I'd previously use a proper scanner; it's good enough most of the time.
Or even Google Lens / Google Camera does a decent job of scanning in the special scan document mode, although you have to rely on the software actually recognizing that's what you're trying to do to get the button to show up to start the process.
Since then I've written an ImageMagick script that converts phone photographs of documents into reasonably convincing "scans."
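Not that script, but for anyone who wants the general recipe (grayscale, contrast stretch, slight skew, a bit of noise), here is a rough Pillow-based sketch of the same idea; filenames and tuning values are invented:

    import numpy as np
    from PIL import Image, ImageOps

    # Turn a phone photo of a document into something that passes for a scan:
    # grayscale, boosted contrast, a slight skew, and sensor-ish noise.
    img = Image.open("photo.jpg").convert("L")
    img = ImageOps.autocontrast(img, cutoff=2)         # push the paper toward white
    img = img.rotate(0.5, expand=True, fillcolor=255)  # slight skew, like a real scan
    arr = np.asarray(img, dtype=float)
    arr += np.random.normal(0, 6, arr.shape)           # mild Gaussian noise
    out = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    out.save("scan.pdf", resolution=150.0)             # Pillow can write single-page PDFs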
Just like there are printer-scanners that won't scan if the ink is empty.
https://en.wikipedia.org/wiki/ISmell
>The iSmell Personal Scent Synthesizer developed by DigiScents Inc. is a small device that can be connected to a computer through a Universal serial bus (USB) port and powered using any ordinary electrical outlet. The appearance of the device is similar to that of a shark’s fin, with many holes lining the “fin” to release the various scents. Using a cartridge similar to a printer’s, it can synthesize and even create new smells from certain combinations of other scents. These newly created odors can be used to closely replicate common natural and manmade odors. The cartridges used also need to be swapped every so often once the scents inside are used up. Once partnered with websites and interactive media, the scents can be activated either automatically once a website is opened or manually. However, the product is no longer on the market and never generated substantial sales. Digiscent had plans for the iSmell to have several versions but did not progress past the prototype stage. The company did not last long and filed for bankruptcy a short time after.
This Wired Magazine article is a classic Marc Canter interview. I'm surprised they could smell the output of the iSmell USB device over the pungent bouquet from all the joints he was smoking:
You've Got Smell!
https://www.wired.com/1999/11/digiscent/
>DigiScent is here. If this technology takes off, it's gonna launch the next Web revolution. Joel Lloyd Bellenson places a little ceramic bowl in front of me and lifts its lid. "Before we begin," he says, "you need to clear your nasal palate." I peer into the bowl. "Coffee beans," explains Bellenson's partner, Dexster Smith. […]
>"You know, I don't think the transition from wood smoke to bananas worked very well." -Marc Canter
The failed quest to bring smells to the internet (thehustle.co)
https://thehustle.co/digiscents-ismell-fail
https://news.ycombinator.com/item?id=17476460
DigiScent had a booth at the 1999 Game Developers Conference, with scantily dressed young women in skunk costumes.
I told them about a game called "The Sims" I had been working on for a long time, and was hoping to finish and release some time soon.
They unsuccessfully tried to convince me to make The Sims support the iSmell, and even gave me a copy of the SDK documentation, because they thought it would enrich the player's experience of all those sweaty unwashed sims, blue puddles of piss on the floor, stopped up toilets in the bathroom, and plates of rotting food with flies buzzing around on the dining room table.
But large document archives are traditionally done with TIFF, and should still be done that way.
It's actually nightmarish.
A silent corruption of court documents, stored plans, land records, and patent filings, with the originals often permanently gone.
Contracts that can be declared null and void by a hostile party simply by challenging the document on the basis of the archiving process.
An error so deep and systematic that it even mangled the birth certificate of the US president when it was scanned. The whole affair is a huge advertisement for keeping written records and avoiding digitization, same as with voting machines.
I didn't like any of the existing Linux scanning solutions, so I built my own paperless management tool. I was evaluating different compression algorithms for keeping authoritative masters and was eyeing JPEG2000 (I hate JPEG artifacts). But in the end I decided to go with FLIF, which is lossless. Despite its being less battle-tested, being lossless allows me to run a sha256 on the decompressed image and know that it is exactly what I had scanned in.
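The hash-the-decoded-pixels trick works with any lossless codec. A minimal sketch, using PNG as a stand-in since Pillow has no native FLIF support:

    import hashlib
    from PIL import Image

    def pixel_digest(path):
        # Hash the decoded pixel data rather than the container bytes,
        # so the digest survives any lossless re-encoding.
        img = Image.open(path)
        return hashlib.sha256(img.tobytes()).hexdigest()

    # After a lossless round-trip, the decoded pixels must hash identically.
    assert pixel_digest("master.png") == pixel_digest("reencoded.png")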
It’s easy to ridicule this, and I do think that particular judge’s thought process was rooted in ignorance rather than plausible paranoia, but it’s not hard to see how cases like this Xerox bug sow the seeds of distrust in the non-tech populace. There really isn’t a lot of clear daylight between “the scanned copy might be different to the original” and “the zoomed version might be different to the original”.
Some mocked the judge but here's an example where an AI upscaling algorithm added a human face that was never there:
https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-added-ryan-goslings-face-to-this-photo/
For terminology trivia, that algorithm seems to be doing a machine version of pareidolia: https://en.wikipedia.org/wiki/Pareidolia
EDIT: based on some replies I see, my cite isn't to claim that Apple iOS is using AI enhancement during zoom to add fake pixels. The point is to consider why the judge (who is not a computer programmer) is skeptical. Therefore, his request for expert testimony to rule out that the zoom function adds false information is not that unreasonable.
In other words, what's obvious to a HN tech audience doesn't always apply to judges who don't have the same technical knowledge of how zoom algorithms actually work. So considering the judge's state of mind, the "fact" that iOS zoom doesn't add false incriminating pixels isn't a fact to his level of satisfaction unless the prosecutors bring in an expert to confirm it.
One that relies on a neural network trained on other images is not, it might produce a subtly different image each time it's run and introduce artefacts from other images.
In the end the iPad was not used.
All of the standard zoom algorithms fall into that category. Whether bilinear, nearest neighbour, or similar, they’re all going to produce similar outcomes with which, importantly, a teenaged witness on the stand will be familiar. Same with the way TVs upscale content.
It’s not like this is new evidence being introduced in its zoomed-in form. It’s an iPad and footage or images being used in the same way prosecutors and defence attorneys have used them for years when cross-examining witnesses.
Of course you can, by having the image originally be displayed downscaled for instance. The back camera of your average phone has a much higher resolution than its display, or most computer displays for that matter.
Zooming is another issue entirely.
In the CSI "zoom and enhance", anything beyond the simplest forms of interpolation would be part of "enhance", which is to say, not part of "zoom".
And bilinear interpolation is as simple as you can get besides nearest neighbor, yet it still has the potential to introduce wrong information/cause artifacting. So even in the "CSI" sense, zooming can cause issues (unless you want to argue that only nearest neighbor is CSI 'zooming'? But then not even browsers do only CSI 'zooming').
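The difference is easy to demonstrate: nearest neighbor only repeats values that were actually captured, while bilinear invents intermediate values that never existed in the source. A toy example in Python:

    import numpy as np
    from PIL import Image

    # A 2x2 black/white checkerboard, upscaled 4x both ways.
    src = Image.fromarray(np.array([[0, 255], [255, 0]], dtype=np.uint8))
    nn = np.asarray(src.resize((8, 8), Image.NEAREST))
    bl = np.asarray(src.resize((8, 8), Image.BILINEAR))

    print(sorted(set(nn.flatten().tolist())))  # [0, 255]: only values from the source
    print(sorted(set(bl.flatten().tolist())))  # also intermediate greys that were never captured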
It also works only with the data available in the image itself.
It's fundamentally different from neural network-driven inference and interpolation that attempts to fill in pixels using evidence from other images that the network has been trained on.
Courts have to deal with questions like this all the time and handle them well enough. Technology isn't magic, nor do we have to go down a rabbit hole on each point about what might be there rather than what is there.
So no, each time a given technique is used, the way it works has to be explained so that a decision based on facts of the case is made.
Courts have established mechanisms for handling technical questions around evidence. It's not like any of this is new.
In the end the iPad was not used. Instead a 4K TV was used, which ironically probably upscaled the image too.
Okay, so what's the issue then?
I would argue they DO NOT handle them well enough at all, and I have a feeling that if the person on trial had a different political affiliation you would also be claiming they do not.
Courts routinely get technology wrong, and use this to send innocent people to prison all the time. I would say the courts get technology-related evidence wrong more often than they get it right.
That's an ad hominem argument. In truth I don't care who's on trial, as the outcome of this specific trial is at this point an American culture war & politics issue that I'm uninterested in.
I am interested in how well courts handle technology and whether either prosecutors or defence attorneys can throw a wrench in proceedings with fanciful woo like "iPads, which are made by Apple, have artificial intelligence in them that allow things to be viewed through three dimensions and logarithms", and have that tolerated by judges who themselves are often technologically illiterate.
All of the other evidence was processed through forensics software, and displayed on Windows laptops. Then, at the last minute, the prosecutor asks to show this late-discovered evidence on an iPad. Watching the trial, that felt suspicious - why should this one piece of evidence be shown in a novel way?
It was being used to show the footage to the witness on the stand, in order for them to explain what’s shown on it. As has been done in American courts hundreds of times before.
Now sure, the witness and prosecution or defence could argue that a witness doesn’t need to make guesses about what they’re seeing on zoomed in low-quality footage when under cross-examination. That happens all the time, too.
What doesn’t happen every time is for iPads to be rejected for this purpose because of magical AI ‘logarithms’ inventing imagery.
https://twitter.com/frostycharacter/status/1459722429949878276/photo/1
That means the jury will be able to see this as its own exhibit. (It may have been assigned two exhibit numbers.)
And I'd say it makes sense. Each additional processing step can change the image. For example, if one used the waifu2x algorithm to scale the image up, that would certainly be wrong.
Language is malleable, and so while it may be paranoid to question a feature labelled as zoom now, it's an open question how long that will last.
That's not how technical claims of that nature are normally handled.
The fantastical claim was from the defence. Onus should be on them to prove it, rather than making the prosecution prove a negative.
It's also likely that precedent testimony exists on using an iPad to zoom into an image or video in other cases.
And both the prosecution's and the defense's claims were fantastical, unsubstantiated (beyond some incorrect 'common sense' handwaving) and wrong. So since it's the prosecution that wants to enter new evidence, it's perfectly fine to ask them to back it up. This isn't 'proving a negative'; it's just asking them to prove what they said (that zooming doesn't change the evidence).
Both prosecutors and defence attorneys have used the same mechanism countless times in previous cases, without issue, because it is actually common sense that pinch to zoom is not going to substantively change what's shown in a video like that.
In this case the prosecution wanted to significantly zoom into very grainy footage to ask about an object a few pixels wide, on footage so grainy the prosecution couldn't identify the face of their own lead witness.
The defense raised the question of zooming altering or adding pixels, and the judge said flatly he didn't know. All the prosecution had to do was bring in an expert on video to explain how zoom works, or have the expert do a non-fractional proportional zoom.
Everything is common sense until it isn't.
It's common sense that you can't take over control of a car over the internet and kill everyone in it, that access to a website that shows the realtime location of half the US population would at least be properly protected, that a car's accelerator pedal doesn't cause random and unpredictable effects thanks to spaghetti code, and that when Fujitsu charges a postman with theft, they are not just covering up a bug in their tracking and accounting software.
Which is why both sides can ask the other side to substantiate their claims or provide expert testimony to justify their claims. If there is no issue, then the other side will be able to relatively easily find someone to testify to that or provide other evidence.
It's how the system is meant to work. If this is being used abusively, the judge will catch on soon enough.
That's because criminal law is purposefully biased in favor of the defense. If the prosecution wants to argue that the defendant appears to be pointing his gun at somebody when they zoom in on a photo with an iPad, then they need to prove that the iPad's software can be trusted to do so reliably. If there are programs that can upscale images by creating new data, then they need to prove that the photo app on their iPad is not one of those programs.
This should have been done with a program that is open to scrutiny (ideally one that's open source) so that the algorithm used to upscale the image is known. It's not enough to just assume that the iPad uses bilinear interpolation when there's a reasonable doubt that it could be using another algorithm.
And yes, it's so well understood that the prosecution had no idea of what's going on, claiming that it's "common sense" that it works just like a magnifying glass, which is simply wrong. Using bilinear (or bicubic) interpolation makes the edges of an image blurrier, especially when you're zooming in as much as they did on low-quality footage.
Aside from the confusion about logarithms, where I presume he meant algorithms, and the false claim about 3D, the defence was making a claim about artificial intelligence being used to upscale the footage. Which is inaccurate, and a specific claim being made that's unsupported by any evidence.
Funny enough, the same sort of question arose with regard to a crime scene image that was enlarged by the state's forensic lab, but a simple explanation of the mechanism used was deemed sufficient.
Your claim was that the judge's call is wrong because zooming doesn't actually do interpolation and has thus no potential to introduce wrong information. I've told you that zooming in this case does interpolate and that interpolation has the potential to go wrong.
With AI being used in ever more parts of technology, it is not surprising that a layperson wants to clarify if such technology is being applied here.
Apple has been advertising the usage of AI in dealing with photos ("Deep Fusion" of 2019 IIRC), so the suspicion is warranted.
Also, things like Deep Fusion are capture-time technologies, they don't alter the photo or video each time it's viewed. There's also already precedent in US courts for handling automatic enhancements applied to digital camera evidence.
We know that. A random old judge? Any random person you pick on the street for a survey? Probably not. All they've heard is something about AI and photos.
Remember, for most users, a photo only has to be as accurate as their own memories. If extra detail is added to make the photo look better, it doesn't matter as long as it isn't detail the user remembers differently.
1: https://geekologie.com/2020/10/nvidia-creates-ai-video-compression-for.php
Not sure I follow. Someone questioning a technical process because it might add extra information seems like a fairly cogent thought and shows awareness of tech, even if it's not described or discussed in technical terms.
And furthermore that this isn’t a slight on him, given how hard it is for the public to divine exactly how technology works, as illustrated by Xerox.
From what the judge has seen in this trial, the prosecution has shown it will stoop to any low to get prejudicial evidence in front of the jury. I don't blame him at all for not just taking the prosecution's word for the veracity, authenticity and provenance of the photo.
I do not know how this relates to this specific trial, but if I were a judge I would want to rule out that footage used as evidence was manipulated by "AI".
The prosecution was arguing about what essentially came down to the location of 1-2 individual pixels that were present after the image had been interpolated. Basically: the pixels that the prosecution were trying to argue showed the pointing of a gun were:
1) Only apparent when the image was "zoomed" (interpolated) in exactly the manner they wanted to use.
2) Not apparent in any of the videos or photos which were taken closer to the place they were talking about.
The prosecution is depending on the idea that a computer can see more than a human can, and "enhance" a photo in such a way that shows something that nothing else does. It is absolutely ridiculous. Here is the image (linked above) which was itself cropped from a much larger image, and taken from about 150 meters away, at night, from the tiny sensor on a drone camera.
The judge further clarified by using an analogy: breathalyzers are often used by cops who don't know how they work, but the breathalyzer machine has already been judicially accepted as reliable evidence. This is not the case for upscaled images.
By the way, reading the blog post I stumbled on quite a few minor typos (like "arrors" for "errors" etc.). Should be easy to fix, but in a way this adds more emphasis and mood to the content...
For example, my scanner has an 8-color compression mode. Great! So I scan in a piece of paper printed in 3 colors: black, white, and, say, green. The green comes out as a dithering of multiple colors. Why can't the compression format figure out the 8 dominant colors in the scan, put out a table of the RGB values of those 8 colors, and then compress to those 8 colors? This would look and work great for an awful lot of scans.
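For what it's worth, adaptive palette quantization like that is a solved problem on the software side, e.g. with a recent Pillow (filename invented; dithering is explicitly disabled, since dithering is exactly the complaint):

    from PIL import Image

    # Pick the 8 dominant colours from the scan itself and map every pixel
    # to the nearest one, with no dithering. PNG stores the 8-entry palette.
    scan = Image.open("scan.png").convert("RGB")
    paletted = scan.quantize(colors=8, dither=Image.Dither.NONE)
    paletted.save("scan_8color.png")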
Another problem is b/w photos mixed with text. Setting the scanner to static-level black/white pixels works great for text, but terrible for photos. Setting it to greyscale works great for photos, but terrible for text. Oh how I wish it would recognize greyscale areas and use greyscale for them, and black-on-white text areas and use static black/white pixels for them.
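That wish is essentially mixed raster content (MRC), which formats like DjVu implement: segment the page, encode text regions as bilevel and photo regions as greyscale. A crude block-based sketch of the idea (block size and thresholds are arbitrary, and real segmenters are much smarter):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("page.png").convert("L"))
    out = img.copy()
    B = 64  # block size in pixels

    for y in range(0, img.shape[0], B):
        for x in range(0, img.shape[1], B):
            block = img[y:y+B, x:x+B]
            # Text blocks are bimodal (mostly near-black and near-white);
            # photo blocks have lots of midtones, so leave those greyscale.
            midtones = np.mean((block > 64) & (block < 192))
            if midtones < 0.1:
                out[y:y+B, x:x+B] = np.where(block < 128, 0, 255)

    Image.fromarray(out).save("page_mixed.png")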