(This may be obvious depending on the color you’re guessing, but in my case the color was quite gray and it took me a few guesses to notice this essential visual aid.)
It looks like it's using Euclidean RGB difference, so one potential approach if you don't want to eyeball it is to just try to find the match for each channel one by one.
Edit: Okay, my new best score was 100% on the 1st guess, thanks to the browser inspector. :-)
(Burnt orange, bright yellow, light blue, shock pink, dark blue)
Any way to explain that?
Also: it would be cool to have a way to forfeit and get the solution.
// (r, g, b) is the target colour and (rr, gg, bb) the guess; each channel is 0-15
function score (r, g, b, rr, gg, bb) {
  // worst possible error per channel for this particular target...
  const maxRErr = Math.max(r, 15 - r)
  const maxGErr = Math.max(g, 15 - g)
  const maxBErr = Math.max(b, 15 - b)
  // ...gives the worst possible distance, used to normalise the score
  const maxDist = Math.sqrt(maxRErr * maxRErr +
    maxGErr * maxGErr +
    maxBErr * maxBErr)
  const rErr = Math.abs(rr - r)
  const gErr = Math.abs(gg - g)
  const bErr = Math.abs(bb - b)
  // Euclidean distance between guess and target, scaled to a percentage
  const dist = Math.sqrt(rErr * rErr +
    gErr * gErr +
    bErr * bErr)
  return Math.floor(100 * (1 - dist / maxDist))
}
Before even starting to count the error, it calculates the maximum error possible for the target colour. That makes sense! We want the percentages to feel more or less the same every round. Otherwise a medium grey would score relatively higher with every guess.

But to answer your question, I don't think the local optimum situation you are describing is possible. I'm no math wizard but looking at this function there must (surely?) always be a direction to move one slider to get a higher score, unless you're bang on. So I think you just missed out on that one move, which does get ever more likely as your guesses get closer.
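To make that concrete, here's a quick sanity check you can run alongside the function above (purely illustrative: the 8/8/8 target is made up, and the other two channels are assumed correct):

// Sweep the blue channel against a made-up target of (8, 8, 8), with red and
// green already correct. The reported percentage rises steadily to 100 at the
// true value and falls again afterwards, so one slider move always helps.
for (let b = 0; b <= 15; b++) {
  console.log(b, score(8, 8, 8, 8, 8, b));
}
// prints 42, 49, 56, 63, 71, 78, 85, 92, 100, 92, 85, ... back down to 49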
Android Chrome.
That's my best guess.
EDIT: But from reading the code there seems to be no such logic. Strange, because I observed the same thing, in Firefox on Android.
Pulling the blue slider down by a notch or two could have got you a perfect match.
Here is a demo of these colours if you want to see for yourself how similar they look:
You would need a server to generate the preview images ofc, but something like a subdomain redirect to a cloudflare worker or what-have-you could be sufficient. If done right the generated previews could be pretty small.
Hillclimbing is already somewhat efficient:
For each slider:
- Start at 0
- Move to the right until the score drops
- Move one to the left
That should result in something like 9 tries per slider on average, so 27 tries per color.

One signal that could be used to improve it: the difference in score between 0 and 1 gives you the approximate length you have to move to the right.
Due to rounding, you don't get the exact length.
So my guess is that with an optimal strategy you would need something like 4 tries per slider on average.
That comes down to an average of 12 tries per color.
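Roughly, that estimate can be made explicit (just a sketch: it assumes the other two channels are already about right, and the flooring adds noise; scoreAt() is a hypothetical probing helper, not something on the page):

// Estimate one channel from two probes. Ignoring the flooring, the score is
// roughly linear in the distance to the target when the other two channels
// are close, so probes at 0 and 1 give both the slope (~100 / maxDist) and
// the remaining distance. scoreAt(v) is a hypothetical helper that sets this
// channel to v, submits, and returns the reported percentage.
async function estimateChannel(scoreAt) {
  const s0 = await scoreAt(0);
  const s1 = await scoreAt(1);
  if (s1 <= s0) return 0;  // moving right didn't help, so the target is at (or near) 0
  const perStep = s1 - s0;
  return Math.min(15, Math.round((100 - s0) / perStep));
}

Two probes plus a fine-tuning step or two around the estimate is roughly where a ~4 tries per slider figure would come from.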
var speed = 50

// Prime the result output
for (const col of [rin, bin, gin]) {
  col.valueAsNumber = 0
}
rin.dispatchEvent(new Event('change'));

async function tryit(col, incr) {
  // Increment a single color
  col.valueAsNumber = col.valueAsNumber + incr
  col.dispatchEvent(new Event('change'));
  submit.click()
  await (new Promise(resolve => setTimeout(resolve, speed)));
  var res_text = result.innerText.split(/[ ()%]/)[4]
  if (res_text === "Splendid!") {
    throw new Error("Finished")
  }
  return (parseInt(res_text))
}

async function trymany() {
  // We need to iterate at least twice due to rounding
  // in result percentage, sometimes making neighbouring
  // colors have the same result.
  var last_res = 0, max_tries = 3;
  while (--max_tries > 0) {
    for (const col of [rin, gin, bin]) {
      while (true) {
        var new_res = await tryit(col, 1)
        if (last_res >= new_res) {
          // set last value and break
          await tryit(col, -1)
          break
        }
        last_res = new_res
      }
    }
  }
}

await trymany()
var speed = 500;

[rin, bin, gin].forEach(col => {
  col.valueAsNumber = 7;
  col.dispatchEvent(new Event('change'));
  col.score = 0;
});

async function tryit(col, value) {
  col.valueAsNumber = value;
  col.dispatchEvent(new Event('change'));
  submit.click();
  await new Promise(resolve => setTimeout(resolve, speed));
  var res_text = result.innerText.split(/[ ()%]/)[4];
  if (res_text === "Splendid!") {
    throw new Error("Finished - Found correct combination");
  }
  col.score = parseInt(res_text);
  return col.score;
}

async function binarySearch(col) {
  let start = 0;
  let end = 15;
  let mid = 7;
  let startAccuracy = await tryit(col, start);
  let endAccuracy = await tryit(col, end);
  let midAccuracy = 0;
  while (true) {
    mid = Math.floor((start + end) / 2);
    midAccuracy = await tryit(col, mid);
    if ((end - start) <= 2) {
      const max = Math.max(startAccuracy, midAccuracy, endAccuracy);
      if (startAccuracy == max) await tryit(col, start);
      else if (midAccuracy == max) await tryit(col, mid);
      else await tryit(col, end);
      return;
    }
    if (endAccuracy > startAccuracy) {
      start = mid;
      startAccuracy = midAccuracy;
    } else {
      end = mid;
      endAccuracy = midAccuracy;
    }
  }
}

async function findOptimalCombination() {
  for (const col of [rin, gin, bin]) {
    await binarySearch(col);
  }
  /* rounding */
  for (const col of [rin, gin, bin]) {
    const mid = col.valueAsNumber;
    const score = await tryit(col, mid);
    const left = await tryit(col, mid - 1);
    if (score >= left) {
      const right = await tryit(col, mid + 1);
      if (score >= right) await tryit(col, mid);
    }
  }
  console.log("Optimization complete");
}

await findOptimalCombination();
(Note that binary search does not apply here. This is searching for an extremum, not a zero point.)
- Start at the range of 0-F, measure the score of 6 and 9 (2 tries)
- Depending on which is higher, narrow the range to 0-9 or 6-F
- Suppose the range is 0-9, measure the score of 3 and 6 (1 try. 6 is already measured)
- Narrow the range to 0-6 or 3-9
- Suppose the range is 0-6, measure the score of 2 and 3 (1 try. 3 is already measured)
- The worst case is that 3's score is higher. The range is now 2-6. Since 2, 3, and 6 are all measured, in the worst case you need 2 more tries for 4 and 5.
- The other case is 2's score is higher. The range is now 0-3 and 0, 2, 3 are all measured.
So in the worst case there are 6 tries per slider, ~5 tries on average (a rough sketch of this narrowing is below). I suspect this can be further optimized but I'll stop here :)

We can also probably prove that 3 guesses are not enough by some sort of adversarial argument. That is, instead of having the color be fixed at the start, imagine that the game picks the colors adversarially to try to make the job of the guesser as difficult as possible, while remaining consistent with the answers it has already given. If we can pick a strategy for the adversary that never fully disambiguates the color for any sequence of three guesses, we will be done.
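Here's that per-slider narrowing sketched out, for anyone who wants to try it (probe(v) is a stand-in for setting the slider to v, submitting, and reading the percentage; the ties introduced by rounding are ignored here):

// Narrow one slider from the range 0-15 by probing two interior points and
// discarding the worse side, caching scores so repeated points cost nothing.
// probe(v) is a hypothetical async helper returning the percentage for value v.
async function narrowSlider(probe) {
  const cache = new Map();
  const score = async v => {
    if (!cache.has(v)) cache.set(v, await probe(v));
    return cache.get(v);
  };
  let lo = 0, hi = 15;
  while (hi - lo > 2) {
    const third = Math.floor((hi - lo) / 3);
    const m1 = lo + third, m2 = hi - third;
    if (await score(m1) < await score(m2)) lo = m1;
    else hi = m2;
  }
  // Two or three candidates left: probe any still unknown and keep the best.
  let best = lo;
  for (let v = lo + 1; v <= hi; v++) {
    if (await score(v) > await score(best)) best = v;
  }
  return best;
}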
With the scaling, it can still be thought of as a problem of intersecting spheres, but the spheres begin to look quite strange since the distance function isn't symmetric. For example if your first guess was a corner of the cube, 1/8 of the space would have the same score (0%) since you've picked the furthest available point. This probably could be analyzed in the same way as above, since the 'spheres' centered at non-corner points aren't degenerate, and their intersections probably still have the right topological dimensions (dropping by 1 each time you add a new sphere, so 3 spheres are still likely to take you to a finite set of points in the continuous case) but that would require some work to prove.
Since we still need to consider the loss of precision from rounding, we can just look at this as a discrete problem and try to find a tuple of points that are sufficient to distinguish everything. It took me a couple of attempts, but the following 4 tetrahedrally arranged ones work: [11,7,4],[4,4,8],[11,8,11],[4,11,7].
There might be a smaller set that works; it would take ~2^48 work to exhaustively search for 3 points that could distinguish everything, and it might be possible to do better since you can choose the second point based on the results from the first.
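Checking a candidate set is cheap if you want to verify one yourself; a brute-force sketch, assuming the score() function quoted earlier in the thread is in scope:

// Return true if every possible target produces a distinct tuple of scores
// against the given probe colours (so the probes fully identify the target).
function distinguishesAll(probes) {
  const seen = new Set();
  for (let r = 0; r < 16; r++) {
    for (let g = 0; g < 16; g++) {
      for (let b = 0; b < 16; b++) {
        const key = probes.map(([pr, pg, pb]) => score(r, g, b, pr, pg, pb)).join(',');
        if (seen.has(key)) return false;
        seen.add(key);
      }
    }
  }
  return true;
}

// e.g. the four probes proposed above:
distinguishesAll([[11, 7, 4], [4, 4, 8], [11, 8, 11], [4, 11, 7]]);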
It can identify more than half of all colors in just 2 guesses. If you edit the code to minimize the size of the largest bucket instead, you get an algorithm that has a slightly worse average case, but never needs more than 4 guesses.
Paste it in the site's browser console to try.
function solve() {
  let possibleTargets = new Array(0x1000).fill(0).map((_, i) => i);
  while (true) {
    // for each possible guess, calculate the score-buckets
    // (for a full brute force, iterate through all colors; restricting to the
    // remaining candidates performs better, probably because we don't reward
    // for guessing correctly in the current turn)
    const guessResults = possibleTargets.map((guess) => {
      const buckets = new Array(101).fill(0).map(() => []);
      for (const possibleTarget of possibleTargets) {
        buckets[calcScore(possibleTarget, guess)].push(possibleTarget);
      }
      return { guess, buckets };
    });
    // find guess with lowest variance
    const best = guessResults.sort((a, b) => calcVariance(a.buckets) - calcVariance(b.buckets))[0];
    // make the guess & update possible targets
    const sc = makeGuess(best.guess);
    if (sc >= 100) return "Success!";
    possibleTargets = best.buckets[sc];
  }
}
function calcVariance(buckets) { const maxBucketSize = Math.max(...buckets.map(b => b.length)); return buckets.reduce((acc, b) => acc + (b.length - maxBucketSize) ** 2, 0); }
function deconstruct(color) { return [(color & 0xF00) >> 8, (color & 0x0F0) >> 4, (color & 0x00F)]; }
function calcScore(target, guess) { return score(...deconstruct(target), ...deconstruct(guess)); }
function makeGuess(guess) { [rr, gg, bb] = deconstruct(guess); submitInput(); return score(r, g, b, rr, gg, bb); };
solve();
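For the "minimize the size of the largest bucket" variant mentioned above, something like this could be dropped in where calcVariance() is used in the sort (a sketch, not the exact edit used):

// Rank guesses by their worst case: the size of the biggest score-bucket.
function calcLargestBucket(buckets) {
  return Math.max(...buckets.map(b => b.length));
}

Since solve() sorts ascending and takes the first entry, the guess with the smallest worst-case bucket gets picked.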
I'd like to try out a few alternatives in place of your variance function. Something like b.length*log(b.length) to estimate the expected number of guesses, and perhaps using a version of log closer to ceil(log(x)/log(100)).
Regarding computing it for all targets, I optimized it by precomputing the value of guessResults in the first iteration, since it's always the same (no matter the target color), which saves most of the computation. I removed the optimization so the code wouldn't be so long here.
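One way to phrase that b.length*log(b.length) idea as a drop-in replacement for calcVariance() (a sketch; the base-100 log is the "version of log" mentioned above, reflecting the roughly 100 distinct scores a reply can take):

// Sketch: rank guesses by an estimate of the expected number of further
// guesses, assuming each later guess can split the candidates into roughly
// 100 score classes. Lower is better, so it drops straight into the sort
// that currently uses calcVariance().
function calcExpectedGuesses(buckets) {
  const total = buckets.reduce((acc, b) => acc + b.length, 0);
  return buckets.reduce((acc, b) => {
    if (b.length <= 1) return acc;  // 0 or 1 candidates left: nothing more to do
    const est = Math.ceil(Math.log(b.length) / Math.log(100));
    return acc + (b.length / total) * est;
  }, 0);
}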
As you can imagine I'm really popular at parties...
A muddy yellow reminiscent of the sweet and sour fruit found across Central America and the Southern States.
(Taken from a paint description)
I'm going on supermarket starfruit here so might be a bit off :)
I'm disgusted and enraged. Really liked the answers better.
It is good for people you know as well as people you don't know. The only situation where I don't recommend it is if someone in your group wants to win something, because the "win" factor is weak.
(y'know, as much as one's mind can be blown by trivia about a children's tv show)
This would of course just be the median colour. It'd be black and white film because it was cheaper, completely incorrectly exposed so that the blacks were bang at the bottom and the film would be much too high an ISO so it'd be grainy as hell. And there would be someone with a really bad hair cut smoking in it.
Source: still own my Zenit from back then :)
What camera / film might have been used for those sepia pictures I keep finding in my family’s old stuff?
Btw: That description with the black blacks was bang-on. Also the smoking, and the haircuts.
So not film or camera specific really.
I once read a blog post here about how Netscape interpreted colors that are words but aren’t in the official name list, and it comes down to tossing out the non-hex characters and padding/chunking the remaining characters to make RGB numbers, so “dumptruck” might end up being yellow because it ends up being DC0. I immediately wrote a little app that interpreted all the words in /usr/share/dict/words and stuck them in a sqlite db with Lab color representations so you could query for the nearest phony color word for a specific RGB you wanted. It just showed the 100 best matches sorted by closeness, written in their actual color. Fun little spur of the moment evening project.
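For anyone curious, the heuristic as described there (not the real Netscape algorithm, which is more involved) is roughly:

// Rough sketch of the described heuristic: keep the hex digits, pad to a
// multiple of three, then split into R/G/B chunks and take the leading
// digit of each to get a #RGB shorthand.
function phonyColor(word) {
  let hex = word.toLowerCase().replace(/[^0-9a-f]/g, '');
  if (hex.length === 0) hex = '0';
  while (hex.length % 3 !== 0) hex += '0';
  const chunk = hex.length / 3;
  return '#' + hex[0] + hex[chunk] + hex[2 * chunk];
}

phonyColor('dumptruck'); // "#dc0", a yellow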
This guy produces great things. He did that micro drawing language a while back, right?
Would also be cool to have a camera link where you can select the color to guess by pointing the camera at something.
I find this guy so inspiring. He codes and creates tools like I aspire to. Just beautiful stuff!
One day when I get out of the rat race side of software, I would love to just focus on this kind of stuff. So satisfying and creative and beautiful.
I use them in my code club to teach about what is light, actually, how do we perceive color, how tv and computer screens 'trick' your color perception by simply mixing RGB in the right proportions, etc.
I got the first one in 10 guesses. Then I tapped New Game, but the background didn't change and my next guess said NaN%
(Android Chrome)
(Windows Chrome)
I guess in this game you're guessing the mix of primary colors, so maybe it doesn't have the same difficulty of deriving the constituents?
HSL is much more intuitive. As soon as you have an idea of the hue scale, it's very easy to define a color with saturation and lightness levels.
If you think RGB is hard to conceptualize, try Lab.
But really, good job! Very nice game, fun and challenging.
You play as RBG fighting monsters with different RGB values.
I just bought my first OLED phone, maybe it's time to play again.
Simple app but funny game.
Do people binary search or scan a range? How do they prioritize the color channels? And so on.
Since I'm one of them, perhaps programmers?