If you follow information security discussions on the internet, you might have heard that blurring an image is not a good way of redacting its contents.
Some postscripts:
1) MATLAB source for the one-axis version: https://lcamtuf.coredump.cx/blog/venus.m
2) The algorithm hinges on knowing the blur method and the boundary pixel values, so it works for digital blur, but is not suitable for "analog" use cases, such as sharpening blurry photos. This is typically done with more approximate deconvolution algorithms and the results are almost never as clean.
3) For readers wondering if the centered-window case is fundamentally harder - it isn't; the formulas are just a tad messier and I wanted to keep the article easy to read. Here's the visual solution for a 100-element centered window: https://lcamtuf.coredump.cx/blog/venus-centered.png. Note that this has two alternating stripe patterns, one moving left-to-right and the other right-to-left, each accumulating separate quantization errors.
4) These line- / box- / cross-shaped filters aren't all that visually pleasant, so in photography and related fields, we use Gaussian blur or more complex models that mimic camera aperture. These filters are fundamentally still just weighted averages, just messier to analyze.
5) If you're interested in the impact of various values of the bias parameter (B) on the 1D + 1D reconstruction, here's a quick demo: https://lcamtuf.coredump.cx/blog/venus-bias.png
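For readers who'd rather not run MATLAB, here's a rough Python sketch of the one-axis idea (my own construction, not a port of venus.m): blur a signal with a trailing moving average, then peel each original sample back out exactly, assuming the window width and the first w - 1 original values are known.

```python
import numpy as np

def box_blur_1d(x, w):
    # trailing-window moving average: y[i] = mean(x[i : i + w])
    return np.convolve(x, np.ones(w) / w, mode="valid")

def unblur_1d(y, w, boundary):
    # each blurred sample is a sum of w originals; all but the newest
    # term are already known, so every pixel can be peeled off exactly
    x = list(boundary)                  # the first w - 1 original samples
    for yi in y:
        x.append(yi * w - sum(x[-(w - 1):]))
    return np.array(x)

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, 50).astype(float)
w = 5
blurred = box_blur_1d(orig, w)
recovered = unblur_1d(blurred, w, orig[:w - 1])
assert np.allclose(recovered, orig)
```

With 8-bit quantization of the blurred values the recovery is no longer exact, which is where the accumulating stripe errors mentioned in postscript 3 come from.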
A cool way to approach this is through convolution kernels: they are linear, and linear operations can often be inverted. So if the blur is a linear operation B, you can find a linear operation A that undoes it. That isn't true for every B, but it works for most blurs. Interestingly, if you actually look at the kernel of the inverse operation A, it turns out to be pretty much a sharpening filter.
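This linear-operator view can be made concrete in a few lines of numpy (a sketch with my own naming; the "repeat edge" padding is an assumption I make to keep B square and invertible):

```python
import numpy as np

n, w = 12, 3
# matrix form of a length-3 trailing moving average; clamping the column
# index ("repeat edge" padding) keeps B square and upper triangular
B = np.zeros((n, n))
for i in range(n):
    for k in range(w):
        B[i, min(i + k, n - 1)] += 1.0 / w

A = np.linalg.inv(B)                    # the linear operation that undoes B
x = np.random.default_rng(1).random(n)
assert np.allclose(A @ (B @ x), x)
```

Because B here is triangular with a nonzero diagonal, the inverse always exists; real 2D blurs are bigger matrices, but the same reasoning applies.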
OK, so is Gaussian blur or "pixelize" safer, then?
The math for Gaussian would be more complex, but fundamentally, it's still a weighted average, so it is probably vulnerable to some extent. Pixelation discards more data, so it should be markedly safer.
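To illustrate the difference, here's a toy numpy experiment (my own construction, using circular convolution for simplicity): a Gaussian blur can be divided back out exactly in the frequency domain, while pixelation maps distinct inputs to identical outputs, so no exact inverse can exist.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(64)

# circular Gaussian blur: convolution is multiplication in the frequency
# domain, so the blur can be undone by dividing the spectra back out
k = np.exp(-0.5 * (np.arange(64) - 32) ** 2 / 2.0)   # sigma^2 = 2
k = np.roll(k / k.sum(), -32)                        # normalize, center at 0
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(k)))
assert np.allclose(recovered, x)

# pixelation averages 8-sample blocks; two different inputs with the same
# block means produce the same output, so the operation is many-to-one
pixelate = lambda v: np.repeat(v.reshape(8, 8).mean(axis=1), 8)
y = x.copy()
y[0] += 0.1
y[1] -= 0.1
assert np.allclose(pixelate(x), pixelate(y))
```

In practice Gaussian deblurring degrades quickly once the output is quantized or the kernel spectrum gets small, which is why real-world deconvolution is messier than this demo suggests.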
Excellent as always!
Amazing how almost perfect the reconstruction is. Perhaps this can somehow be used to help the blind.