
View Full Version : Fast Blurring Algorithms in Managed Code?



eshbach
01-04-2007, 01:15 PM
Someone on another forum suggested an image host (for certain types of images whose content may be... suggestive) that stores each image in both its normal form and a blurred version, with the blurred version used as the link preview.

I thought this sounded like a funny idea, so I wrote a quick page to do it and hosted it on an unused domain I have. The page is here: http://aaron.doomray.com/blurhost

The problem is that I don't know much about image processing, and my blurring algorithm is far too slow. For a large picture with a high "blur amount", it can take over a minute for the page to return. Part of the problem is the hosting: it's a shared server, so I'm not getting all the CPU time I could. More importantly, the host does not allow me to use compiled DLLs or unmanaged code. That means I can't use a library blur function, and I can't use pointers to access the pixels in the bitmap (this is the real problem).

Since I can't use pointers, accessing each pixel takes far too long, so I'm looking for a way to blur an image without having to loop through each pixel multiple times (taking averages of the colors).

Here's my current blur method:



public Bitmap Blur(Bitmap image, int amount)
{
    Int32 horz = amount;
    Int32 vert = amount;

    Single weightsum;
    Single[] weights;

    // Intermediate bitmap for the horizontal pass. This must be a separate
    // bitmap: writing horizontal results straight back into "image" would
    // corrupt pixels that later samples in the same row still need to read.
    Bitmap t = new Bitmap(image.Width, image.Height);

    // Build the 1-D Gaussian kernel for the horizontal pass.
    weights = new Single[horz * 2 + 1];
    for (Int32 i = 0; i < horz * 2 + 1; i++)
    {
        weights[i] = Gauss(-horz + i, 0, horz);
    }

    // Horizontal pass: image -> t.
    for (Int32 row = 0; row < image.Height; row++)
    {
        for (Int32 col = 0; col < image.Width; col++)
        {
            Double r = 0;
            Double g = 0;
            Double b = 0;
            Double a = 0;
            weightsum = 0;
            for (Int32 i = 0; i < horz * 2 + 1; i++)
            {
                Int32 x = col - horz + i;
                if (x < 0)
                {
                    // Skip the part of the kernel hanging off the left edge.
                    i += -x;
                    x = 0;
                }
                if (x > image.Width - 1)
                    break;
                Color c = image.GetPixel(x, row);
                // Colour channels are weighted by alpha as well; for fully
                // opaque images this reduces to a plain weighted average.
                r += c.R * weights[i] / 255.0 * c.A;
                g += c.G * weights[i] / 255.0 * c.A;
                b += c.B * weights[i] / 255.0 * c.A;
                a += c.A * weights[i];
                weightsum += weights[i];
            }
            // Normalize by the weights actually used (edge kernels are partial).
            r /= weightsum;
            g /= weightsum;
            b /= weightsum;
            a /= weightsum;
            // Clamp before casting: a Byte can never exceed 255, so
            // clamping after the cast would do nothing.
            Byte br = (Byte)Math.Min(255.0, Math.Round(r));
            Byte bg = (Byte)Math.Min(255.0, Math.Round(g));
            Byte bb = (Byte)Math.Min(255.0, Math.Round(b));
            Byte ba = (Byte)Math.Min(255.0, Math.Round(a));
            t.SetPixel(col, row, Color.FromArgb(ba, br, bg, bb));
        }
    }

    // Build the kernel for the vertical pass.
    weights = new Single[vert * 2 + 1];
    for (Int32 i = 0; i < vert * 2 + 1; i++)
    {
        weights[i] = Gauss(-vert + i, 0, vert);
    }

    // Vertical pass: t -> image.
    for (Int32 col = 0; col < image.Width; col++)
    {
        for (Int32 row = 0; row < image.Height; row++)
        {
            Double r = 0;
            Double g = 0;
            Double b = 0;
            Double a = 0;
            weightsum = 0;
            for (Int32 i = 0; i < vert * 2 + 1; i++)
            {
                Int32 y = row - vert + i;
                if (y < 0)
                {
                    // Skip the part of the kernel hanging off the top edge.
                    i += -y;
                    y = 0;
                }
                if (y > image.Height - 1)
                    break;
                Color c = t.GetPixel(col, y);
                r += c.R * weights[i] / 255.0 * c.A;
                g += c.G * weights[i] / 255.0 * c.A;
                b += c.B * weights[i] / 255.0 * c.A;
                a += c.A * weights[i];
                weightsum += weights[i];
            }
            r /= weightsum;
            g /= weightsum;
            b /= weightsum;
            a /= weightsum;
            Byte br = (Byte)Math.Min(255.0, Math.Round(r));
            Byte bg = (Byte)Math.Min(255.0, Math.Round(g));
            Byte bb = (Byte)Math.Min(255.0, Math.Round(b));
            Byte ba = (Byte)Math.Min(255.0, Math.Round(a));
            image.SetPixel(col, row, Color.FromArgb(ba, br, bg, bb));
        }
    }
    return image;
}




If anyone can think of a faster way to do this that will work in ASP.NET, please let me know. There might be some way to do it using GDI+, but I'm not sure.
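One standard way to cut the per-pixel cost without unsafe code is the running-sum box blur: instead of re-summing the whole kernel at every pixel, slide a window and add/subtract one sample per step, so the cost per pixel is constant no matter how large the blur radius. Run horizontally then vertically, and repeated two or three times, it closely approximates a Gaussian. This is a sketch of the technique only (names are mine, not from the thread), operating on a single channel of one row:

```csharp
using System;

static class BoxBlur
{
    // Blur one channel of one row (or column). Edges are handled by
    // clamping, i.e. the edge pixel is repeated outside the image.
    public static byte[] BlurLine(byte[] src, int radius)
    {
        byte[] dst = new byte[src.Length];
        int window = radius * 2 + 1;
        int sum = 0;

        // Prime the running sum with the first window.
        for (int i = -radius; i <= radius; i++)
            sum += src[Clamp(i, src.Length)];

        for (int x = 0; x < src.Length; x++)
        {
            dst[x] = (byte)(sum / window);
            // Slide the window one pixel: drop the leftmost sample,
            // add the next sample on the right.
            sum += src[Clamp(x + radius + 1, src.Length)]
                 - src[Clamp(x - radius, src.Length)];
        }
        return dst;
    }

    private static int Clamp(int i, int length)
    {
        return Math.Max(0, Math.Min(length - 1, i));
    }
}
```

The inner loop does two array reads and one write regardless of the blur amount, which is exactly the property the kernel-based version lacks.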

ahmad
01-04-2007, 08:17 PM
Blurring doesn't have to be done that way. You could make it blocky: compute one average colour for each 5x5 area and fill the whole area with it, which speeds up your algo by a factor of 25.

Or perhaps use amount to indicate how big each X*X blurring pixel is.
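That "blocky" idea is essentially pixelation. A rough sketch (method and parameter names are mine, not from the thread), using the same GetPixel/SetPixel calls as the original code but doing the averaging work only once per block:

```csharp
using System;
using System.Drawing;

static class Blocky
{
    // Average each amount x amount block once and fill the block with
    // that colour. One read and one write per pixel; the averaging work
    // shrinks by roughly amount^2 versus a per-pixel kernel.
    public static void Pixelate(Bitmap bmp, int amount)
    {
        for (int by = 0; by < bmp.Height; by += amount)
        for (int bx = 0; bx < bmp.Width; bx += amount)
        {
            // Blocks at the right and bottom edges may be smaller.
            int w = Math.Min(amount, bmp.Width - bx);
            int h = Math.Min(amount, bmp.Height - by);

            long r = 0, g = 0, b = 0;
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    Color c = bmp.GetPixel(bx + x, by + y);
                    r += c.R; g += c.G; b += c.B;
                }

            int n = w * h;
            Color avg = Color.FromArgb((int)(r / n), (int)(g / n), (int)(b / n));
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    bmp.SetPixel(bx + x, by + y, avg);
        }
    }
}
```

Handling partial blocks at the edges (the Math.Min clamps above) is the only subtlety; there is no kernel overhang to worry about.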

eshbach
01-04-2007, 08:39 PM
Blurring doesn't have to be done that way. You could make it blocky and actually cover a 5x5 area with that blurring colour and you speed up your algo by a factor of 25.

Or perhaps use amount to indicate how big each X*X blurring pixel is.

I was thinking about that, but I think it would be trickier around edges and such... it seems more like a fall-back plan. I'd like to keep from just dropping data.

I did make some speed improvements today, though. I buffered the pixels twice and then operated on the buffers instead of calling GetPixel and SetPixel all the time. I also switched to a slightly simpler algorithm.
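The buffering approach described here might look something like the following sketch (method names are mine): read the whole bitmap into an int[] once, do all the blur arithmetic on the array, then write the result back once. GetPixel and SetPixel are then each called width*height times total, instead of once per kernel sample.

```csharp
using System.Drawing;

static class PixelBuffer
{
    // Read every pixel once into a flat ARGB array, row-major order.
    public static int[] Read(Bitmap bmp)
    {
        int[] buf = new int[bmp.Width * bmp.Height];
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
                buf[y * bmp.Width + x] = bmp.GetPixel(x, y).ToArgb();
        return buf;
    }

    // Write the (possibly modified) buffer back in one pass.
    public static void Write(Bitmap bmp, int[] buf)
    {
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
                bmp.SetPixel(x, y, Color.FromArgb(buf[y * bmp.Width + x]));
    }
}
```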

cx323
01-08-2007, 02:35 PM
Look into using LockBits and pointers; that should give an appreciable performance increase.

eshbach
01-08-2007, 04:01 PM
Look into using LockBits and pointers; that should give an appreciable performance increase.

Yeah, I know. I said in the first post that I can't use pointers, because my web host won't let me set the /unsafe compiler flag.

cx323
01-08-2007, 05:28 PM
Oh, sorry about that. You can use Marshal.Copy for a smaller performance gain, then.
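A sketch of the Marshal.Copy approach (names are mine): LockBits exposes the raw pixel data, and Marshal.Copy moves it into a managed byte[] with no pointers or unsafe code at the C# level. This assumes 32bpp ARGB, where each pixel is four bytes in B, G, R, A order and rows start Stride bytes apart. Whether it runs under a restrictive trust level depends on the host's security policy.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class FastPixels
{
    // Copy the bitmap's pixel data into a managed byte array in one call.
    public static byte[] ReadAll(Bitmap bmp)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                       PixelFormat.Format32bppArgb);
        try
        {
            byte[] pixels = new byte[data.Stride * data.Height];
            Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
            return pixels;
        }
        finally
        {
            // Always unlock, even if the copy throws.
            bmp.UnlockBits(data);
        }
    }
}
```

The blur math then runs against the byte array (a modified array can be copied back with the reverse Marshal.Copy overload before UnlockBits).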

eshbach
01-08-2007, 06:08 PM
Oh, sorry about that. You can use Marshal.Copy for a smaller performance gain, then.


I thought that might work, but it still throws a SecurityException. I think I must not have System.Security.Permissions.SecurityPermissionAttribute.UnmanagedCode.