Preface

The guided filter is a well-known edge-preserving smoothing algorithm proposed by Kaiming He during his PhD. He provides the original paper and a Matlab implementation on his homepage. His dark channel prior dehazing algorithm stunned the field and won the CVPR 2009 Best Paper award, and his output in computer vision in recent years has been just as impressive. You can follow his research at

http://kaiminghe.com/

Advantages:

1. It has an extremely wide range of applications;

2. It avoids the gradient-reversal artifacts of bilateral filtering, so fine detail survives filtering better;

3. Most important of all, it is fast: O(N) time complexity, where N is the number of pixels, i.e. independent of the window size. Traditional bilateral filtering, by contrast, is notoriously slow. There is an even faster variant, the Fast Guided Filter (sketched just below): https://arxiv.org/abs/1505.00996
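The Fast Guided Filter linked above gets its extra speedup by computing the linear coefficients on a subsampled image pair and only applying them at full resolution. Here is a minimal sketch of that idea, assuming grayscale CV_64F inputs already normalized to [0, 1]; the function name FastGuidedFilter and the subsampling ratio s are my own illustrative choices, not code from the paper:

#include <algorithm>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
 
// Sketch of the Fast Guided Filter: do the O(N) statistics on an s-times
// smaller image pair, then upsample only the averaged coefficients.
cv::Mat FastGuidedFilter(const cv::Mat& I, const cv::Mat& p, int r, double eps, int s){
	// subsample the guidance and the input by a factor of s (nearest neighbor)
	cv::Mat I_sub, p_sub;
	cv::resize(I, I_sub, cv::Size(), 1.0 / s, 1.0 / s, cv::INTER_NEAREST);
	cv::resize(p, p_sub, cv::Size(), 1.0 / s, 1.0 / s, cv::INTER_NEAREST);
	int r_sub = std::max(r / s, 1); // the window radius shrinks with the image
	cv::Size w(2 * r_sub + 1, 2 * r_sub + 1);
 
	// the usual guided-filter statistics, but on the small images
	cv::Mat mean_I, mean_p, mean_II, mean_Ip;
	cv::boxFilter(I_sub, mean_I, -1, w);
	cv::boxFilter(p_sub, mean_p, -1, w);
	cv::boxFilter(I_sub.mul(I_sub), mean_II, -1, w);
	cv::boxFilter(I_sub.mul(p_sub), mean_Ip, -1, w);
	cv::Mat var_I = mean_II - mean_I.mul(mean_I);
	cv::Mat cov_Ip = mean_Ip - mean_I.mul(mean_p);
 
	cv::Mat a, b;
	cv::divide(cov_Ip, var_I + eps, a);
	b = mean_p - a.mul(mean_I);
 
	// average the coefficients, then upsample them back to full resolution
	cv::Mat mean_a, mean_b;
	cv::boxFilter(a, mean_a, -1, w);
	cv::boxFilter(b, mean_b, -1, w);
	cv::resize(mean_a, mean_a, I.size(), 0, 0, cv::INTER_LINEAR);
	cv::resize(mean_b, mean_b, I.size(), 0, 0, cv::INTER_LINEAR);
 
	// q = meana .* I + meanb, evaluated against the full-resolution guidance
	return mean_a.mul(I) + mean_b;
}

The paper reports speedups of more than 10x for typical subsampling ratios, with little visible degradation compared to the exact filter.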

Applications: image enhancement, image fusion, dehazing, denoising, feathering, skin smoothing, 3D reconstruction, and so on.

If you only need to use the algorithm, OpenCV 3.0 and MATLAB (R2014a and later) both ship guided filter APIs that you can call directly.

The OpenCV API is void cv::ximgproc::guidedFilter(); see the guidedFilter entry in the OpenCV documentation for details.
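For reference, a minimal call looks like the sketch below (file names are placeholders). One caveat from my reading of the docs: eps there appears to be expressed on the native intensity scale of the input rather than on [0, 1], so for 8-bit images a value like (0.1*255)^2 roughly corresponds to the eps = 0.1^2 used with my normalized implementation further down:

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/ximgproc.hpp> // contrib module, see the note below
 
int main(){
	cv::Mat I = cv::imread("guide.jpg"); // placeholder file names
	cv::Mat p = cv::imread("input.jpg");
	if (I.empty() || p.empty())
		return -1;
	cv::Mat q;
	// radius 9; eps on the native 8-bit intensity scale (see the caveat above)
	cv::ximgproc::guidedFilter(I, p, q, 9, (0.1 * 255) * (0.1 * 255));
	cv::imshow("guidedFilter", q);
	cv::waitKey(0);
	return 0;
}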

Be aware, though, that guidedFilter() lives in the ximgproc module, which is not included in the official prebuilt OpenCV binaries. You have to download the OpenCV source and the contrib source separately and build them yourself; see the write-up "opencv3.1.0+contrib模块编译总结" for a walkthrough.

 

Theory

To be honest, the theory behind the guided filter runs fairly deep and is not easy to explain well. Novice that I am, I have read the original paper and plenty of blog posts and still only half understand it. Fortunately the paper gives pseudocode, so the implementation is straightforward. I wrote one based on OpenCV, and in my tests it produces the same results as the author's Matlab code.

Rather than mislead anyone, I will write up the theory in detail once I fully understand it myself. For now, here are a few blogs that explain it well:

1. https://www.cnblogs.com/riddick/p/8367591.html

2. 白马负金羁 - 导向滤波(Guided Filter)的解析与实现

3. https://www.cnblogs.com/yzl050819/p/7515250.html
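That said, the core model fits in a few lines and makes the pseudocode below read naturally. Within each window $\omega_k$ the output is assumed to be an affine function of the guidance image, and the coefficients come from a ridge regression against the input $p$ (as in the paper):

$$q_i = a_k I_i + b_k,\quad i \in \omega_k, \qquad a_k = \frac{\operatorname{cov}_k(I, p)}{\operatorname{var}_k(I) + \epsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k$$

where $\mu_k$ and $\operatorname{var}_k(I)$ are the mean and variance of $I$ in $\omega_k$, and $\bar{p}_k$ is the mean of $p$. Every pixel is covered by many windows, so the final output averages the coefficients: $q_i = \bar{a}_i I_i + \bar{b}_i$.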

Pseudocode:
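Transcribed from Algorithm 1 of the paper (fmean denotes a mean filter with window radius r); the comments in the C++ implementations below mirror it step by step:

Algorithm 1. Guided Filter.
Input: filtering input image p, guidance image I, radius r, regularization eps
Output: filtering output q

1: mean_I  = fmean(I)
   mean_p  = fmean(p)
   corr_I  = fmean(I .* I)
   corr_Ip = fmean(I .* p)
2: var_I   = corr_I  - mean_I .* mean_I
   cov_Ip  = corr_Ip - mean_I .* mean_p
3: a = cov_Ip ./ (var_I + eps)
   b = mean_p - a .* mean_I
4: mean_a = fmean(a)
   mean_b = fmean(b)
5: q = mean_a .* I + mean_b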

 

C++ Implementation Based on OpenCV

There are two variants: one uses a grayscale image as the guidance, the other a color image. Note that in both variants the filtering input itself must be single channel. To process a color image, simply split it in main, filter each channel separately, and merge the results. The paper also points out that a color guidance image preserves edge detail better, and my results below bear that out.

Grayscale image as the guidance (main below handles color input)

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
 
//////////////////////////////////////////////////////////////////////////
//   GUIDEDFILTER   O(1) time implementation of the guided filter.
//   - guidance image      : I (should be a gray-scale / single-channel image)
//   - filtering input     : p (should be a gray-scale / single-channel image)
//   - local window radius : r
//   - regularization      : eps
//////////////////////////////////////////////////////////////////////////
cv::Mat GuidedFilter(cv::Mat& I, cv::Mat& p, int r, double eps){
	int wsize = 2 * r + 1;
	// convert to double precision and normalize to [0, 1]
	I.convertTo(I, CV_64F, 1.0 / 255.0);
	p.convertTo(p, CV_64F, 1.0 / 255.0);
 
	// meanI = fmean(I): boxFilter with normalize=true is exactly the mean filter fmean
	cv::Mat mean_I;
	cv::boxFilter(I, mean_I, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// meanp = fmean(p)
	cv::Mat mean_p;
	cv::boxFilter(p, mean_p, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// corrI = fmean(I.*I)
	cv::Mat mean_II;
	mean_II = I.mul(I);
	cv::boxFilter(mean_II, mean_II, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// corrIp = fmean(I.*p)
	cv::Mat mean_Ip;
	mean_Ip = I.mul(p);
	cv::boxFilter(mean_Ip, mean_Ip, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// varI = corrI - meanI.*meanI
	cv::Mat var_I, mean_mul_I;
	mean_mul_I = mean_I.mul(mean_I);
	cv::subtract(mean_II, mean_mul_I, var_I);
 
	// covIp = corrIp - meanI.*meanp
	cv::Mat cov_Ip;
	cv::subtract(mean_Ip, mean_I.mul(mean_p), cov_Ip);
 
	// a = covIp ./ (varI + eps)
	// b = meanp - a.*meanI
	cv::Mat a, b;
	cv::divide(cov_Ip, (var_I + eps), a);
	cv::subtract(mean_p, a.mul(mean_I), b);
 
	// meana = fmean(a)
	// meanb = fmean(b)
	cv::Mat mean_a, mean_b;
	cv::boxFilter(a, mean_a, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(b, mean_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// q = meana.*I + meanb
	cv::Mat q;
	q = mean_a.mul(I) + mean_b;
 
	// convert back to 8-bit (note: I and p were converted in place above)
	I.convertTo(I, CV_8U, 255);
	p.convertTo(p, CV_8U, 255);
	q.convertTo(q, CV_8U, 255);
 
	return q;
}
 
int main(){
	cv::Mat src = cv::imread("I:\\Learning-and-Practice\\2019Change\\Image process algorithm\\Img\\woman.jpg");
	if (src.empty()){
		return -1;
	}
 
	// timing for the hand-written GuidedFilter
	double t2 = (double)cv::getTickCount();
 
	cv::Mat dst1, src_input, I;
	src.copyTo(src_input);
	if (src.channels() > 1)
		cv::cvtColor(src, I, cv::COLOR_BGR2GRAY); // if the guidance is color, convert it to grayscale
	std::vector<cv::Mat> p, q;
	if (src.channels() > 1){              // color input: filter each channel, then merge
		cv::split(src_input, p);
		for (int i = 0; i < src.channels(); ++i){
			dst1 = GuidedFilter(I, p[i], 9, 0.1 * 0.1);
			q.push_back(dst1);
		}
		cv::merge(q, dst1);
	}
	else{                                 // grayscale input
		src.copyTo(I);
		dst1 = GuidedFilter(I, src_input, 9, 0.1 * 0.1);
	}
 
	t2 = (double)cv::getTickCount() - t2;
	double time2 = (t2 * 1000.) / ((double)cv::getTickFrequency());
	std::cout << "MyGuidedFilter_process=" << time2 << " ms. " << std::endl << std::endl;
 
	cv::namedWindow("GuidedImg", cv::WINDOW_NORMAL);
	cv::imshow("GuidedImg", I);
	cv::namedWindow("src", cv::WINDOW_NORMAL);
	cv::imshow("src", src);
	cv::namedWindow("GuidedFilter_box", cv::WINDOW_NORMAL);
	cv::imshow("GuidedFilter_box", dst1);
	cv::waitKey(0);
	return 0;
}

Results (window radius = 9, eps = 0.1*0.1)

[Figures: guidance image / input image / result image, two sets of examples]

Color image as the guidance (single-channel input only)
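Before the code, the key difference from the grayscale case: in each local window $\omega_k$ the scalar variance becomes the 3x3 covariance matrix $\Sigma_k$ of the guidance, and the coefficient $a_k$ becomes a 3-vector (Eqn. (14) in the paper; $U$ is the 3x3 identity, $\mu_k$ and $\bar{p}_k$ are the window means of $I$ and $p$):

$$a_k = (\Sigma_k + \epsilon U)^{-1}\left(\frac{1}{|\omega|}\sum_{i\in\omega_k} I_i p_i - \mu_k \bar{p}_k\right), \qquad b_k = \bar{p}_k - a_k^{\top}\mu_k, \qquad q_i = \bar{a}_i^{\top} I_i + \bar{b}_i$$

The per-pixel loop in the implementation builds exactly this $\Sigma_k$, adds $\epsilon$ to its diagonal, inverts it, and multiplies by the covariance vector of $I$ and $p$.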

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
 
 
//////////////////////////////////////////////////////////////////////////
//   GUIDEDFILTER_COLOR   O(1) time implementation of the guided filter
//   using a color image as the guidance.
//   - guidance image      : I (should be a color (RGB) image)
//   - filtering input     : p (should be a gray-scale / single-channel image)
//   - local window radius : r
//   - regularization      : eps
//////////////////////////////////////////////////////////////////////////
cv::Mat GuidedFilter_Color(cv::Mat& I, cv::Mat& p, int r, double eps){
	int wsize = 2 * r + 1;
	// convert to double precision and normalize to [0, 1]
	I.convertTo(I, CV_64F, 1.0 / 255.0);
	p.convertTo(p, CV_64F, 1.0 / 255.0);
	
	// split the guidance image into channels (OpenCV order: B, G, R)
	if (I.channels() == 1){
		std::cout << "I should be a color (RGB) image" << std::endl;
		return cv::Mat(); // bail out instead of crashing on the channel accesses below
	}
	std::vector<cv::Mat> rgb;
	cv::split(I, rgb);
 
	// meanI = fmean(I), per channel (boxFilter with normalize=true is the mean filter)
	cv::Mat mean_I_r, mean_I_g, mean_I_b;
	cv::boxFilter(rgb[0], mean_I_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[1], mean_I_g, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[2], mean_I_r, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// meanp = fmean(p)
	cv::Mat mean_p;
	cv::boxFilter(p, mean_p, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// corrI = fmean(I.*I): the six distinct entries of the symmetric 3x3 covariance
	cv::Mat mean_II_rr, mean_II_rg, mean_II_rb, mean_II_gb, mean_II_gg, mean_II_bb;
	cv::boxFilter(rgb[2].mul(rgb[2]), mean_II_rr, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[2].mul(rgb[1]), mean_II_rg, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[2].mul(rgb[0]), mean_II_rb, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[1].mul(rgb[0]), mean_II_gb, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[1].mul(rgb[1]), mean_II_gg, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(rgb[0].mul(rgb[0]), mean_II_bb, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// corrIp = fmean(I.*p), per channel
	cv::Mat mean_Ip_r, mean_Ip_g, mean_Ip_b;
	mean_Ip_b = rgb[0].mul(p);
	mean_Ip_g = rgb[1].mul(p);
	mean_Ip_r = rgb[2].mul(p);
	cv::boxFilter(mean_Ip_b, mean_Ip_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(mean_Ip_g, mean_Ip_g, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(mean_Ip_r, mean_Ip_r, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// covIp = corrIp - meanI.*meanp
	cv::Mat cov_Ip_r, cov_Ip_g, cov_Ip_b;
	cv::subtract(mean_Ip_r, mean_I_r.mul(mean_p), cov_Ip_r);
	cv::subtract(mean_Ip_g, mean_I_g.mul(mean_p), cov_Ip_g);
	cv::subtract(mean_Ip_b, mean_I_b.mul(mean_p), cov_Ip_b);
 
	// varI = corrI - meanI.*meanI
	// The variance of I in each local patch is the 3x3 symmetric matrix Sigma
	// of Eqn(14) in the paper:
	//           rr, rg, rb
	//   Sigma = rg, gg, gb
	//           rb, gb, bb
	cv::Mat var_I_rr, var_I_rg, var_I_rb, var_I_gb, var_I_gg, var_I_bb;
	cv::subtract(mean_II_rr, mean_I_r.mul(mean_I_r), var_I_rr);
	cv::subtract(mean_II_rg, mean_I_r.mul(mean_I_g), var_I_rg);
	cv::subtract(mean_II_rb, mean_I_r.mul(mean_I_b), var_I_rb);
	cv::subtract(mean_II_gb, mean_I_g.mul(mean_I_b), var_I_gb);
	cv::subtract(mean_II_gg, mean_I_g.mul(mean_I_g), var_I_gg);
	cv::subtract(mean_II_bb, mean_I_b.mul(mean_I_b), var_I_bb);
 
	// a = covIp ./ (varI + eps): solve the regularized 3x3 system at every pixel
	int cols = p.cols;
	int rows = p.rows;
	cv::Mat Mat_a = cv::Mat::zeros(rows, cols, CV_64FC3);
	std::vector<cv::Mat> a;
	cv::split(Mat_a, a);
	double rr, rg, rb, gg, gb, bb;
	for (int i = 0; i < rows; ++i){
		for (int j = 0; j < cols; ++j){
			rr = var_I_rr.at<double>(i, j); rg = var_I_rg.at<double>(i, j); rb = var_I_rb.at<double>(i, j);
			gg = var_I_gg.at<double>(i, j); gb = var_I_gb.at<double>(i, j);
			bb = var_I_bb.at<double>(i, j);
			cv::Mat sigma = (cv::Mat_<double>(3, 3) << rr, rg, rb,
			                                           rg, gg, gb,
			                                           rb, gb, bb);
			cv::Mat cov_Ip = (cv::Mat_<double>(1, 3) << cov_Ip_r.at<double>(i, j), cov_Ip_g.at<double>(i, j), cov_Ip_b.at<double>(i, j));
			cv::Mat eye = cv::Mat::eye(3, 3, CV_64FC1);
			sigma = sigma + eps * eye;        // Sigma + eps * U
			cv::Mat sigma_inv = sigma.inv();  // invert the regularized covariance
			cv::Mat tmp = cov_Ip * sigma_inv; // a_k as a 1x3 row vector
			a[2].at<double>(i, j) = tmp.at<double>(0, 0); // r
			a[1].at<double>(i, j) = tmp.at<double>(0, 1); // g
			a[0].at<double>(i, j) = tmp.at<double>(0, 2); // b
		}
	}
 
	// b = meanp - a.*meanI
	cv::Mat b = mean_p - a[0].mul(mean_I_b) - a[1].mul(mean_I_g) - a[2].mul(mean_I_r);
 
	// meana = fmean(a)
	// meanb = fmean(b)
	cv::Mat mean_a_r, mean_a_g, mean_a_b, mean_b;
	cv::boxFilter(a[0], mean_a_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(a[1], mean_a_g, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(a[2], mean_a_r, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
	cv::boxFilter(b, mean_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT);
 
	// q = meana.*I + meanb
	cv::Mat q = mean_a_r.mul(rgb[2]) + mean_a_g.mul(rgb[1]) + mean_a_b.mul(rgb[0]) + mean_b;
 
	// convert back to 8-bit (I and p were converted in place above)
	I.convertTo(I, CV_8UC3, 255);
	p.convertTo(p, CV_8U, 255);
	q.convertTo(q, CV_8U, 255);
 
	return q;
}
 
int main(){
	cv::Mat I = cv::imread("I:\\Learning-and-Practice\\2019Change\\Image process algorithm\\Img\\woman1.jpeg");
	cv::Mat P = cv::imread("I:\\Learning-and-Practice\\2019Change\\Image process algorithm\\Img\\woman1.jpeg");
	if (I.empty() || P.empty()){
		return -1;
	}
	if (P.channels() > 1)
		cv::cvtColor(P, P, cv::COLOR_BGR2GRAY); // the filtering input must be single channel
	
	// timing for the hand-written GuidedFilter_Color
	double t2 = (double)cv::getTickCount();
	cv::Mat q;
	q = GuidedFilter_Color(I, P, 9, 0.2 * 0.2);
	t2 = (double)cv::getTickCount() - t2;
	double time2 = (t2 * 1000.) / ((double)cv::getTickFrequency());
	std::cout << "MyGuidedFilter_process=" << time2 << " ms. " << std::endl << std::endl;
 
	cv::namedWindow("GuidedImg");
	cv::imshow("GuidedImg", I);
	cv::namedWindow("src");
	cv::imshow("src", P);
	cv::namedWindow("GuidedFilter", cv::WINDOW_NORMAL);
	cv::imshow("GuidedFilter", q);
	cv::waitKey(0);
	return 0;
}

Results (window radius = 9, eps = 0.1*0.1)

[Figure: guidance image / input image / result image]

Compared with the grayscale-guided result above, this result preserves edges better; look closely at the face and the nose.