3D Shader: Displacement Mapping
We know that a normal map only changes a surface's normals for lighting; it does not modify the mesh itself. Displacement mapping is different: it actually moves the object's vertices. I used displacement mapping to build an ocean. Screenshots first, then the explanation:
Note the vertices along the edge of the sea: they really have been displaced.
Finally, I added a function that converts Cartesian coordinates to spherical coordinates and used it to wrap the grid onto a sphere. I will provide the spherical version of the source code; if you only want the flat version, simply comment out the call to this function in the shader.
All right, enough rambling; I can barely stay awake!
Similar to a normal map, every texel of a displacement map stores a vector that describes the displacement of the corresponding vertex.
Note that here the texels do not correspond one-to-one with pixels but one-to-one with vertices, so the number of texels in the texture equals the number of vertices in the mesh. The reason they must be equal is explained in the comments in the source code.
In the vertex shader stage we need to fetch the displacement vector from the texel corresponding to each vertex. Note that only vs_3_0 and later support texture fetches in the vertex shader; in earlier shader models only the pixel shader could sample textures.
[ http://msdn.microsoft.com/en-us/library/bb172930(v=vs.85).aspx
New Features
New features of vertex shader version vs_3_0 are listed in the following sections.
Indexing Registers
In the earlier shader models, only the constant register bank could be indexed. In this model, the following register banks can be indexed, using the loop counter register (aL):
- Input register (v#)
- Output register (o#)
Vertex Textures
This shader model supports texture lookup in the vertex shader using texldl. The vertex engine has four texture sampler stages (distinct from the displacement map sampler and the texture samplers in the pixel engine) that can be used to sample textures set at those stages. See Vertex Textures in vs_3_0 (DirectX HLSL).
]
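To make the vertex texture fetch concrete, here is a minimal vs_3_0 sketch; it is not the project's actual shader, and the sampler name g_samHeightMap and the pass-through output are assumptions for illustration. The key point is that vertex-stage sampling must go through tex2Dlod, which takes the texture coordinate in xy and an explicit mip level in w.

// Minimal vertex texture fetch sketch for vs_3_0 (hypothetical names, not the project's shader).
sampler g_samHeightMap;  // assumed to be bound to a 128x128 height map (height in the R channel)
float4 SampleHeightVS(float3 positionInLocal : POSITION0,
                      float2 texCoord : TEXCOORD0) : POSITION
{
    // tex2Dlod is required in the vertex shader: the mip level is passed explicitly in w.
    float height = tex2Dlod(g_samHeightMap, float4(texCoord, 0.0f, 0.0f)).r;
    positionInLocal.y += height;   // displace the vertex along y
    // A real shader would also multiply by the world-view-projection matrix here.
    return float4(positionInLocal, 1.0f);
}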
OK, here is a rough outline of how the project works.
The project uses two displacement textures and two normal textures, purely to add extra randomness. That detail is not essential to the technique, so I will describe it as if there were only one of each.
A displacement texture could store a full displacement vector, but here only the R channel is used (G and B are both 1), so our displacement texture stores random values in [0, 255]. Note that the displacement texture is 128*128; it is really just a 128*128 height map. This height map is used to perturb the y values of the vertices of a 128*128 grid, and the coordinates used to sample it scroll over time.
Once the vertex y coordinates have been displaced, we can derive each point's normal from the new Y values, just as a normal texture can be generated from a height map (as mentioned in the normal-mapping post); a finite-difference version of that idea is sketched below.
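A minimal sketch of that finite-differencing idea (the full version is in myVertexEntry below; the names h, hRight, hFront, dx and dz are placeholders, not the project's identifiers): sample the displaced height at the vertex and at the neighboring grid points, then build the tangent, binormal and normal from the height differences.

// Sketch: rebuild a normal from displaced heights by finite differencing (hypothetical names).
// h is the height at this vertex, hRight / hFront the heights one grid step along +x / -z,
// and dx / dz are the grid spacings in those directions.
float3 NormalFromHeights(float h, float hRight, float hFront, float dx, float dz)
{
    float3 T = normalize(float3(1.0f, (hRight - h) / dx, 0.0f));  // tangent along +x
    float3 B = normalize(float3(0.0f, (hFront - h) / dz, -1.0f)); // binormal along -z
    return normalize(cross(T, B));                                // normal used for lighting
}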
That normal is then used to compute the lighting. Of course, lighting computed this way is fairly coarse, as shown below:
So the normal map comes in again, purely to add detail; it has nothing to do with the displacement itself. The method: transform the camera and the light into each vertex's tangent space, sample the normal texture with the texture coordinates, and compute the lighting from those three quantities; a sketch follows. Note that in this project the coordinates used to sample the normal texture are scaled so that the texture repeats 8 times.
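In shader terms the detail pass looks roughly like this. It is only a sketch (the project's real code is in myPixelEntry below, which averages two normal maps); every name here is a placeholder. The eye and light vectors arrive already transformed into tangent space, the normal map is sampled with the scaled coordinates, and the usual ambient/diffuse/specular terms are combined.

// Sketch: tangent-space detail lighting from a normal map (hypothetical names).
float3 DetailLighting(sampler samNormalMap,   // stand-in for the project's two normal-map samplers
                      float2 texCoord,        // already scaled so the normal map repeats 8 times
                      float3 toEyeInTangent, float3 lightDirectionInTangent,
                      float3 ambient, float3 diffuse, float3 specular, float specularPower)
{
    float3 n = tex2D(samNormalMap, texCoord).rgb;
    n = normalize(2.0f * n - 1.0f);                        // expand from [0, 1] to [-1, 1]
    float3 toLight = -normalize(lightDirectionInTangent);
    float d = max(dot(toLight, n), 0.0f);                  // diffuse factor
    float3 r = reflect(normalize(lightDirectionInTangent), n);
    float s = pow(max(dot(r, normalize(toEyeInTangent)), 0.0f), specularPower);
    if(d <= 0.0f) s = 0.0f;                                // no specular where there is no diffuse light
    return ambient + d * diffuse + s * specular;
}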
With the detail added:
All right, here is the source code: the application (3D_Shader_DisplacementMapping.cpp) first, then the effect file (DisplacementMapping.fx).
/*------------------------
3D_Shader_DisplacementMapping.cpp -- implements displacement mapping
(c) Seamanj.2013/9/1
------------------------*/
#include "DXUT.h"
#include "resource.h" // phase1 : add camera
// phase2 : add sky box
// phase3 : add grid mesh
// phase4 : add FX
#define phase1 1
#define phase2 1
#define phase3 1
#define phase4 1
#if phase1
#include "DXUTcamera.h"
CFirstPersonCamera g_Camera;
#endif
#if phase2
// Vertex Buffer
LPDIRECT3DVERTEXBUFFER9 g_pVB = NULL;
// Index Buffer
LPDIRECT3DINDEXBUFFER9 g_pIB = NULL;
PDIRECT3DCUBETEXTURE9 g_pCubeTex = 0;
#endif
#if phase3
#include <vector>
int g_iVerticesNumPerRow = 128;
int g_iVerticesNumPerCol = 128;
// Note: this count must match the size of the displacement texture. I originally considered using
// 129 vertices (index 0 through 128) and giving the first and last vertex the same position and
// texture coordinate (U = 0), but that does not work: although the texture appears to wrap, the
// texels no longer map one-to-one to the vertices, so one texture coordinate is inevitably
// duplicated. That breaks the continuity of the texture, as if it had been pulled apart by a gap
// whose color is constant along the U direction, so the surface stops being continuous as soon as
// the displacement texture scrolls. Likewise, with only 127 vertices the texture also loses its
// continuity.
float g_fDeltaX = 0.25f;
float g_fDeltaZ = 0.25f;
const float EPSILON = 0.001f;
IDirect3DVertexDeclaration9* g_pVertexDecl;
ID3DXMesh* g_pMesh;
// The two normal maps to scroll.
IDirect3DTexture9* g_pNormalTex1;
IDirect3DTexture9* g_pNormalTex2;
// The two displacement maps to scroll.
IDirect3DTexture9* g_pDisplacementTex1;
IDirect3DTexture9* g_pDisplacementTex2;
//===============================================================
// Colors and Materials
const D3DXCOLOR WHITE(1.0f, 1.0f, 1.0f, 1.0f);
const D3DXCOLOR BLACK(0.0f, 0.0f, 0.0f, 1.0f);
const D3DXCOLOR RED(1.0f, 0.0f, 0.0f, 1.0f);
const D3DXCOLOR GREEN(0.0f, 1.0f, 0.0f, 1.0f);
const D3DXCOLOR BLUE(0.0f, 0.0f, 1.0f, 1.0f);
struct DirectionalLight
{
D3DXCOLOR ambient;
D3DXCOLOR diffuse;
D3DXCOLOR specular;
D3DXVECTOR3 directionInWorld;
};
struct Material
{
Material()
:ambient(WHITE), diffuse(WHITE), specular(WHITE), specularPower(8.0f){}
Material(const D3DXCOLOR& a, const D3DXCOLOR& d,
const D3DXCOLOR& s, float power)
:ambient(a), diffuse(d), specular(s), specularPower(power){}
D3DXCOLOR ambient;
D3DXCOLOR diffuse;
D3DXCOLOR specular;
float specularPower;
};
DirectionalLight g_structDirectionalLight;
Material g_structMaterial;
D3DXVECTOR2 g_scaleHeights = D3DXVECTOR2(0.7f, 1.1f);
float g_fTexScale = 8.0f;
D3DXVECTOR2 g_normalMapVelocity1 = D3DXVECTOR2(0.05f, 0.07f);
D3DXVECTOR2 g_normalMapVelocity2 = D3DXVECTOR2(-0.01f, 0.13f);
D3DXVECTOR2 g_displacementMapVelocity1 = D3DXVECTOR2(0.012f, 0.015f);
D3DXVECTOR2 g_displacementMapVelocity2 = D3DXVECTOR2(0.014f, 0.05f);
// Offset of normal maps for scrolling (vary as a function of time)
D3DXVECTOR2 g_normalMapOffset1(0.0f, 0.0f);
D3DXVECTOR2 g_normalMapOffset2(0.0f, 0.0f);
// Offset of displacement maps for scrolling (vary as a function of time)
D3DXVECTOR2 g_displacementMapOffset1(0.0f, 0.0f);
D3DXVECTOR2 g_displacementMapOffset2(0.0f, 0.0f);
#endif
#if phase4
#include "SDKmisc.h" // used when loading files
ID3DXEffect* g_pEffect = NULL; // D3DX effect interface
D3DXHANDLE g_hTech = 0;
D3DXHANDLE g_hWorld = 0;
D3DXHANDLE g_hWorldInv = 0;
D3DXHANDLE g_hWorldViewProj = 0;
D3DXHANDLE g_hEyePositionInWorld = 0;
D3DXHANDLE g_hDirectionalLightStruct = 0;
D3DXHANDLE g_hMaterialStruct = 0;
D3DXHANDLE g_hNormalTex1 = 0;
D3DXHANDLE g_hNormalTex2 = 0;
D3DXHANDLE g_hDisplacementTex1 = 0;
D3DXHANDLE g_hDisplacementTex2 = 0;
D3DXHANDLE g_hNormalOffset1 = 0;
D3DXHANDLE g_hNormalOffset2 = 0;
D3DXHANDLE g_hDisplacementOffset1 = 0;
D3DXHANDLE g_hDisplacementOffset2 = 0;
D3DXHANDLE g_hScaleHeights = 0; // scales the heights sampled from displacement maps 1 and 2
D3DXHANDLE g_hDelta = 0; // grid spacing along the X and Z axes
#endif
//--------------------------------------------------
// Rejects any D3D9 devices that aren't acceptable to the app by returning false
//--------------------------------------------------
bool CALLBACK IsD3D9DeviceAcceptable( D3DCAPS9* pCaps, D3DFORMAT AdapterFormat, D3DFORMAT BackBufferFormat,
bool bWindowed, void* pUserContext )
{
// Typically want to skip back buffer formats that don't support alpha blending
IDirect3D9* pD3D = DXUTGetD3D9Object();
if( FAILED( pD3D->CheckDeviceFormat( pCaps->AdapterOrdinal, pCaps->DeviceType,
AdapterFormat, D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
D3DRTYPE_TEXTURE, BackBufferFormat ) ) )
return false;

return true;
}
//--------------------------------------------------
// Before a device is created, modify the device settings as needed
//--------------------------------------------------
bool CALLBACK ModifyDeviceSettings( DXUTDeviceSettings* pDeviceSettings, void* pUserContext )
{
#if phase1
pDeviceSettings->d3d9.pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
#endif
return true;
}
#if phase3
//create Vertex Declaration
HRESULT createVertexDeclaration( IDirect3DDevice9* pd3dDevice )
{
D3DVERTEXELEMENT9 decl[] =
{
// offsets in bytes
{0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
{0, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
{0, 20, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},
D3DDECL_END()
};
return pd3dDevice->CreateVertexDeclaration(decl, &g_pVertexDecl);
}
struct GridVertexFormat
{
D3DXVECTOR3 pos;
D3DXVECTOR2 scaledTexCoord; // [a, b]
D3DXVECTOR2 normalizedTexCoord; // [0, 1]
};
// Write the vertex positions and indices into the given vectors; later they are copied into the mesh buffers.
void writeVertexPosAndIndicesToVectors(int iVerticesNumPerRow, int iVerticesNumPerCol, float fDeltaX, float fDeltaZ,
const D3DXVECTOR3& center, std::vector<D3DXVECTOR3>& vecVertexPosData,
std::vector<DWORD>& vecIndexData)
{
int iVerticesNum = iVerticesNumPerRow * iVerticesNumPerCol;
int iCellsNumPerRow = iVerticesNumPerRow - 1;
int iCellsNumPerCol = iVerticesNumPerCol - 1;
int iTrianglesNum = iCellsNumPerRow * iCellsNumPerCol * 2;
float fWidth = (float)iCellsNumPerCol * fDeltaX;
float fDepth = (float)iCellsNumPerRow * fDeltaZ;
//===========================================
// Build vertices.
// We first build the grid geometry centered about the origin and on
// the xz-plane, row-by-row and in a top-down fashion, starting from the
// top-left corner. We then translate the grid vertices so that they are
// centered about the specified parameter 'center'.
vecVertexPosData.resize( iVerticesNum );
// Offsets to translate grid from quadrant 4 to center of
// coordinate system.
float fOffsetX = -fWidth * 0.5f;
float fOffsetZ = fDepth * 0.5f;
int k = 0;
for(int i = 0; i < iVerticesNumPerRow; ++i)
{
for(int j = 0; j < iVerticesNumPerCol; ++j)
{
// Negate the depth coordinate to put in quadrant four.
// Then offset to center about coordinate system.
vecVertexPosData[k].x = j * fDeltaX + fOffsetX;
vecVertexPosData[k].z = -i * fDeltaZ + fOffsetZ;
vecVertexPosData[k].y = 0.0f;
// Translate so that the center of the grid is at the
// specified 'center' parameter.
D3DXMATRIX T;
D3DXMatrixTranslation(&T, center.x, center.y, center.z);
D3DXVec3TransformCoord(&vecVertexPosData[k], &vecVertexPosData[k], &T);
++k; // Next vertex
}
}
//===========================================
// Build indices.
vecIndexData.resize(iTrianglesNum * 3);
// Generate indices for each quad.
k = 0;
for(DWORD i = 0; i < (DWORD)iCellsNumPerRow; ++i)
{
for(DWORD j = 0; j < (DWORD)iCellsNumPerCol; ++j)
{
vecIndexData[k]     = i * iVerticesNumPerCol + j;
vecIndexData[k + 1] = i * iVerticesNumPerCol + j + 1;
vecIndexData[k + 2] = (i+1) * iVerticesNumPerCol + j;
vecIndexData[k + 3] = (i+1) * iVerticesNumPerCol + j;
vecIndexData[k + 4] = i * iVerticesNumPerCol + j + 1;
vecIndexData[k + 5] = (i+1) * iVerticesNumPerCol + j + 1;
// 1---2    5
// |  /   / |
// | /   /  |
// 3    4---6
// next quad
k += 6;
}
}
}
#endif
//--------------------------------------------------
// Create any D3D9 resources that will live through a device reset (D3DPOOL_MANAGED)
// and aren't tied to the back buffer size
//--------------------------------------------------
HRESULT CALLBACK OnD3D9CreateDevice( IDirect3DDevice9* pd3dDevice, const D3DSURFACE_DESC* pBackBufferSurfaceDesc,
void* pUserContext )
{
#if phase3
HRESULT hr;
#endif
#if phase1
// Setup the camera's view parameters
D3DXVECTOR3 vecEye( 0.0f, 0.0f, -5.0f );
D3DXVECTOR3 vecAt ( 0.0f, 0.0f, -0.0f );
g_Camera.SetViewParams( &vecEye, &vecAt );
FLOAT fObjectRadius=1;
// The three parameters that control how far the camera can zoom.
//g_Camera.SetRadius( fObjectRadius * 3.0f, fObjectRadius * 0.5f, fObjectRadius * 10.0f );
g_Camera.SetEnablePositionMovement( true );
#endif
#if phase2
D3DXCreateCubeTextureFromFile(pd3dDevice, L"grassenvmap1024.dds", &g_pCubeTex);
#endif
#if phase3
createVertexDeclaration(pd3dDevice);
DWORD dwTrianglesNum = (g_iVerticesNumPerRow - 1) * (g_iVerticesNumPerCol - 1) * 2;
DWORD dwVerticesNum = g_iVerticesNumPerRow * g_iVerticesNumPerCol;
D3DVERTEXELEMENT9 elems[MAX_FVF_DECL_SIZE];
UINT uElemsNum = 0;
g_pVertexDecl->GetDeclaration(elems, &uElemsNum);
V( D3DXCreateMesh(dwTrianglesNum, dwVerticesNum, D3DXMESH_MANAGED, elems, pd3dDevice, &g_pMesh) );
//===============================================================
// Write the grid vertices and triangles to the mesh.
GridVertexFormat *pVertices = NULL;
V( g_pMesh->LockVertexBuffer(0, (void**)&pVertices) );
std::vector<D3DXVECTOR3> vecVertexPosData;
std::vector<DWORD> vecIndexData;
writeVertexPosAndIndicesToVectors(g_iVerticesNumPerRow, g_iVerticesNumPerCol, g_fDeltaX,
g_fDeltaZ, D3DXVECTOR3(0.0f, 0.0f, 0.0f), vecVertexPosData, vecIndexData);
for(int i = 0; i < g_iVerticesNumPerRow; ++i)
{
for(int j = 0; j < g_iVerticesNumPerCol; ++j)
{
DWORD index = i * g_iVerticesNumPerCol + j;
pVertices[index].pos = vecVertexPosData[index];
// Note that the step is 1/127: the U coordinate of the last vertex is 1 and that of the first
// vertex is 0, so the texture wraps seamlessly.
// Why would 129 vertices not work? Because the texture only has 128 texel positions, so with 129
// vertices one texture coordinate would inevitably be duplicated. That breaks the continuity of
// the texture, as if it had been pulled apart by a gap whose color is constant along the U
// direction, so the surface would no longer be continuous once the displacement texture scrolls.
pVertices[index].scaledTexCoord = D3DXVECTOR2((float)j / (g_iVerticesNumPerCol-1),
(float)i / (g_iVerticesNumPerRow-1)) * g_fTexScale;
// g_fTexScale is how many times the texture tiles across the grid; the larger the value,
// the more often the texture repeats.
pVertices[index].normalizedTexCoord = D3DXVECTOR2((float)j / (g_iVerticesNumPerCol-1),
(float)i / (g_iVerticesNumPerRow-1));
// Note the -1 here, so that the first and last vertices join up. (The V coordinate does not need
// to wrap.) Without the -1 the texture would not join: it would be as if only the first 127
// texels were used, which keeps the texture continuous but breaks the seam where the edges meet.
}
}
V( g_pMesh->UnlockVertexBuffer() );
//===============================================================
// Write triangle data(Indices & Attributes) so we can compute normals.
WORD* pIndices = NULL;
V( g_pMesh->LockIndexBuffer(0, (void**)&pIndices) );
DWORD* pAttributes = NULL;
V( g_pMesh->LockAttributeBuffer(0, &pAttributes) );
for(UINT i = 0; i < g_pMesh->GetNumFaces(); ++i)
{
pIndices[i*3+0] = (WORD)vecIndexData[i*3+0];
pIndices[i*3+1] = (WORD)vecIndexData[i*3+1];
pIndices[i*3+2] = (WORD)vecIndexData[i*3+2];
pAttributes[i] = 0; // All in subset 0.
}
V( g_pMesh->UnlockIndexBuffer() );
V( g_pMesh->UnlockAttributeBuffer() );
//===============================================================
// Optimize for the vertex cache and build attribute table.
// D3DXMESHOPT_ATTRSORT optimization builds an attribute table.
DWORD* adj = new DWORD[g_pMesh->GetNumFaces() * 3];
// Generate adjacency info
V(g_pMesh->GenerateAdjacency(EPSILON, adj));
V(g_pMesh->OptimizeInplace(D3DXMESHOPT_VERTEXCACHE|D3DXMESHOPT_ATTRSORT,adj, 0, 0, 0));
delete[] adj;
//===============================================================
// Create textures.
V(D3DXCreateTextureFromFile(pd3dDevice, L"normal_map1.dds", &g_pNormalTex1));
V(D3DXCreateTextureFromFile(pd3dDevice, L"normal_map2.dds", &g_pNormalTex2));
V(D3DXCreateTextureFromFileEx(pd3dDevice, L"displacement_map1.dds", g_iVerticesNumPerRow,
g_iVerticesNumPerCol,1, 0, D3DFMT_R32F, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
0, 0, 0, &g_pDisplacementTex1));
V(D3DXCreateTextureFromFileEx(pd3dDevice, L"displacement_map2.dds", g_iVerticesNumPerRow,
g_iVerticesNumPerCol,1, 0, D3DFMT_R32F, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
0, 0, 0, &g_pDisplacementTex2));
#endif
#if phase4
WCHAR str[MAX_PATH];
// Read the D3DX effect file
V_RETURN( DXUTFindDXSDKMediaFileCch( str, MAX_PATH, L"DisplacementMapping.fx" ) );
// Create the effect
LPD3DXBUFFER pErrorBuff = NULL;
V_RETURN( D3DXCreateEffectFromFile(
pd3dDevice, // associated device
str, // effect filename
NULL, // no preprocessor definitions
NULL, // no ID3DXInclude interface
D3DXSHADER_DEBUG, // compile flags
NULL, // don't share parameters
&g_pEffect, // return effect
&pErrorBuff // return error messages
) );
if( pErrorBuff )
MessageBoxA(0, (char*)pErrorBuff->GetBufferPointer(), 0, 0);
// get handle
g_hTech = g_pEffect->GetTechniqueByName("myTech");
g_hWorld = g_pEffect->GetParameterByName(0, "g_mWorld");
g_hWorldInv = g_pEffect->GetParameterByName(0, "g_mWorldInv");
g_hWorldViewProj = g_pEffect->GetParameterByName(0, "g_mWorldViewProj");
g_hEyePositionInWorld = g_pEffect->GetParameterByName(0, "g_eyePositionInWorld");
g_hDirectionalLightStruct = g_pEffect->GetParameterByName(0, "g_structDirectionalLight");
g_hMaterialStruct = g_pEffect->GetParameterByName(0, "g_structMaterial");
g_hNormalTex1 = g_pEffect->GetParameterByName(0, "g_texNormalMap1");
g_hNormalTex2 = g_pEffect->GetParameterByName(0, "g_texNormalMap2");
g_hDisplacementTex1 = g_pEffect->GetParameterByName(0, "g_texDisplacementMap1");
g_hDisplacementTex2 = g_pEffect->GetParameterByName(0, "g_texDisplacementMap2");
g_hNormalOffset1 = g_pEffect->GetParameterByName(0, "g_normalOffset1");
g_hNormalOffset2 = g_pEffect->GetParameterByName(0, "g_normalOffset2");
g_hDisplacementOffset1 = g_pEffect->GetParameterByName(0, "g_DisplacementOffset1");
g_hDisplacementOffset2 = g_pEffect->GetParameterByName(0, "g_DisplacementOffset2");
g_hScaleHeights = g_pEffect->GetParameterByName(0, "g_scaleHeights");
g_hDelta = g_pEffect->GetParameterByName(0, "g_delta");
// We don't need to set these every frame since they do not change.
V(g_pEffect->SetTechnique(g_hTech));
V(g_pEffect->SetTexture(g_hNormalTex1, g_pNormalTex1));
V(g_pEffect->SetTexture(g_hNormalTex2, g_pNormalTex2));
V(g_pEffect->SetTexture(g_hDisplacementTex1, g_pDisplacementTex1));
V(g_pEffect->SetTexture(g_hDisplacementTex2, g_pDisplacementTex2));
g_structDirectionalLight.directionInWorld = D3DXVECTOR3(0.0f, -1.0f, -3.0f);
D3DXVec3Normalize(&g_structDirectionalLight.directionInWorld, &g_structDirectionalLight.directionInWorld);
g_structDirectionalLight.ambient = D3DXCOLOR(0.3f, 0.3f, 0.3f, 1.0f);
g_structDirectionalLight.diffuse = D3DXCOLOR(1.0f, 1.0f, 1.0f, 1.0f);
g_structDirectionalLight.specular = D3DXCOLOR(0.7f, 0.7f, 0.7f, 1.0f);
V(g_pEffect->SetValue(g_hDirectionalLightStruct, &g_structDirectionalLight, sizeof(DirectionalLight)));
g_structMaterial.ambient = D3DXCOLOR(0.4f, 0.4f, 0.7f, 0.0f);
g_structMaterial.diffuse = D3DXCOLOR(0.4f, 0.4f, 0.7f, 1.0f);
g_structMaterial.specular = 0.8f*WHITE;
g_structMaterial.specularPower = 128.0f;
V(g_pEffect->SetValue(g_hMaterialStruct, &g_structMaterial, sizeof(Material)));
V(g_pEffect->SetValue(g_hScaleHeights, &g_scaleHeights, sizeof(D3DXVECTOR2)));
D3DXVECTOR2 delta(g_fDeltaX, g_fDeltaZ);
V(g_pEffect->SetValue(g_hDelta, &delta, sizeof(D3DXVECTOR2)));
#endif
return S_OK;
}
#if phase2
struct MyVertexFormat
{
FLOAT x, y, z;
FLOAT u, v, w;
};
#define FVF_VERTEX (D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_TEXCOORDSIZE3(0) )
static HRESULT initVertexIndexBuffer(IDirect3DDevice9* pd3dDevice)
{
static const MyVertexFormat Vertices[] =
{
//+X
{ 1, -1, -1, 1, -1, -1 },
{ 1, -1, 1, 1, -1, 1 },
{ 1, 1, 1 , 1, 1, 1 },
{ 1, 1, -1, 1, 1, -1 },
//-X
{ -1, -1, -1,-1, -1, -1 },
{ -1, 1, -1, -1, 1, -1 },
{ -1, 1, 1 , -1, 1, 1 },
{ -1, -1, 1, -1, -1, 1},
//+Y
{ -1, 1, -1 ,-1, 1, -1 },
{ 1, 1, -1 ,1, 1, -1 },
{ 1, 1, 1 ,1, 1, 1},
{ -1, 1, 1 ,-1, 1, 1 },
//-Y
{ -1, -1, -1,-1, -1, -1 },
{ -1, -1, 1,-1, -1, 1 },
{ 1, -1, 1,1, -1, 1 },
{ 1, -1, -1, 1, -1, -1},
//+Z
{ -1, -1, 1,-1, -1, 1 },
{ -1, 1, 1,-1, 1, 1 },
{ 1, 1, 1,1, 1, 1 },
{ 1, -1, 1,1, -1, 1 },
//-Z
{ -1, -1, -1,-1, -1, -1 },
{ 1, -1, -1,1, -1, -1 },
{ 1, 1, -1,1, 1, -1 },
{ -1, 1, -1,-1, 1, -1 }
};
if (FAILED(pd3dDevice->CreateVertexBuffer(sizeof(Vertices),
0, FVF_VERTEX,
D3DPOOL_DEFAULT,
&g_pVB, NULL))) {
return E_FAIL;
}
void* pVertices;
if (FAILED(g_pVB->Lock(0, 0, /* map entire buffer */
&pVertices, 0))) {
return E_FAIL;
}
memcpy(pVertices, Vertices, sizeof(Vertices));
g_pVB->Unlock();

// Create and initialize index buffer
static const WORD Indices[] =
{
 0,  1,  2,    0,  2,  3,
 4,  5,  6,    4,  6,  7,
 8,  9, 10,    8, 10, 11,
12, 13, 14,   12, 14, 15,
16, 17, 18,   16, 18, 19,
20, 21, 22,   20, 22, 23
};
if (FAILED(pd3dDevice->CreateIndexBuffer(sizeof(Indices),
D3DUSAGE_WRITEONLY,
D3DFMT_INDEX16,
D3DPOOL_DEFAULT,
&g_pIB, NULL))) {
return E_FAIL;
}
void* pIndices;
if (FAILED(g_pIB->Lock(0, 0, /* map entire buffer */
&pIndices, 0))) {
return E_FAIL;
}
memcpy(pIndices, Indices, sizeof(Indices));
g_pIB->Unlock();
return S_OK;
}
#endif
//--------------------------------------------------
// Create any D3D9 resources that won't live through a device reset (D3DPOOL_DEFAULT)
// or that are tied to the back buffer size
//--------------------------------------------------
HRESULT CALLBACK OnD3D9ResetDevice( IDirect3DDevice9* pd3dDevice, const D3DSURFACE_DESC* pBackBufferSurfaceDesc,
void* pUserContext )
{
#if phase1
pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );
// Turn off D3D lighting; it is enabled by default.
pd3dDevice->SetRenderState( D3DRS_LIGHTING, FALSE );
// Setup the camera's projection parameters
float fAspectRatio = pBackBufferSurfaceDesc->Width / ( FLOAT )pBackBufferSurfaceDesc->Height;
g_Camera.SetProjParams( D3DX_PI / 2, fAspectRatio, 0.1f, 5000.0f );
#endif
#if phase4
#if phase3
HRESULT hr;
if( g_pEffect )
V_RETURN( g_pEffect->OnResetDevice() );
#endif
#endif
#if !phase2
return S_OK;
#else
return initVertexIndexBuffer(pd3dDevice);
#endif
}
//--------------------------------------------------
// Handle updates to the scene. This is called regardless of which D3D API is used
//--------------------------------------------------
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
#if phase1
g_Camera.FrameMove( fElapsedTime );
#endif
#if phase4
g_normalMapOffset1 += g_normalMapVelocity1 * fElapsedTime;
g_normalMapOffset2 += g_normalMapVelocity2 * fElapsedTime;
g_displacementMapOffset1 += g_displacementMapVelocity1 * fElapsedTime;
//g_displacementMapOffset2 += g_displacementMapVelocity2 * fElapsedTime;

if(g_normalMapOffset1.x >= 1.0f || g_normalMapOffset1.x <= -1.0f)
g_normalMapOffset1.x = 0.0f;
if(g_normalMapOffset2.x >= 1.0f || g_normalMapOffset2.x <= -1.0f)
g_normalMapOffset2.x = 0.0f;
if(g_normalMapOffset1.y >= 1.0f || g_normalMapOffset1.y <= -1.0f)
g_normalMapOffset1.y = 0.0f;
if(g_normalMapOffset2.y >= 1.0f || g_normalMapOffset2.y <= -1.0f)
g_normalMapOffset2.y = 0.0f;
if(g_displacementMapOffset1.x >= 1.0f || g_displacementMapOffset1.x <= -1.0f)
g_displacementMapOffset1.x = 0.0f;
if(g_displacementMapOffset2.x >= 1.0f || g_displacementMapOffset2.x <= -1.0f)
g_displacementMapOffset2.x = 0.0f;
if(g_displacementMapOffset1.y >= 1.0f || g_displacementMapOffset1.y <= -1.0f)
g_displacementMapOffset1.y = 0.0f;
if(g_displacementMapOffset2.y >= 1.0f || g_displacementMapOffset2.y <= -1.0f)
g_displacementMapOffset2.y = 0.0f;
#endif
}
//--------------------------------------------------
// Render the scene using the D3D9 device
//--------------------------------------------------
void CALLBACK OnD3D9FrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext )
{
HRESULT hr;
// Clear the render target and the zbuffer
V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) );
// Render the scene
if( SUCCEEDED( pd3dDevice->BeginScene() ) )
{
#if phase2
pd3dDevice->SetRenderState(D3DRS_LIGHTING, false);
// Set world matrix
D3DXMATRIX M;
D3DXMatrixIdentity( &M ); // M = identity matrix
D3DXMatrixScaling(&M,2000, 2000, 2000);
pd3dDevice->SetTransform(D3DTS_WORLD, &M) ;
// Set view matrix
D3DXMATRIX view = *g_Camera.GetViewMatrix() ;
pd3dDevice->SetTransform(D3DTS_VIEW, &view) ;
// Set projection matrix
D3DXMATRIX proj = *g_Camera.GetProjMatrix() ;
pd3dDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
pd3dDevice->SetTransform(D3DTS_PROJECTION, &proj) ;
pd3dDevice->SetStreamSource(0, g_pVB, 0, sizeof(MyVertexFormat));
pd3dDevice->SetIndices(g_pIB);//sets the current index buffer.
pd3dDevice->SetFVF(FVF_VERTEX);//Sets the current vertex stream declaration.
pd3dDevice->SetTexture(0, g_pCubeTex);
pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 24, 0, 12);
#endif
#if phase3
D3DXMatrixIdentity( &M ); // M = identity matrix
pd3dDevice->SetTransform(D3DTS_WORLD, &M) ;
pd3dDevice->SetTexture(0, 0);
g_pMesh->DrawSubset(0);
#endif
#if phase4
pd3dDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
D3DXMATRIXA16 mWorld, mWorldInv;
D3DXMatrixIdentity(&mWorld);
V(g_pEffect->SetMatrix(g_hWorld, &mWorld));
D3DXMatrixInverse(&mWorldInv, 0, &mWorld);
V(g_pEffect->SetMatrix(g_hWorldInv, &mWorldInv));
//set WorldViewProject matrix
D3DXMATRIXA16 mWorldViewProjection;
mWorldViewProjection = *g_Camera.GetViewMatrix() * *g_Camera.GetProjMatrix();
V( g_pEffect->SetMatrix( g_hWorldViewProj, &mWorldViewProjection) );
V( g_pEffect->SetFloatArray( g_hEyePositionInWorld, (const float*)(g_Camera.GetEyePt()), 3) );
V(g_pEffect->SetValue(g_hNormalOffset1, &g_normalMapOffset1, sizeof(D3DXVECTOR2)));
V(g_pEffect->SetValue(g_hNormalOffset2, &g_normalMapOffset2, sizeof(D3DXVECTOR2)));
V(g_pEffect->SetValue(g_hDisplacementOffset1, &g_displacementMapOffset1, sizeof(D3DXVECTOR2)));
V(g_pEffect->SetValue(g_hDisplacementOffset2, &g_displacementMapOffset2, sizeof(D3DXVECTOR2)));
UINT iPass, cPasses;
V( g_pEffect->Begin( &cPasses, 0 ) );
for( iPass = 0; iPass < cPasses ; iPass++ )
{
V( g_pEffect->BeginPass( iPass ) );
g_pMesh->DrawSubset(0);
V( g_pEffect->EndPass() );
}
V( g_pEffect->End() );
#endif
V( pd3dDevice->EndScene() );
}
}
//--------------------------------------------------
// Handle messages to the application
//--------------------------------------------------
LRESULT CALLBACK MsgProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam,
bool* pbNoFurtherProcessing, void* pUserContext )
{
#if phase1
g_Camera.HandleMessages( hWnd, uMsg, wParam, lParam );
#endif
return 0;
}
//--------------------------------------------------
// Release D3D9 resources created in the OnD3D9ResetDevice callback
//--------------------------------------------------
void CALLBACK OnD3D9LostDevice( void* pUserContext )
{
#if phase2
SAFE_RELEASE(g_pVB);
SAFE_RELEASE(g_pIB);
#endif
#if phase3
SAFE_RELEASE(g_pVertexDecl);
#endif
#if phase4
if( g_pEffect )
g_pEffect->OnLostDevice();
#endif
}
//--------------------------------------------------
// Release D3D9 resources created in the OnD3D9CreateDevice callback
//--------------------------------------------------
void CALLBACK OnD3D9DestroyDevice( void* pUserContext )
{
#if phase2
SAFE_RELEASE(g_pCubeTex);
#endif
#if phase3
SAFE_RELEASE(g_pMesh);
SAFE_RELEASE(g_pNormalTex1);
SAFE_RELEASE(g_pNormalTex2);
SAFE_RELEASE(g_pDisplacementTex1);
SAFE_RELEASE(g_pDisplacementTex2);
#endif
#if phase4
SAFE_RELEASE(g_pEffect);
#endif
}
//--------------------------------------------------
// Initialize everything and go into a render loop
//--------------------------------------------------
INT WINAPI wWinMain( HINSTANCE, HINSTANCE, LPWSTR, int )
{
// Enable run-time memory check for debug builds.
#if defined(DEBUG) | defined(_DEBUG)
_CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
#endif

// Set the callback functions
DXUTSetCallbackD3D9DeviceAcceptable( IsD3D9DeviceAcceptable );
DXUTSetCallbackD3D9DeviceCreated( OnD3D9CreateDevice );
DXUTSetCallbackD3D9DeviceReset( OnD3D9ResetDevice );
DXUTSetCallbackD3D9FrameRender( OnD3D9FrameRender );
DXUTSetCallbackD3D9DeviceLost( OnD3D9LostDevice );
DXUTSetCallbackD3D9DeviceDestroyed( OnD3D9DestroyDevice );
DXUTSetCallbackDeviceChanging( ModifyDeviceSettings );
DXUTSetCallbackMsgProc( MsgProc );
DXUTSetCallbackFrameMove( OnFrameMove );
// TODO: Perform any application-level initialization here
// Initialize DXUT and create the desired Win32 window and Direct3D device for the application
DXUTInit( true, true ); // Parse the command line and show msgboxes
DXUTSetHotkeyHandling( true, true, true ); // handle the default hotkeys
DXUTSetCursorSettings( true, true ); // Show the cursor and clip it when in full screen
DXUTCreateWindow( L"3D_Shader_DisplacementMapping" );
DXUTCreateDevice( true, 1024, 768 );
// Start the render loop
DXUTMainLoop();
// TODO: Perform any application-level cleanup here
return DXUTGetExitCode();
}
/*------------------------
DisplacementMapping.fx -- implements displacement mapping
(c) Seamanj.2013/9/2
------------------------*/
struct Material
{
float4 ambient;
float4 diffuse;
float4 specular;
float specularPower;
};
struct DirectionalLight
{
float4 ambient;
float4 diffuse;
float4 specular;
float3 directionInWorld;
};
uniform extern float4x4 g_mWorld;
uniform extern float4x4 g_mWorldInv;
uniform extern float4x4 g_mWorldViewProj;
uniform extern float3 g_eyePositionInWorld;
uniform extern DirectionalLight g_structDirectionalLight;
uniform extern Material g_structMaterial;
// Two normal maps and displacement maps.
uniform extern texture g_texNormalMap1;
uniform extern texture g_texNormalMap2;
uniform extern texture g_texDisplacementMap1;
uniform extern texture g_texDisplacementMap2;
// Texture coordinate offset vectors for scrolling
// normal maps and displacement maps.
uniform extern float2 g_normalOffset1;
uniform extern float2 g_normalOffset2;
uniform extern float2 g_DisplacementOffset1;
uniform extern float2 g_DisplacementOffset2;
// User-defined scaling factors to scale the heights
// sampled from the displacement map into a more convenient
// range.
uniform extern float2 g_scaleHeights;
// Space between grids in x,z directions in local space
// used for finite differencing.
uniform extern float2 g_delta;
// Shouldn't be hardcoded, but ok for demo.
static const float DISPLACEMENTMAP_SIZE = 128.0f;
static const float DISPLACEMENTMAP_DELTA = 1.0f / DISPLACEMENTMAP_SIZE;
sampler g_samNormalMap1 = sampler_state
{
Texture = <g_texNormalMap1>;
MinFilter = ANISOTROPIC;
MaxAnisotropy = 12;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};
sampler g_samNormalMap2 = sampler_state
{
Texture = <g_texNormalMap2>;
MinFilter = ANISOTROPIC;
MaxAnisotropy = 12;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};
sampler g_samDisplacementMap1 = sampler_state
{
Texture = <g_texDisplacementMap1>;
MinFilter = POINT;
MagFilter = POINT;
MipFilter = POINT;
AddressU = WRAP;
AddressV = WRAP;
};
sampler g_samDisplacementMap2 = sampler_state
{
Texture = <g_texDisplacementMap2>;
MinFilter = POINT;
MagFilter = POINT;
MipFilter = POINT;
AddressU = WRAP;
AddressV = WRAP;
};
struct OutputVS
{
float4 posInHomogeneous : POSITION0;
float3 toEyeInTangent : TEXCOORD0;
float3 lightDirectionInTangent : TEXCOORD1;
float2 texcoord1 : TEXCOORD2;
float2 texcoord2 : TEXCOORD3;
};
float3 CartesianToSpherical(float3 cartesianCoord, float radius)
{
float gamma,theta;
gamma = cartesianCoord.x / (g_delta.x * 127 / 2) * 3.14159265;
theta = (g_delta.y * 127 / 2 - cartesianCoord.z) / (g_delta.y * 127 / 2) * 3.14159265 / 2;
float sinGamma, cosGamma, sinTheta, cosTheta;
sincos(gamma, sinGamma, cosGamma);
sincos(theta, sinTheta, cosTheta);
float3 sphericalCoord;
radius = radius + cartesianCoord.y ;
sphericalCoord.x = radius * sinTheta * cosGamma;
sphericalCoord.z = radius * sinTheta * sinGamma;
sphericalCoord.y = radius * cosTheta;
return sphericalCoord;
}
float DoDisplacementMapping(float2 texCoord1, float2 texCoord2)
{
// Transform to texel space
float2 texelPos = DISPLACEMENTMAP_SIZE * texCoord1;
// Determine the lerp amounts.
float2 lerps = frac(texelPos);
float height1[4];
// The displacement texture looks cyan (B and G are both 1), so the height is stored in the R channel.
height1[0] = tex2Dlod(g_samDisplacementMap1, float4(texCoord1, 0.0f, 0.0f)).r;
height1[1] = tex2Dlod(g_samDisplacementMap1, float4(texCoord1, 0.0f, 0.0f)+float4(DISPLACEMENTMAP_DELTA, 0.0f, 0.0f, 0.0f)).r;
height1[2] = tex2Dlod(g_samDisplacementMap1, float4(texCoord1, 0.0f, 0.0f)+float4(0.0f, DISPLACEMENTMAP_DELTA, 0.0f, 0.0f)).r;
height1[3] = tex2Dlod(g_samDisplacementMap1, float4(texCoord1, 0.0f, 0.0f)+float4(DISPLACEMENTMAP_DELTA, DISPLACEMENTMAP_DELTA, 0.0f, 0.0f)).r;
// The values sampled here lie in the range [0, 1].
// Filter displacement map:
float h1 = lerp( lerp( height1[0], height1[1], lerps.x ),
lerp( height1[2], height1[3], lerps.x ),
lerps.y );

texelPos = DISPLACEMENTMAP_SIZE * texCoord2;
lerps = frac(texelPos);
float height2[4];
height2[0] = tex2Dlod(g_samDisplacementMap2, float4(texCoord2, 0.0f, 0.0f)).r;
height2[1] = tex2Dlod(g_samDisplacementMap2, float4(texCoord2, 0.0f, 0.0f)+float4(DISPLACEMENTMAP_DELTA, 0.0f, 0.0f, 0.0f)).r;
height2[2] = tex2Dlod(g_samDisplacementMap2, float4(texCoord2, 0.0f, 0.0f)+float4(0.0f, DISPLACEMENTMAP_DELTA, 0.0f, 0.0f)).r;
height2[3] = tex2Dlod(g_samDisplacementMap2, float4(texCoord2, 0.0f, 0.0f)+float4(DISPLACEMENTMAP_DELTA, DISPLACEMENTMAP_DELTA, 0.0f, 0.0f)).r;
// Filter displacement map:
float h2 = lerp( lerp( height2[0], height2[1], lerps.x ),
lerp( height2[2], height2[3], lerps.x ),
lerps.y );
// Sum and scale the sampled heights.
return g_scaleHeights.x * h1 + g_scaleHeights.y * h2;
}
OutputVS myVertexEntry(float3 positionInLocal : POSITION0,
float2 scaledTexCoord : TEXCOORD0,     // used by the normal maps
float2 normalizedTexCoord : TEXCOORD1) // used by the displacement maps
{
// Zero out our output.
OutputVS outVS = (OutputVS)0;
// Scroll vertex texture coordinates to animate waves.
float2 DisplacementTexCoord1 = normalizedTexCoord + g_DisplacementOffset1;
float2 DisplacementTexCoord2 = normalizedTexCoord + g_DisplacementOffset2;
// Set y-coordinate of water grid vertices based on displacement mapping.
positionInLocal.y = DoDisplacementMapping(DisplacementTexCoord1, DisplacementTexCoord2);
// Estimate TBN-basis using finite differencing in local space:
// 'left' is the displaced height one texel along +u, 'front' one texel along +v.
float left = DoDisplacementMapping(DisplacementTexCoord1 + float2(DISPLACEMENTMAP_DELTA, 0.0f),
DisplacementTexCoord2 + float2(DISPLACEMENTMAP_DELTA, 0.0f));
float front = DoDisplacementMapping(DisplacementTexCoord1 + float2(0.0f, DISPLACEMENTMAP_DELTA),
DisplacementTexCoord2 + float2(0.0f, DISPLACEMENTMAP_DELTA));
float3x3 TBN;
TBN[0] = normalize(float3(1.0f, (left - positionInLocal.y)/g_delta.x, 0.0f));   // Tangent
TBN[1] = normalize(float3(0.0f, (front - positionInLocal.y)/g_delta.y, -1.0f)); // Binormal
TBN[2] = normalize(cross(TBN[0], TBN[1]));                                      // Normal
// Matrix transforms from object space to tangent space.
float3x3 toTangentSpace = transpose(TBN);
// Transform eye position to local space.
float3 eyePositionInLocal = mul(float4(g_eyePositionInWorld, 1.0f), g_mWorldInv).xyz;
// Transform to-eye vector to tangent space.
float3 toEyeInLocal = eyePositionInLocal - positionInLocal;
outVS.toEyeInTangent = mul(toEyeInLocal, toTangentSpace);
// Transform light direction to tangent space.
float3 lightDirectionInLocal = mul(float4(g_structDirectionalLight.directionInWorld, 0.0f), g_mWorldInv).xyz;
outVS.lightDirectionInTangent = mul(lightDirectionInLocal, toTangentSpace);
// Wrap the flat grid onto a sphere; comment this call out for the planar version.
positionInLocal = CartesianToSpherical(positionInLocal, 30.0f);
// Transform to homogeneous clip space.
outVS.posInHomogeneous = mul(float4(positionInLocal, 1.0f), g_mWorldViewProj);
// Scroll texture coordinates.
outVS.texcoord1 = scaledTexCoord + g_normalOffset1;
outVS.texcoord2 = scaledTexCoord + g_normalOffset2;
// Done--return the output.
return outVS;
}
float4 myPixelEntry(float3 toEyeInTangent : TEXCOORD0,
float3 lightDirectionInTangent : TEXCOORD1,
float2 texcoord1 : TEXCOORD2,
float2 texcoord2 : TEXCOORD3) : COLOR
{
// Interpolated normals can become unnormal--so normalize.
// Note that toEyeW and normalW do not need to be normalized
// because they are just used for a reflection and environment
// map look-up and only direction matters.
toEyeInTangent = normalize(toEyeInTangent);
lightDirectionInTangent = normalize(lightDirectionInTangent);
// Light vector is opposite the direction of the light.
float3 toLightInTangent = -lightDirectionInTangent;
// Sample the normal maps.
float3 normalInTangent1 = tex2D(g_samNormalMap1, texcoord1);
float3 normalInTangent2 = tex2D(g_samNormalMap2, texcoord2);
// Expand from [0, 1] compressed interval to true [-1, 1] interval.
normalInTangent1 = 2.0f * normalInTangent1 - 1.0f;
normalInTangent2 = 2.0f * normalInTangent2 - 1.0f;
// Average the two vectors.
float3 normalInTangent = normalize( 0.5f * (normalInTangent1 + normalInTangent2) );
// Compute the reflection vector.
float3 r = reflect(lightDirectionInTangent, normalInTangent);
// Determine how much (if any) specular light makes it into the eye.
float s = pow(max(dot(r, toEyeInTangent), 0.0f), g_structMaterial.specularPower);
// Determine the diffuse light intensity that strikes the vertex.
float d = max(dot(toLightInTangent, normalInTangent), 0.0f);
// If the diffuse light intensity is low, kill the specular lighting term.
// It doesn't look right to add specular light when the surface receives
// little diffuse light.
if(d <= 0.0f)
s = 0.0f;
// Compute the ambient, diffuse and specular terms separately.
float3 spec = s * (g_structMaterial.specular * g_structDirectionalLight.specular).rgb;
float3 diffuse = d * (g_structMaterial.diffuse * g_structDirectionalLight.diffuse).rgb;
float3 ambient = g_structMaterial.ambient * g_structDirectionalLight.ambient;
float3 final = ambient + diffuse + spec;
// Output the color and the alpha.
return float4(final, g_structMaterial.diffuse.a);
}
technique myTech
{
pass P0
{
//FillMode = WIREFRAME;
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_3_0 myVertexEntry();
pixelShader = compile ps_3_0 myPixelEntry();
}
}
All right, that's a wrap; time to sleep. In the next few days I want to get a Perlin noise implementation written!
Click here to download the executable and the related source code.