I'm working on an OpenVR project and am in the process of writing a nice OOP binding layer on top of the flat bindings that OpenVR generates. The layer on top is completely platform agnostic and a standalone library. I won't make them public yet, but here's a rundown of what I do to get stuff working:
- OpenVR.IsHmdPresent (this isn't actually reliable in my experience)
- OpenVR.Init (error code 108 means no HMD is connected, so you can still detect this even if the previous method fails)
- For i in 0 .. OpenVR.k_unMaxTrackedDeviceCount, check CVRSystem.GetControllerRoleForTrackedDeviceIndex to find the left and right controllers.
- CVRSystem.GetRecommendedRenderTargetSize to get the width and height of the render targets used for the left and right eye.
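The steps above might look like this in code. This is a hedged sketch, assuming Valve's generated C# bindings (openvr_api.cs) and code running inside a MonoGame Game class; the variable names (leftIndex, leftEyeRt, etc.) are my own:

```csharp
// Initialize OpenVR as a scene application.
var initError = EVRInitError.None;
var system = OpenVR.Init(ref initError, EVRApplicationType.VRApplication_Scene);
if (initError != EVRInitError.None)
{
    // Error code 108 (HmdNotFound) means no HMD is connected.
    throw new Exception($"OpenVR init failed: {initError}");
}

// Scan all tracked device slots for the left and right controllers.
uint leftIndex = OpenVR.k_unTrackedDeviceIndexInvalid;
uint rightIndex = OpenVR.k_unTrackedDeviceIndexInvalid;
for (uint i = 0; i < OpenVR.k_unMaxTrackedDeviceCount; i++)
{
    var role = system.GetControllerRoleForTrackedDeviceIndex(i);
    if (role == ETrackedControllerRole.LeftHand) leftIndex = i;
    else if (role == ETrackedControllerRole.RightHand) rightIndex = i;
}

// Create per-eye render targets at the recommended size.
uint width = 0, height = 0;
system.GetRecommendedRenderTargetSize(ref width, ref height);
var leftEyeRt = new RenderTarget2D(GraphicsDevice, (int) width, (int) height,
    false, SurfaceFormat.Color, DepthFormat.Depth24);
var rightEyeRt = new RenderTarget2D(GraphicsDevice, (int) width, (int) height,
    false, SurfaceFormat.Color, DepthFormat.Depth24);
```

Note that controller roles can change at runtime (e.g. a controller turns on later), so you may want to re-check them each frame rather than only at startup.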
The complete transformation should be Model * View * Eye^-1 * Projection, where Eye is the result of CVRSystem.GetEyeToHeadTransform and Projection is the result of CVRSystem.GetProjectionMatrix. For non-VR rendering (or just to make sure you render anything to the HMD at all) use the identity matrix for Eye and a typical perspective projection, i.e.
var aspect = GraphicsDevice.Viewport.AspectRatio;
projection = Matrix.CreatePerspectiveFieldOfView(FieldOfView, aspect, NearPlane, FarPlane);
Probably a good idea to run without VR first, to make sure the issue is really with the VR setup.
View should include the HMD transform, see below. Obviously this should also be the identity matrix without VR.
Matrices returned by OpenVR are laid out for column vectors, while MonoGame uses row vectors. Make sure you transpose them when converting to a MonoGame Matrix:
public static Matrix ToMg(this HmdMatrix34_t mat)
{
    // Transpose the 3x4 OpenVR matrix; its last column (m3, m7, m11) holds
    // the translation, which becomes the fourth row of the MonoGame matrix.
    return new Matrix(
        mat.m0, mat.m4, mat.m8,  0.0f,
        mat.m1, mat.m5, mat.m9,  0.0f,
        mat.m2, mat.m6, mat.m10, 0.0f,
        mat.m3, mat.m7, mat.m11, 1.0f);
}

public static Matrix ToMg(this HmdMatrix44_t mat)
{
    // Same idea for the full 4x4 matrix: a straight transpose.
    return new Matrix(
        mat.m0, mat.m4, mat.m8,  mat.m12,
        mat.m1, mat.m5, mat.m9,  mat.m13,
        mat.m2, mat.m6, mat.m10, mat.m14,
        mat.m3, mat.m7, mat.m11, mat.m15);
}
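With these helpers, the per-eye matrices can be composed as described above. A sketch, assuming `system` is the CVRSystem returned by OpenVR.Init and `view` is the inverted HMD pose; note that older OpenVR versions also take a graphics API convention parameter in GetProjectionMatrix:

```csharp
// Per-eye projection and eye-to-head transform, converted to MonoGame matrices.
var proj = system.GetProjectionMatrix(EVREye.Eye_Left, nearPlane, farPlane).ToMg();
var eyeInv = Matrix.Invert(system.GetEyeToHeadTransform(EVREye.Eye_Left).ToMg());

// Model * View * Eye^-1 * Projection (row-vector convention).
var viewProj = view * eyeInv * proj;
// Pass `world * viewProj` (or the factors separately) to your effects.
```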
You must call OpenVR.Compositor.WaitGetPoses every frame. This updates the poses of tracked devices like the controllers and the HMD. I tried testing my setup before implementing this part, and OpenVR did not render anything until I called this method. The index of the HMD is always 0, so the first pose in the returned arrays can be used for the HMD view.
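A sketch of the per-frame pose update, again assuming Valve's C# bindings:

```csharp
// Call once per frame, before rendering. WaitGetPoses also blocks to sync
// your frame loop to the compositor.
var renderPoses = new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];
var gamePoses = new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];
OpenVR.Compositor.WaitGetPoses(renderPoses, gamePoses);

// Index 0 (OpenVR.k_unTrackedDeviceIndex_Hmd) is always the HMD.
var hmdPose = renderPoses[OpenVR.k_unTrackedDeviceIndex_Hmd];
if (hmdPose.bPoseIsValid)
{
    // The pose is a device-to-world transform, so invert it for the view matrix.
    var view = Matrix.Invert(hmdPose.mDeviceToAbsoluteTracking.ToMg());
}
```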
Render to your eye RenderTargets like you normally would, but with their respective eye transformations. When you're done, you'll need to submit them to OpenVR. It's easiest to let OpenVR handle the distortion and simply submit a handle to your render targets. You can use reflection to get the native handle of a Texture2D. For DesktopGL you can use the following snippet:
var fieldInfo = typeof(Texture2D).GetField("glTexture", System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic);
var handle = new IntPtr((int) fieldInfo.GetValue(myEyeRenderTarget));
var tex = new Texture_t();
tex.handle = handle;
tex.eType = ETextureType.OpenGL;
tex.eColorSpace = EColorSpace.Auto;
var texBounds = new VRTextureBounds_t();
texBounds.uMin = 0;
texBounds.uMax = 1;
texBounds.vMin = 0;
texBounds.vMax = 1;
OpenVR.Compositor.Submit(eye, ref tex, ref texBounds, EVRSubmitFlags.Submit_Default);
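Both eyes need to be submitted each frame. Wrapping the snippet above, that might look like this; GetGlHandle is a hypothetical helper containing the reflection code:

```csharp
// Submit both eye render targets to the compositor every frame.
Submit(EVREye.Eye_Left, leftEyeRt);
Submit(EVREye.Eye_Right, rightEyeRt);

void Submit(EVREye eye, RenderTarget2D rt)
{
    var tex = new Texture_t
    {
        handle = GetGlHandle(rt),   // hypothetical wrapper around the reflection snippet
        eType = ETextureType.OpenGL,
        eColorSpace = EColorSpace.Auto,
    };
    // Full texture bounds: use the whole render target for this eye.
    var bounds = new VRTextureBounds_t { uMin = 0, uMax = 1, vMin = 0, vMax = 1 };
    OpenVR.Compositor.Submit(eye, ref tex, ref bounds, EVRSubmitFlags.Submit_Default);
}
```

Submit returns an EVRCompositorError, which is worth checking while debugging (e.g. an invalid texture handle shows up there rather than as an exception).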
Hope that helps! If you want to see some of the code, feel free to DM me.