
Good day, I am trying to display real-time stereo video using NVIDIA 3D Vision and two IP cameras. I am totally new to DirectX, but I have worked through some tutorials and other questions on this and other sites. For now, I am displaying two static bitmaps for the left and right eyes; these will be replaced by frames from my cameras once I have got this part of my program working. The question "NV_STEREO_IMAGE_SIGNATURE and DirectX 10/11 (nVidia 3D Vision)" has helped me quite a bit, but I am still struggling to get my program working as it should. What I am finding is that my shutter glasses switch on as they should, but only the image for the right eye gets displayed, while the left eye remains blank (except for the mouse cursor).

Here is my code for generating the stereo images:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.Windows;
using SlimDX.DXGI;

using Device = SlimDX.Direct3D11.Device;            // Make sure we use DX11
using Resource = SlimDX.Direct3D11.Resource;

namespace SlimDxTest2
{
static class Program
{
    private static Device device;               // DirectX11 Device
    private static int Count;                   // Just to make sure things are being updated

    // The NVSTEREO header. 
    static byte[] stereo_data = new byte[] {0x4e, 0x56, 0x33, 0x44,   //NVSTEREO_IMAGE_SIGNATURE         = 0x4433564e; 
    0x00, 0x0F, 0x00, 0x00,                                           //Screen width * 2 = 1920*2 = 3840 = 0x00000F00; 
    0x38, 0x04, 0x00, 0x00,                                           //Screen height = 1080             = 0x00000438; 
    0x20, 0x00, 0x00, 0x00,                                           //dwBPP = 32                       = 0x00000020; 
    0x02, 0x00, 0x00, 0x00};                                          //dwFlags = SIH_SCALE_TO_FIT       = 0x00000002

    [STAThread]
    static void Main()
    {

        Bitmap left_im = new Bitmap("Blue.png");        // Read in Bitmaps
        Bitmap right_im = new Bitmap("Red.png");

        // Device creation 
        var form = new RenderForm("Stereo test") { ClientSize = new Size(1920, 1080) };
        var desc = new SwapChainDescription()
        {
            BufferCount = 1,
            ModeDescription = new ModeDescription(1920, 1080, new Rational(120, 1), Format.R8G8B8A8_UNorm),
            IsWindowed = false, //true,
            OutputHandle = form.Handle,
            SampleDescription = new SampleDescription(1, 0),
            SwapEffect = SwapEffect.Discard,
            Usage = Usage.RenderTargetOutput
        };

        SwapChain swapChain;
        Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, desc, out device, out swapChain);

        RenderTargetView renderTarget;          // create a view of our render target, which is the backbuffer of the swap chain we just created
        using (var resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
            renderTarget = new RenderTargetView(device, resource);

        var context = device.ImmediateContext;                  // set up a viewport
        var viewport = new Viewport(0.0f, 0.0f, form.ClientSize.Width, form.ClientSize.Height);
        context.OutputMerger.SetTargets(renderTarget);
        context.Rasterizer.SetViewports(viewport);

        // prevent DXGI handling of alt+enter, which doesn't work properly with Winforms
        using (var factory = swapChain.GetParent<Factory>())
            factory.SetWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);

        form.KeyDown += (o, e) =>                   // handle alt+enter ourselves
        {
            if (e.Alt && e.KeyCode == Keys.Enter)
                swapChain.IsFullScreen = !swapChain.IsFullScreen;
        };

        form.KeyDown += (o, e) =>                   // Alt + X -> Exit Program
        {
            if (e.Alt && e.KeyCode == Keys.X)
            {
                form.Close();
            }
        };

        context.ClearRenderTargetView(renderTarget, Color.Green);       // Fill Screen with specified colour

        Texture2DDescription stereoDesc = new Texture2DDescription()
        {
            ArraySize = 1,
            Width = 3840,
            Height = 1081,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Write,
            Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
            OptionFlags = ResourceOptionFlags.None,
            Usage = ResourceUsage.Staging,
            MipLevels = 1,
            SampleDescription = new SampleDescription(1, 0)
        };

        // Main Loop 
        MessagePump.Run(form, () =>
        {
            Texture2D texture_stereo =  Make3D(left_im, right_im);      // Create Texture from two bitmaps in memory
            ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };
            context.CopySubresourceRegion(texture_stereo, 0, stereoSrcBox, renderTarget.Resource, 0, 0, 0, 0);
            texture_stereo.Dispose();

            swapChain.Present(0, PresentFlags.None);
        });

        // Dispose resources 

        swapChain.IsFullScreen = false;     // Required before swapchain dispose
        device.Dispose();
        swapChain.Dispose();
        renderTarget.Dispose();

    }



    static Texture2D Make3D(Bitmap leftBmp, Bitmap rightBmp)
    {
        var context = device.ImmediateContext;
        Bitmap left2 = leftBmp.Clone(new RectangleF(0, 0, leftBmp.Width, leftBmp.Height), PixelFormat.Format32bppArgb);     // Change bmp to 32bit ARGB
        Bitmap right2 = rightBmp.Clone(new RectangleF(0, 0, rightBmp.Width, rightBmp.Height), PixelFormat.Format32bppArgb);

        // Show FrameCount on screen: (To test)
        Graphics left_graph = Graphics.FromImage(left2);
        left_graph.DrawString("Frame: " + Count.ToString(), new System.Drawing.Font("Arial", 16), Brushes.Black, new PointF(100, 100));
        left_graph.Dispose();

        Graphics right_graph = Graphics.FromImage(right2);
        right_graph.DrawString("Frame: " + Count.ToString(), new System.Drawing.Font("Arial", 16), Brushes.Black, new PointF(200, 200));
        right_graph.Dispose();
        Count++;

        Texture2DDescription desc2d = new Texture2DDescription()
        {
            ArraySize = 1,
            Width = 1920,
            Height = 1080,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Write,
            Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
            OptionFlags = ResourceOptionFlags.None,
            Usage = ResourceUsage.Staging,
            MipLevels = 1,
            SampleDescription = new SampleDescription(1, 0)
        };

        Texture2D leftText2 = new Texture2D(device, desc2d);        // Texture2D for each bmp
        Texture2D rightText2 = new Texture2D(device, desc2d);

        Rectangle rect = new Rectangle(0, 0, left2.Width, left2.Height);
        BitmapData leftData = left2.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        IntPtr left_ptr = leftData.Scan0;
        int left_num_bytes = Math.Abs(leftData.Stride) * leftData.Height;
        byte[] left_bytes = new byte[left_num_bytes];
        byte[] left_bytes2 = new byte[left_num_bytes];

        System.Runtime.InteropServices.Marshal.Copy(left_ptr, left_bytes, 0, left_num_bytes);       // Get Byte array from bitmap
        left2.UnlockBits(leftData);
        DataBox box1 = context.MapSubresource(leftText2, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
        box1.Data.Write(left_bytes, 0, left_bytes.Length);
        context.UnmapSubresource(leftText2, 0);

        BitmapData rightData = right2.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        IntPtr right_ptr = rightData.Scan0;
        int right_num_bytes = Math.Abs(rightData.Stride) * rightData.Height;
        byte[] right_bytes = new byte[right_num_bytes];

        System.Runtime.InteropServices.Marshal.Copy(right_ptr, right_bytes, 0, right_num_bytes);       // Get Byte array from bitmap
        right2.UnlockBits(rightData);
        DataBox box2 = context.MapSubresource(rightText2, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
        box2.Data.Write(right_bytes, 0, right_bytes.Length);
        context.UnmapSubresource(rightText2, 0);

        Texture2DDescription stereoDesc = new Texture2DDescription()
        {
            ArraySize = 1,
            Width = 3840,
            Height = 1081,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Write,
            Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
            OptionFlags = ResourceOptionFlags.None,
            Usage = ResourceUsage.Staging,
            MipLevels = 1,
            SampleDescription = new SampleDescription(1, 0)
        };
        Texture2D stereoTexture = new Texture2D(device, stereoDesc);    // Texture2D to contain stereo images and Nvidia 3DVision Signature

        // Identify the source texture region to copy (all of it) 
        ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };

        // Copy it to the stereo texture 
        context.CopySubresourceRegion(leftText2, 0, stereoSrcBox, stereoTexture, 0, 0, 0, 0);
        context.CopySubresourceRegion(rightText2, 0, stereoSrcBox, stereoTexture, 0, 1920, 0, 0);   // Offset by 1920 pixels

        // Map the staging texture for writing and seek to the last row
        DataBox box = context.MapSubresource(stereoTexture, 0, MapMode.Write, SlimDX.Direct3D11.MapFlags.None);
        box.Data.Seek(stereoTexture.Description.Width * (stereoTexture.Description.Height - 1) * 4, System.IO.SeekOrigin.Begin);
        box.Data.Write(stereo_data, 0, stereo_data.Length);            // Write the NVSTEREO header 
        context.UnmapSubresource(stereoTexture, 0);

        left2.Dispose();
        leftText2.Dispose();
        right2.Dispose();
        rightText2.Dispose();
        return stereoTexture;
    } 

}

}

I have tried various methods of copying the Texture2D containing the stereo image and signature (3840x1081) to the backbuffer, but none of the methods I have tried displays both images... Any help or comments will be much appreciated. Ryan
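For reference, the header array above just encodes five little-endian DWORDs. A small helper like the following (a sketch only, not part of my program; the constants come from the comments next to the byte array) builds the same bytes from the screen size, which makes them easier to check:

static class NvStereoHeader
{
    const uint NVSTEREO_IMAGE_SIGNATURE = 0x4433564e;
    const uint SIH_SCALE_TO_FIT = 0x00000002;

    public static byte[] Build(uint width, uint height, uint bpp)
    {
        byte[] header = new byte[20];                                       // 5 DWORDs, little-endian
        BitConverter.GetBytes(NVSTEREO_IMAGE_SIGNATURE).CopyTo(header, 0);  // signature
        BitConverter.GetBytes(width * 2).CopyTo(header, 4);                 // width of the side-by-side image
        BitConverter.GetBytes(height).CopyTo(header, 8);                    // height (without the signature row)
        BitConverter.GetBytes(bpp).CopyTo(header, 12);                      // bits per pixel
        BitConverter.GetBytes(SIH_SCALE_TO_FIT).CopyTo(header, 16);         // flags
        return header;
    }
}

// Equivalent to the hard-coded array: byte[] stereo_data = NvStereoHeader.Build(1920, 1080, 32);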

Ryan Lucke
  • I have tried going back to Direct3D 9 (so that I can use stretchrect), but now I am having trouble getting the program to run in full screen mode. As soon as I set presentparams.Windowed = false, the program crashes when I create my swapchain. I get the following error: D3DERR_INVALIDCALL (-2005530516). If it helps at all, I am using a Dell XPS17 laptop with built in 3D transmitter... – Ryan Lucke Jul 05 '12 at 14:49
  • Ok, so I have managed to get it working using SlimDX and Direct3D 9. I only create a device using my presentparams, and do not create a swapchain (which was causing my program to crash when trying to start up in fullscreen mode). I thought a swapchain was required when creating a device, but it seems not. For now, I will stick to Direct3D 9 and get the rest of my program working (connecting the two cameras and getting everything synchronised etc.). It will still be nice to get it working in Direct3D11, but that will have to wait. – Ryan Lucke Jul 06 '12 at 08:59
  • In the main loop you have the ResourceRegion with Bottom = 1080 and Right = 1920; shouldn't Right be 1920*2? – Pedro.The.Kid Feb 04 '14 at 10:15

4 Answers


If using DirectX 11.1 is an option, there is a much easier way to enable stereoscopic features, without having to rely on nVidia's byte wizardry. Basically, you create a SwapChain1 instead of a regular SwapChain, and then it is as simple as setting Stereo to true.

Have a look at this post I made; it shows you how to create a stereo swap chain. The code is a port to C# of Microsoft's own stereo sample. You then have two render targets, which makes things much simpler. Before rendering you do something like this:

void RenderEye(bool rightEye, ITarget target)
{
    RenderTargetView currentTarget = rightEye ? target.RenderTargetViewRight : target.RenderTargetView;
    context.OutputMerger.SetTargets(target.DepthStencilView, currentTarget);
    // clear the colour/depth buffers
    // render the scene
    // (called once per eye)
}

where ITarget is an interface for a class providing access to the backbuffer, render targets, etc. That's it, DirectX will take care of everything. Hope this helps.
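The swap-chain creation itself looks roughly like this. This is only a sketch: it assumes SharpDX (SlimDX does not expose DXGI 1.2), a DirectX 11.1 capable runtime, and an existing Direct3D 11 device and form; the names follow SharpDX conventions.

// Sketch: a stereo swap chain through DXGI 1.2 (SharpDX names, not SlimDX).
var stereoSwapDesc = new SharpDX.DXGI.SwapChainDescription1
{
    Width = 1920,
    Height = 1080,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    Stereo = true,                                          // the key flag
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = SharpDX.DXGI.Usage.RenderTargetOutput,
    BufferCount = 2,
    Scaling = SharpDX.DXGI.Scaling.Stretch,
    SwapEffect = SharpDX.DXGI.SwapEffect.FlipSequential     // stereo requires a flip-model swap chain
};

using (var dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device2>())
using (var adapter = dxgiDevice.Adapter)
using (var factory = adapter.GetParent<SharpDX.DXGI.Factory2>())
{
    var swapChain = new SharpDX.DXGI.SwapChain1(factory, device, form.Handle, ref stereoSwapDesc);
    // The backbuffer is now a two-slice texture array: create one RenderTargetView
    // per array slice (slice 0 = left eye, slice 1 = right eye).
}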

TheWanderer

Try creating the backbuffer with width = 1920 rather than 3840. Stretch each image to half its width and put them side by side.
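For example, the squeeze can be done on the CPU with GDI+, which the question already uses (a sketch only; the 960-pixel half width assumes a 1920x1080 backbuffer, and left2/right2 are the bitmaps from the question):

// Sketch: squeeze each eye to half width so the pair fits a 1920-wide backbuffer.
Bitmap sideBySide = new Bitmap(1920, 1080, PixelFormat.Format32bppArgb);
using (Graphics g = Graphics.FromImage(sideBySide))
{
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
    g.DrawImage(left2, new Rectangle(0, 0, 960, 1080));      // left eye in the left half
    g.DrawImage(right2, new Rectangle(960, 0, 960, 1080));   // right eye in the right half
}
// sideBySide can then be uploaded to a single 1920x1080 texture as before.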

Horonchik
  • Thanks for the advice. As I mentioned in the comments above, I have got it working with D3D 9. My backbuffer is set with width = 1920, but the problem I have is that there is no equivalent to StretchRectangle() in D3D 10 and 11. How do you stretch the 3840 pixel wide image down to the 1920 pixel wide backbuffer without StretchRectangle()? I tried using CopySubresourceRegion(), but only the source size can be specified and I can't get it to work... – Ryan Lucke Jul 19 '12 at 11:37

I remember seeing this exact same question a couple of days ago while searching the Nvidia Developer forums. Unfortunately the forums are down due to a recent hacker attack. I remember that the OP on that thread was able to get it working with DX11 and SlimDX using the signature hack. You do not use the StretchRectangle method; it was something like CreateResourceRegion(), but not exactly that, I can't remember. It might be the methods CopyResource() or CopySubresourceRegion() found in this similar thread on Stack Overflow: Copy Texture to Texture
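Roughly, those two calls look like this on the SlimDX DeviceContext the question already uses (a sketch only; the texture names are placeholders and the 3840x1081 region is just taken from the question):

// Whole-resource copy: source and destination must have matching dimensions and formats.
context.CopyResource(sourceTexture, destTexture);

// Sub-region copy: copies the given region of the source to an offset in the destination.
var region = new ResourceRegion { Left = 0, Right = 3840, Top = 0, Bottom = 1081, Front = 0, Back = 1 };
context.CopySubresourceRegion(sourceTexture, 0, region, destTexture, 0, 0, 0, 0);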

Alexander Van Atta

Also, are you rendering the image continuously, or at least a few times? I was doing the same thing in DX9 and had to render 3 frames before the driver recognized it as 3D Vision. Did your glasses kick on? Is your backbuffer (width*2) x (height+1), and are you writing to the backbuffer like so:

__________________________
|           |            |
|  img1     |     img2   |
|           |            |
--------------------------
|_______signature________|   where this last row is 1 pixel tall
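In my DX9 version the present loop was roughly this (a sketch from memory using SlimDX Direct3D 9 names; device, stereoSurface and backBuffer are placeholders, with stereoSurface laid out as in the diagram above):

// Present a few frames so the driver has a chance to spot the NVSTEREO signature row.
for (int i = 0; i < 3; i++)
{
    device.StretchRectangle(stereoSurface, backBuffer, TextureFilter.None);   // driver checks the last row during the stretch
    device.Present();
}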
Alexander Van Atta
  • Hi. Yes I am rendering continuously. As I mentioned in my comments above I got it working with DX9. I have now also got the two cameras running and it works nicely. The intermediate buffer I work with looks as you describe above, but the backbuffer to which I write this using StretchRectangle() is only 1920x1080, and not double the width and height + 1. I think the nvidia driver detects it immediately, but it takes a while for the IR transmitter to switch on and the glasses to start working. I am not going to bother further at this stage to try getting it to work in DX11. – Ryan Lucke Jul 24 '12 at 08:38