Are the SI-2K and the SI-2K MINI the same product?

No. The SI-2K is an all-in-one full-featured digital cinema camera with embedded SiliconDVR camera control and recording software, and a removable camera head. The SI-2K MINI is only the camera-head portion of the SI-2K, and must be tethered over gigabit ethernet to a controlling workstation or laptop system running SiliconDVR.

 

What is the difference between the SI-2K MINI and SiliconDVR?

The SI-2K MINI is the hardware imaging component of the Silicon Imaging camera system, while SiliconDVR is the software component that drives and controls the camera hardware. SiliconDVR is designed to be both an embedded in-the-field recorder and also suitable for studio or "video-village" style desktop environments.

 

If the SI-2K MINI captures pixel data with a 12-bit A/D converter, but only records at 10-bit, isn't dynamic range lost?

No. While we are not using a logarithmic file format for saving the information off the camera head, the gamma-correction LUT we use was designed to visually maintain the entire dynamic range the camera head can deliver in a 12-to-10-bit conversion workflow. Also, because we are capturing a 10-bit file from a high-dynamic-range, low-noise sensor, there is much more room for under-exposure than with 8-bit tape-based formats and CCD-based HD cameras, so the user can preserve highlight detail by under-exposing the camera without damaging any information in the shadows; the noise floor is the only limit on dynamic range. We are not following the typical ITU-R Rec. 709 transfer curve, which clips the highlights and throws away the over-exposure headroom of the sensor, nor are we relying on dynamic knee controls to automatically squeeze as much visual dynamic range as possible into the recorded image. The LUT, along with its associated fixed knee, is designed to transfer the entire dynamic range the sensor captured into the 10-bit CineForm RAW™ file format.

 

What is the dynamic range and sensitivity of the SI-2K MINI?

The dynamic range has been tested by an independent lab and found to be 10.5 stops. The native ISO rating of the 10-bit log RAW data is ISO 250 at 3200K.

Other sensitivity ratings at 3200K are as follows: +3db -- ISO 320, +6db -- ISO 500, +9db -- ISO 640, and +12db -- ISO 1000. Sensitivity does not increase linearly at the higher settings because we implement gain in the analog domain, not the digital domain, so there is a certain amount of non-linearity in the sensitivity curve as the analog gains are increased. The advantage of analog gain is that it is visually "cleaner" for a given ISO than digital gain: the sensor is able to maximize the signal swing digitized by the A/D converter rather than stretching bits that have already been digitized, which can incur quantization noise, especially at higher gains, because once the signal has been digitized, the headroom is fixed by the number of bits and the SNR of the signal.
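
As a rough illustration of that non-linearity: in an idealized system, each +6db of analog gain would exactly double the ISO. A short sketch comparing that ideal doubling against the measured ratings above:

    # Ideal behavior: each +6db of analog gain doubles sensitivity.
    BASE_ISO = 250  # native rating at 0db, 3200K
    measured = {3: 320, 6: 500, 9: 640, 12: 1000}
    for db, iso in measured.items():
        ideal = BASE_ISO * 2 ** (db / 6)  # 354, 500, 707, 1000
        print(f"+{db}db: measured ISO {iso}, ideal ISO {ideal:.0f}")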

 

What is the "native" white-balance in Kelvins of the SI-2K sensor?

The CMOS sensor in the SI-2K is approximately "daylight" balanced for 5600K (give or take a couple hundred kelvin). This is because the sensor is more sensitive to red (specifically infra-red) light than it is to blue. So for a given scene, by exposing the sensor to more blue light and less red light, the response of the blue channel will be evened out with the response of the red channel.

For instance, in order to achieve a native "white-balance", one would want the color temperature of the light reaching the sensor to produce an equal response in the red, green, and blue channels. If one exposes the sensor to a very warm source, the red response will be very high and the blue response very low, since the sensor is more sensitive to red light . . . and the resulting picture of course will not be white-balanced unless digital gains are applied to the blue channel. As one moves the light source from a warmer to a cooler spectrum, the blue light reaching the sensor increases, creating a greater blue response relative to the red response for that given light source. After a certain threshold, the red response will equal the blue response, and a "native" white-balance point will be achieved that does not require a digital gain in any channel to create an even response across the three color channels. That being said, because the sensor is more sensitive to red light, a higher ISO will be achieved under tungsten light. Of course, along with the higher native ISO under tungsten, the blue channel, because it will have a lower response, will need to be gained up in order to match the response of the red and green channels for proper white-balance. As a result, the blue channel will be noisier.

While daylight is the native white-balance of the sensor (and will give a cleaner blue channel), because the sensor is being exposed to a color temperature it is not most sensitive to, the ISO will be lower (daylight does not contain as much of the red light the sensor responds most strongly to). The SI-2K loses approximately 1/2 f-stop of sensitivity going from tungsten to daylight. Based on the native ISO of the SI-2K being 250 for tungsten, that means an approximate ISO of 160 for daylight-balanced light sources.
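
The 1/2-stop figure can be checked with simple arithmetic (a sketch; the ISO 160 rating is simply the nearest standard value below the computed number):

    tungsten_iso = 250
    daylight_iso = tungsten_iso * 2 ** -0.5  # ~1/2-stop sensitivity loss under daylight
    print(round(daylight_iso))               # ~177 -> rated ~ISO 160 at the next standard step down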

When shooting green-screen or blue-screen footage, it is suggested to shoot with daylight-balanced light sources in order to avoid the higher digital gains (and in turn higher noise) in the blue and green channels that would otherwise be needed to match the response of the red channel. Daylight-balanced light sources will also extend the "useable" dynamic range of the image by around 1/3 to 1/2 f-stop (because the red channel will not clip first).

Shooting with color-balancing filters in front of the camera lens will achieve the same effect under tungsten lighting, but doing so interferes with digital white-balance in post. Color-balancing filters essentially "bake in" the white-balance setting, defeating the purpose of the white-balance metadata that is saved in each recorded file so that it can be further processed or adjusted in post.

 

Why does the perceived sensitivity of the SI-2K change with different white-balance settings?

The sensitivity of the SI-2K differs because white-balance is accomplished through a digital, not analog, gain on the RAW data. This means the sensor's RAW data is adjusted digitally for the optimum white-balance, rather than before the A/D converter. This gives us the flexibility to adjust white-balance in post, because the RAW data from the camera does not have a non-reversible white-balance "baked" into the image. The applied digital gains, though, will affect the perceived luminance sensitivity of the camera: luminance is a weighted sum of red, green, and blue, so as the different channels are gained up and down to achieve an optimum white-balance, a given middle-grey subject will produce different luminance values. The SI-2K is most sensitive to red light, so the perceived sensitivity for a middle-grey subject will be higher under tungsten lighting than under daylight. The important thing to note is that since white-balance is metadata, the actual exposure ISO of the RAW data is the same no matter what the white-balance setting; it is in the conversion to a luminance value for a given white-balance that the apparent exposure values fluctuate for a grey reference. For example, if one were to shoot a given subject and then evaluate a middle-grey card in that scene, one would find that as the white-balance is digitally adjusted through a range of kelvin, the exposure reading on that grey card fluctuates slightly (approximately 1/3 to 1/2 of an f-stop) because of the weighting in the RGB-to-luma conversion.

 

Color balancing in bright sunlight

As noted above, the sensor has a high sensitivity to red. In bright sunlight, which has a high red component, the IR-cut filter in the camera can be overwhelmed, allowing IR to contaminate the red pixels and cause a color imbalance. Rather than increase the IR-cut within the camera at the cost of overall dynamic range, we suggest using an external IR-cut filter in bright sunlight.

 

What is SET BLACK, and how often should I do it?

SET BLACK calibrates the black level of the sensor, correcting any fixed pattern noise and column-to-column variations created by inconsistencies in the analog-to-digital conversion process, as well as removing any hot or deviant pixels. Fixed pattern noise on a CMOS sensor looks like a fixed "screen-door" effect, or like fixed vertical lines running down the image, and is typically visible in darker or gained-up portions of the image. All CMOS sensors have this attribute, but because this fixed pattern noise is a pure black offset, it can easily be removed through the subtraction process that the SET BLACK operation performs. Furthermore, Altasens CMOS sensor designs exhibit extremely low amounts of fixed pattern noise, further enabling the SI-2K to maintain clean blacks after a calibration operation.
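
For illustration, a minimal sketch of the kind of black-frame subtraction SET BLACK performs (the real calibration also builds a deviant-pixel mask, and the details of SiliconDVR's implementation are not public):

    import numpy as np

    def build_black_frame(dark_frames):
        """Average several lens-capped frames into a calibration frame."""
        return np.mean(np.stack(dark_frames), axis=0)

    def subtract_black(raw, black_frame):
        """Remove the fixed-pattern black offset; clamp so values stay non-negative."""
        corrected = raw.astype(np.int32) - black_frame.astype(np.int32)
        return np.clip(corrected, 0, None).astype(np.uint16)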

After a SET BLACK operation, SiliconDVR saves the black frame for that resolution and gain on the disk so that it can be recalled the next time you boot the program or switch modes. This prevents the user from having to repeatedly cover the lens and SET BLACK. It also saves a listing of deviant pixels that are masked out, preventing them from being embedded in the recorded RAW files.

The frequency of SET BLACK operations will depend on the shooting environment and the subject matter. Typically, about five minutes after a cold boot the camera reaches a temperature stabilization point where any further drift is insignificant compared to that which occurred in the first five minutes. Another black recalibration may not be necessary for hours thereafter, especially if one is shooting complex or brighter scenery rather than dimly lit, flat white subject matter (where fixed pattern noise, if apparent, will typically be most visible). If one is shooting in a hot environment, or the environment is changing rapidly, black recalibrations may be necessary. Also, for dark scenes where the shadows are gained up higher, black recalibration may be required more often to make sure that gain does not reveal fixed-pattern inconsistencies in the blacks resulting from temperature variations within the camera as it runs.

 

How do 3D LUT's affect the visual dynamic range of the camera?

The 3D LUT technology from IRIDAS, using their SpeedGrade software, allows the cinematographer to map the greater than 10 f-stops of the SI-2K MINI in any way imaginable. The full floating-point rendering pipeline of the CineForm RAW™ codec means that no pixel information is ever lost, nor does clipping ever occur. Values over white and below the black clip of the 3D LUT can be described and manipulated, even in post, as long as the sensor itself is never saturated. For instance, a DP can create a very contrasty bleach-bypass style look in SpeedGrade and then apply that 3D LUT to the camera footage. The camera will now take on the "look" of that 3D LUT. Because the LUT is simply metadata, values in the high-contrast area are still accessible in post. While values may have seemed to visually clip in the LUT'ed representation of the raw data, over-white values and values below the black clip are still present up to the full dynamic range of the camera. The colorist can then extract detail from seemingly blown-out highlights, because the highlights are not truly "blown out" . . . the underlying full dynamic range of the RAW sensor data always remains intact and manipulable thanks to the non-destructive nature of the SI/IRIDAS/CineForm color-managed pipeline.

 

What is the Interchangeable Mount System (IMS)?

The interchangeable lens mount featured on the SI-2K and SI-2K MINI camera system is a pre-lens-mount quick-lock bayonet system that allows the end-user to quickly change between PL, B4, C, F, and a number of other lens mounts for 16mm, video, and 35mm lenses. It uses the standard Arri shimming system, so the mount can be calibrated at any qualified camera-service location. Because each lens mount is pre-calibrated at the factory (or by a qualified camera repair technician), removing and exchanging lens mounts causes no shift in back-focus and no need for re-collimation. Precise lens centering and the other attributes of a strong lens mount are all maintained as the user seamlessly exchanges mounts to suit their workflow or equipment needs.

 

Do you support the B4-mount (Sony mount) lenses?

Yes, the 2K-B4™ from P+S Technik allows you to mount B4-mount lenses without the optical aberrations associated with placing lenses designed for 3-chip cameras on single-chip designs. B4 lenses are designed for a 2/3" sensor - the size of the SI-2K in 1080p mode. If the camera is used in 2K mode, individual lenses must be checked for vignetting.

 

Are there any C-mount lenses which match the quality of the traditional PL mount optics?

Yes. Fujinon has a series of low-cost C-mount lenses rated for 2/3" sensors up to 5 megapixels that are a good value for the price. Linos has an exceptional set of C-mount lenses that are available with gearing. Schneider lenses have performed very well. Do not use CCTV lenses - these are made for very low resolutions and have extreme chromatic aberrations. Some older cinema lenses may be used, but should be fully tested.

 

Is the SI-2K MINI compatible with all my current film and HD camera optics and accessories?

Yes. The SI-2K MINI features a removable lens mount supporting a variety of PL-mount lenses for Super16 and 16mm film cameras. Adapters for additional lens mounts can be attached to the 28mm lens adapter ring or the base C-mount for alternative lens choices.

The SI-2K MINI is designed for the BP series of Arri baseplates, and contains an internal mount for lightweight 15mm rods. Additional options include a Sony V-mount quick-release base plate for mounting to common Sony/ENG accessories and tripod heads.

 

How far away can I place the remote camera head from the controlling computer platform?

The remote head can be up to 30m from the camera body using copper gigabit ethernet connections. With the addition of a fiber link (such as one from Gefen), distances can be extended to over 1km.

 

Does Silicon Imaging plan on making an S35 size sensor camera?

Silicon Imaging has been developing high-speed and large-format cameras beyond 2/3" chips for several years. We are open to customer feedback, and if our customers demand a larger-format camera with its associated higher entry price point, we can do it. The cost of a camera is usually proportional to the size of the physical pixel array: the bigger the array, the fewer parts can be made on a wafer, and the higher the resulting cost. We designed the SI-2K MINI to satisfy the cost-to-performance needs of the bulk of the current market. There are always those in the market who can afford significant investments for small incremental performance gains in specific areas. We feel the innovative feature-set of the SI-2K MINI provides a satisfactory cost-performance ratio with the added flexibility of a software architecture that can grow as the market's needs change.

 

How is the SI-2K MINI significantly different than products currently on the market?

Currently the HD cameras available on the market in our price range are limited to highly compressed 8-bit codecs with visible compression artifacts, require relatively expensive editing VTR's to read their tapes, and, in the case of HDV, record compressed 16-bit audio. Other cameras that bypass tape and record direct-to-disk use either very expensive drive cartridges or solid-state media that cannot compete with our direct-to-disk approach, which utilizes any modern 2.5" off-the-shelf Serial ATA drive. We at Silicon Imaging feel these compromises by other camera manufacturers, with their 8-bit DCT codecs, can't meet the requirements and flexibility of high-end film-making in the same manner that the 10-bit CineForm RAW™ format can.

 

I've noticed the SI-2K MINI is a single-sensor design. Does this mean that it's using a Bayer mosaic to generate color? If so, does that mean that there's significant resolution loss over 3-Chip prism designs?

There are a number of advantages to the single-sensor design over the traditional 3-chip design. Single-sensor cameras can use much simpler lens designs because the lenses do not need to overcome the artifacts induced by the prism of a 3-chip camera, resulting in cheaper lenses. For our 2/3" sensor we have chosen the PL-mount, the current standard among film-camera designs.

Unlike other single-sensor cameras that rely on FPGA's for real-time demosaicing of the Bayer image, our software-based approach and custom Bayer codec demosaic the image with a sophisticated, iterative non-linear algorithm, resulting in a very accurate, artifact-free image that maintains as much of the original resolution of the camera as possible.

 

How can I approximate the look of 35mm depth-of-field if I'm using the SI-2K and it only has a 2/3" sensor?

Because the SI-2K is a single-sensor design, wider-aperture S16mm film lenses such as the Zeiss Superspeeds, which open to a maximum aperture of f1.2, can be used. This compares to the widest-aperture 3-CCD primes, which are limited to f1.5.

In the comparison below, we have taken a 20mm Zeiss Superspeed S16 lens set at an f-stop of f1.4 and compared it to two common 35mm formats. The subject is 3 meters from the focal plane. The results, using the depth-of-field calculator on the Panavision New Zealand website, are as follows:

  • 2/3" Sensor, 20mm lens, f1.4 (which is possible using Zeiss Superspeeds) - Depth-of-field of 1.39m

  • Panavision Std 35mm HDTV 16:9 TV Trans 0.825x0.464" (CoC=0.001"), f5.6, 45mm lens - Depth-of-field of 1.31m

  • Arri Std 35mm HDTV 16:9 TV Trans 21x11.8mm (CoC=0.025mm), f5.6, 43mm lens - Depth-of-field of 1.43m

The surprising conclusion from these calculations is that the SI-2K's depth-of-field, when using this large-aperture Zeiss S16mm lens, can in fact be the equivalent of a given 35mm format's depth-of-field when shooting at f5.6 in 35mm (for the same FOV). Also, if one were to shoot at f1.2 (which Zeiss Superspeeds can open up to at their widest aperture setting), the depth-of-field on the 2/3" sensor would be equivalent to an f4-f5.6 split in 35mm, since f1.2 is another half-stop wider than f1.4. This does not mean that a 2/3" sensor will always match 35mm film or a 35mm-sized sensor in DOF, but it does demonstrate an advantage to using S16mm prime lenses on the SI-2K: ground-glass converters and other "tricks" are not necessarily needed if one's aim is shallow "35mm-like" DOF. A similar effect to the shallow DOF of a 35mm camera can be achieved by placing Superspeed S16mm optics on the SI-2K and opening them up to their widest apertures.
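
For those who want to verify the numbers, the standard hyperfocal-distance formulas reproduce the figures above to within a few centimeters (different calculators round slightly differently; the CoC for the 2/3" row is not quoted above, so the value used below is an assumption chosen to match the quoted result):

    def total_dof(f, N, c, s):
        """Total depth of field; all lengths in mm. Standard hyperfocal formulas."""
        H = f * f / (N * c) + f                # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)
        far = s * (H - f) / (H - s)            # valid while s < H
        return far - near

    print(total_dof(43, 5.6, 0.025, 3000) / 1000)  # Arri Std 35mm row: ~1.41m vs 1.43m quoted
    print(total_dof(20, 1.4, 0.021, 3000) / 1000)  # 2/3" row, assumed CoC 0.021mm: ~1.38m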

 

When I select 23.976fps mode, why does the shutter default to 1/47th of a second? Is this a bug?

No, this behavior is not a bug. The shutter in the SI-2K defaults to a 180-degree shutter when you load a new mode. The 1/47th of a second setting is a truncated number: the shutter is actually at 1/47.952 of a second, which is half the frame-time of a 23.976fps time-base. For simplicity, the shutter is displayed as 1/47th of a second. You should not manually set the shutter to 1/48th of a second if you are in 23.976fps mode and want a full 180-degree shutter, as doing so will literally set the shutter to that time-period. While for most shooting scenarios the difference between 1/47.952 and 1/48th of a second is not significant, the distinction should be noted, and the user should not be alarmed.

 

How do you convert from a film shutter-based degree/angle nomenclature to a division of one second nomenclature?

To convert from a shutter based on a degree angle (correlating to the spinning reflex mirror of a film camera) to a shutter based on time (1/Xth of a second), use this equation:

    1 / [ ( 360 degrees / Target angle ) * Target Frame Rate ] = Time based shutter (in 1/Xth of a second format)

So for instance, a 180 degree shutter at 24fps will equate to:

    1 / [ ( 360 degrees / 180 degrees ) * 24 ] = 1/48th of a second

A 144 degree shutter angle at 24fps will equate to:

    1 / [ ( 360 degrees / 144 degrees ) * 24 ] = 1/60th of a second
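
The same conversion in code (a simple helper; note the 23.976fps case from the earlier question):

    def shutter_time(angle_deg, fps):
        """Convert a shutter angle at a given frame-rate to seconds."""
        return 1.0 / ((360.0 / angle_deg) * fps)

    print(1 / shutter_time(180, 24))      # 48.0   -> 1/48th of a second
    print(1 / shutter_time(144, 24))      # 60.0   -> 1/60th of a second
    print(1 / shutter_time(180, 23.976))  # 47.952 -> displayed as 1/47th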

 

Why are some slower shutter modes not available at higher frame-rates?

Those shutter modes are unavailable because they would be slower than a 180-degree shutter - that is, their exposure time would be longer than half the frame-time. This is done to reduce rolling-shutter artifacts.

 

Can one increase the sharpness of the camera by increasing the shutter speed?

No, this will not increase the sharpness of the camera, although for moving subjects it will decrease the amount of motion blur; in the case of capturing still frames or fast-moving objects, less motion blur will lead to "sharper" object rendition. One thing to keep in mind, though, is that with a higher shutter speed and extremely fast-moving objects, rolling-shutter "skew" will be more apparent because of the lack of motion blur than it would be with a slower shutter, where the skew is averaged out by motion blur.

 

What is the syncro-scan shutter mode for?

This mode is used to avoid rolling bars on television screens or computer monitors that are not in sync with the electronic shutter of the camera. It can also be used to control HMI and fluorescent flicker, or flicker from lights in 50/60Hz countries when you are not using a "safe" frame-rate for those countries.

 

What does the "Slow-Shutter" mode do?

This mode allows integration over many frames, giving you a blurring effect (such as light-streaks on cars, etc.). It can also serve as a means to get better exposure during low-light or night-time scenes without increasing the gain.

 

What is the difference between fan mode 1 & 2?

Fan mode 1 varies the fan speed based on the processor temperature: at low temps the fans run slower (and are quieter), and when the temps are hot, they run faster. This mode is good to use outside or in hot environments. The processor has a protection mechanism that down-clocks it if it goes over 100°C, and we also disable recording at 100°C, so there are no worries about frying the processor (a temperature indicator in the status bar lets you know the temp of the processor). Fan mode 2 keeps the fans running full-blast when not recording, and quiets them when recording. In fan mode 1 the camera does not get quieter in hot temps, which can pose a big problem in a sound-critical environment. Fan mode 2 will keep the fans quiet while recording no matter what, although again, recording will stop at 100°C. But once recording stops, the fans kick in at full speed to keep the camera as cool as possible before the next recording.

 

What can I expect for battery life?

Battery life depends on what accessories you are running from the battery and how much recording you do, since the CPU draws more current during recording. The power consumption of the camera without viewfinder is about 45W in preview and 60W during record. Viewfinders and accessories might add another 20W to these numbers. An Anton-Bauer Hytron 140 has a 140WHr capacity (140W for one hour). This means a camera with viewfinder and accessories (60W+20W) should last at least an hour and a half of continuous recording, and more in typical shooting.
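
The arithmetic behind that estimate (continuous-draw figures only; real-world runtime varies with the preview/record duty cycle):

    def runtime_hours(battery_wh, load_w):
        """Rough continuous runtime: capacity in watt-hours over total load in watts."""
        return battery_wh / load_w

    print(runtime_hours(140, 60 + 20))  # Hytron 140, recording with accessories: 1.75 hours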

 

What stabilizer was used on Slumdog Millionaire?

It was a Kenyon gyro stabilizer:

http://www.ken-lab.com/stabilizers.html

 

 

Can the SI-2K MINI and the Wafian HR-1 work together?

Yes, the Wafian HR-1 can run SiliconDVR, essentially acting like any other controlling PC platform. This is the workflow that Atomic VFX used for their feature film "Spoon". Additionally, the Wafian can ingest from a variety of other YUV-based analog and digital camera sources, and should be strongly considered if one is doing a multi-camera shoot. The Wafian can also serve as a centralized NAS hub for CineForm RAW™ recording, with files from the SI-2K MINI and SiliconDVR-based recording PC streamed over gigabit ethernet to the Wafian in the background while recording from other devices takes place in the foreground. Using the Wafian as a centralized NAS, CineForm RAW™ and YUV HD files can then be seamlessly edited together on a single Premiere Pro timeline without any cross-rendering needed for real-time performance.

 

Using the eBUS drivers, my gig-e camera takes forever to connect, or when I launch SiliconDVR right after booting Windows, I get a connection error. What's wrong?

Windows has not assigned a valid internal IP address to the NIC the gig-e camera is using because it is waiting for the DHCP protocol to time out. The solution is to turn off DHCP for that NIC and assign a static IP address, although this may limit the functionality of that NIC when plugging into other networks.

 

What does it mean to configure my NIC for 8K packets and reboot?

Because of the tremendous amount of bandwidth SiliconDVR passes over the gigabit ethernet connection between the camera and the computer, the end-user must enable a NIC mode called "Jumbo Packets", typically set in the device driver of the NIC and often named by packet size, such as 9K packets. When SiliconDVR boots, it detects the camera and sends an 8K packet to it to check that the connection is working. If SiliconDVR detects the camera but cannot transmit packets to it because the NIC is set improperly, it raises this error message to warn the end-user that their NIC is not configured correctly for camera use.

 

What are the suggested network cards for using SiliconDVR?

In desktop computers with PCI or PCIe slots, we recommend Intel Pro1000 adapter cards. These can be purchased from a number of retailers, and are relatively low cost ($30-$50). If an Intel Pro1000 card is not available, the next choice is a NIC that utilizes the Marvell Yukon controller. This controller is typically found in many on-board gigabit ethernet implementations.

For notebook computers, the same is true. If your notebook computer's on-board NIC does not support 9K packets (jumbo packets), you will need to purchase an external ExpressCard adapter with a Marvell Yukon-based NIC. These adapters can be purchased from companies such as Addonics, Abocom, Linksys, and Belkin, and they will fit in the ExpressCard slot of your notebook computer. Once installed, configure the Marvell Yukon NIC using the Windows device manager for 9K packets.

Configuring an Intel Pro1000 NIC for 9K packets is not necessary, since the Intel High Performance driver overrides the normal ethernet driver for this card and auto-configures it for proper operation.

 

The camera is running really slow, like at 2-3fps, and the timecode in free-run mode is incrementing really slow as well. What is wrong?

Make sure that the link speed of the network is running at full gigabit speed. We have seen cases in the past where energy-saving settings or faulty cabling reduced the speed of the network to 100BASE-T rather than gigabit speeds.

Another item to check is that a previous user did not leave the camera in a slow frame-rate or timelapse mode, as the camera defaults to the last-used mode when rebooting.

 

Can I use an Apple MacBook Pro to record from the SI-2K Mini?

Yes. You will need to boot the MacBook Pro into Windows XP 32-bit using Boot Camp. We recommend a 2.33GHz or 2.4GHz model with at least 2GB of RAM in order to run SiliconDVR. Also be sure to configure the internal NIC for 9K packets, which is done in the Windows device manager. Recent testing has shown that the MacBook Pro has a tendency to overheat in high ambient temperatures and during long recordings, so it is advisable to test the MacBook configuration in the anticipated environment before using it on set.

 

Can I use Vista?

Currently SiliconDVR is supported under Windows XP 32-bit and Windows Vista 32-bit only.

 

I own a Core 2 Duo 2.16GHz notebook. Can I use it?

Only for 1080/24P/25P recording and below, as this processor speed will not be capable of keeping up with the recording of 2K projects. Use of slower processors can also limit your monitoring options.

 

I have an ATI video card and the SiliconDVR interface is non-responsive or giving me odd refresh errors. What is the problem?

Unfortunately, at this time Silicon Imaging has only qualified Nvidia and embedded Intel graphics chipsets (i945GM or later), with a preference for Nvidia because of that platform's greater performance. Should driver or other incompatibility issues arise with ATI graphics cards, we will be unable to support them until we have qualified an ATI card for SiliconDVR.

 

How do I work with and process the 12-bit uncompressed footage in the .SIV format?

There are two paths that can be taken. The first and most efficient is to use the IRIDAS family of products, such as FrameCycler or SpeedGrade XR/DI, to natively import and convert the SIV file format to any other file format needed, such as DPX. The second approach is to convert the SIV files to .DNG, Adobe's open camera raw format, using the "Convert to DNG" button in the player interface. Once in .DNG file sequences, the 12-bit RAW data from the SI-2K can be imported and converted to a number of different file formats using Adobe Camera Raw, After Effects, or open-source solutions like dcraw. IRIDAS .look information is not held in the DNG format, since this metadata is currently managed by the CineForm decoder, so any "looks" will be lost; white-balance and color information, however, will all be changeable in post and are not baked into the image, since DNG supports this type of metadata.

Because of the superior picture quality of CineForm RAW, uncompressed .SIV files should only be necessary for the most demanding imaging applications where bit-for-bit copies of the sensor image data are required. While some may think green-screen or blue-screen keying fits this description, we have actually found, to our surprise, that in many cases CineForm RAW will produce a cleaner key than 12-bit uncompressed, although this depends on the noise level of the key signal (noisier keys may be better dealt with using 12-bit uncompressed, due to the smoothing characteristics of wavelets, where fine detail could get lost in the noise; this has not been tested in a real-world environment).

Can I use other peripherals on my SI-2K camera?

The answer is a qualified 'yes'. We run embedded XP in the SI-2K camera. It is possible to load your own drivers and calibration applications. Before doing this, it is suggested that you contact SI to get a copy of the system reflash tool that takes a snapshot of your camera firmware and saves it.

Normally the system drive is write protected to make the camera more stable. There is a tool in the SI folder on the desktop to disable this protection (Disable FBWF and Reboot).

Once the camera is rebooted, you can load new applications from a USB stick or USB drive. A similar tool in the SI folder, called Enable FBWF and Reboot, will then lock the system drive again. We have had customers add wireless mice, extra monitors, USB button sticks, and much more. Since we cannot test every device, we cannot help you if there are problems, other than by reinstalling the basic camera software. Please do not attempt this if you are not familiar with XP or if you have deadlines in the near future.

 

 

How accurate is the on-camera 8" LCD/Touchscreen?

The on-camera LCD is a high-resolution display meant for preview and focusing purposes, and has been enhanced for readability in sunlight. It is better than a typical camera viewfinder, but it is not meant to replace an accurate viewing monitor for critical judgments of color and contrast ratios. We have also provided a 2-4x zoom focus aid, a spot-meter/4x loupe, an edge enhancer, and a 6-stage false-color zebra pattern to help the user judge the accuracy of the image they are recording, even without a critical viewing monitor available. In addition, separate view LUT's can be created to compensate for differences between the touchscreen monitor and the main outboard monitor, although the touchscreen is only an 18-bit device, and therefore cannot show all the colors of a true 24-bit device.

 

Is there a native resolution for the VGA & HDMI outputs on the SI-2K?

We don't recommend going over 1280x720 on either monitor output of the SI-2K. The EVF's native resolution is 800x600, as is the daylight-readable touchscreen's. The HDMI output can go anywhere from 800x600 to 1280x720 . . . anything higher, and the performance of the system will be adversely affected. That being said, laptop and workstation systems with fast dedicated GPU's don't have this limitation, and can run SiliconDVR and a second monitor at any resolution the GPU supports.

 

Can I use a larger LCD monitor on set for live previewing?

Yes. SiliconDVR supports independent dual monitor support for full-screen viewing on an out-board monitor independent of the main interface. This way a video village feed for the director can be accomplished using a full-resolution HD display (up to 2K resolution), while the camera operator can concentrate on the main interface without interruption.

For dual-independent display activation, you should set your monitors to "dual-view", or whatever monitoring method makes each monitor its own independent display. This should be done in the video card's hardware driver, not in the Windows XP desktop settings (i.e., do not use the "extend my desktop onto this monitor" setting).

Of course, either of the two video outputs (VGA and HDMI) can be fed through an active video splitter. Generally, the VGA output includes all of the viewfinder information, while the HDMI is a display-only output.

 

Can I preview on an HD-SDI monitor? Will I get better detail?

Yes. Silicon Imaging is presently working to qualify various HDMI-to-HD-SDI adapters for use with the SI-2K. The Doremi DSDI-20 has been qualified, and a separate FAQ covers its setup. HD-SDI monitors will not support the full resolution of the 2K mode. We also don't recommend going over 1280x720 external monitoring resolution for the SI-2K, because anything higher will affect the system's performance.

 

What is the difference between "Quadlet", "Bilinear", and "Hexlet" display modes?

"Quadlet" display mode is a low-cpu usage display mode that quickly demosaics the image using every 2x2 square of pixels to create one RGB pixel. As a result, quadlet mode actually only had 1/4 the resolution of the original image. For instance, a 1920x1080 input RAW data source will demosaic to 960x540. SiliconDVR will automatically scale this resolution in the display for the output resolution desired, but realize that even though you may be outputting 1920x1080 on the external monitor, the actual resolution of the image is only 960x540.

"Bilinear" is a cpu-intensive demosaic algorithm that interpolates the RAW image data and outputs a 1:1 resolution image from the input source data. 2K or 1920x1080 RAW input data outputs as 2K or 1920x1080 RGB data. As a result, bilinear interpolation is adequate for full-resolution monitoring when per-pixel image accuracy is desired on the video-village viewing monitor, or broadcast, large-venue display, or tape-backup applications.

"Hexlet" is a lower-quality display mode than "Quadlet", and displays a 1/6th resolution image (so i.e., 683x384 for a 2K image). It should be used during recording to save CPU and memory bandwidth, especially on the SI-2K with it's embedded graphics subsystem. On the viewfinder , and touchscreen options, the Hexlet mode actually is very close to pixel-for-pixel on those display devices based on the size of the preview area in the main GUI. The Quadlet and Bilinear settings will always look sharper on these displays due to supersampling, not necessarily because of the Hexlet image being too low a resolution for the display device itself.

 

How do I connect both a viewfinder and LCD to my computer?

An active splitter (either DVI or VGA, depending on your display devices) will be required, since SiliconDVR only supports two independent monitoring systems. Our 8.4" touchscreen viewfinder has an active splitter built in, enabling use of the EVF alongside the touchscreen with a minimum of external cabling.

Another option, if your video card has a DVI-I output, is to split the analog and digital connections and send one signal to the LCD and the other to a second outboard monitor.

 

Which status display settings in the current 1.1 version of SiliconDVR's player reflect the actual active clip being played back?

The status indicators that reflect the settings of the actual clip being played back in SiliconDVR version 1.1 are the clip's name and the timecode. Other indicators, such as compression quality, frame-rate and resolution, gain, and shutter, do not reflect the actual settings or metadata recorded with the QuickTime, AVI, or SIV clip.

 

Why can't I overlay other Windows applications on top of SiliconDVR?

SiliconDVR uses a DirectX mode called "Full Screen" so that the monitor's V-sync is precisely in sync with the graphics card as it refreshes the display, preventing artifacts such as tearing from appearing on the screen, especially during pans and other motion. As a result, Windows is unable to mix an application that has full control of the video display buffer with a normal Windows GDI-based application, so other applications cannot overlay SiliconDVR.

 

What is the optimum image buffer setting?

The optimum image buffer setting for 2GB of RAM and a 2Kx1152 resolution is 192 buffers.

Typically we have found that if you are using a shared-memory architecture (such as the GMA950 from Intel), 192 buffers works better due to the memory overhead the shared architecture creates, whereas up to 220 buffers works well if a dedicated GPU is being used. Also take into account that there are other internal buffering systems, and additional memory is required for background and OS operations apart from the image buffer, so the architecture of SiliconDVR and Windows XP does not allow 100% of a system's memory to be allocated to image buffers. At our largest frame size of 2048x1152, each buffer is 4,718,592 bytes, or 4.5MB.
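
The per-buffer figure corresponds to two bytes per pixel (12-bit samples held in 16-bit words - an inference from the 4,718,592-byte number, not a published spec), and the total memory follows directly:

    frame_bytes = 2048 * 1152 * 2     # 16 bits per pixel in the buffer = 4,718,592 bytes
    print(192 * frame_bytes / 2**20)  # 192 buffers = 864.0 MB of a 2GB system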

 

Is there a Zebra pattern function?

Yes, but it's actually been "improved" a bit from what one is typically used to in black-and-white viewfinders: we have a false-color meter display. This segments the scene into a gradient of six zones, from the deepest shadows (represented by a dark blue color) to the brightest highlights (represented by solid red). One could think of this somewhat like Ansel Adams' zone system in the way the luminance scale is segmented. As you get a higher luminance percentage towards clip, the values get "warmer": blue, green, middle grey, yellow, orange, and finally red for near-clip and clipping. So unlike a simple "zebra" pattern, or the dual-zebra patterns available in some higher-end prosumer and broadcast cameras, our false-color meter display can give the viewer a very quick visualization of the entire luminance distribution of the scene they are shooting, not just areas that represent a sliver of the exposure scale like zebras do.
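
A toy sketch of how such a six-zone false-color mapping works (the zone boundaries here are hypothetical; the actual thresholds and colors used by SiliconDVR may differ):

    import numpy as np

    ZONES = ["blue", "green", "middle grey", "yellow", "orange", "red"]
    BOUNDS = [10, 25, 55, 75, 90]  # hypothetical zone boundaries, % of clip

    def false_color(luma_percent):
        """Map a luminance percentage to one of six exposure zones."""
        return ZONES[np.digitize(luma_percent, BOUNDS)]

    print(false_color(18))  # 'green' -- comfortably exposed shadows
    print(false_color(97))  # 'red'   -- at or near clip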

 

Does the camera output SMPTE color bars?

Yes, these are available under the "color-bars" button in the Utility Menu.

 

Doremi DSDI-20 DVI to HD-SDI set up

The Doremi box set comes with a remote, a BNC cable for the HD-SDI monitor, an HDMI-to-DVI cable, and a power adapter.
Important: if you do not have the remote for the Doremi, you will not be able to make any setting changes, because the box itself has no buttons or switches.

Connect the BNC cable between the Doremi and the monitor, and then power them both up. Immediately hit the "Reset" button in the top right corner of the remote, and then the "1080P" button. This will provide a 1920 × 1080 60Hz image. Now connect the Doremi HDMI-to-DVI cable to the SI-2K and power-cycle the SI-2K.

Click here for more details about the Doremi converter box

 

 

Is the CineForm RAW™ format an open standard?

No. CineForm RAW™ is a proprietary "visually lossless" compression codec based on wavelet transforms instead of older, less efficient DCT technology. Additionally, it supports deep pixel bit-depths and variable bit-rate encoding, maximizing the information contained in a given scene for a given data-rate. While the encoder must be licensed, the decoder is freely available here, and can be used by anyone to view content recorded with the SI-2K. The codec is VFW compatible, and can be imported and played back in any VFW-capable player while keeping the RAW data-structure intact. The internal RAW information is only lost if the AVI file is transcoded to another format or rendered out in a non-CineForm RAW™ compatible application.

 

Can I record into a Quicktime format for my Final Cut system?

Yes.

 

What is the decode quality of CineForm RAW™?

CineForm RAW™ decodes to 4:4:4 RGB data at 10 bits per channel. It has a 32-bit floating-point internal processing engine, so any color or "look" metadata applied through the IRIDAS color-management pipeline is completely non-destructive, with support for both over-white and sub-black areas. Look files can be swapped out in post, utilizing the same RAW source data for unlimited flexibility in shaping footage to the creative desires of the project.

Silicon Imaging has spent extensive time developing one of the best demosaicing algorithms in the industry. Our non-linear approach compensates for the myriad of Bayer interpolation artifacts that other algorithms exhibit.

NOTE: For more information about CineForm RAW™, please download a detailed white-paper here.

 

What is the difference between 12 bit uncompressed & the Cineform RAW codec, in terms of data size, image quality & editing options? Also the differences between recording in Quicktime or AVI format?

The 12-bit uncompressed stream at 2K/24P runs at approximately 84MB/s. Its counterpart in CineForm RAW™ at the highest quality setting (Quality 4) typically varies from 18.5-20MB/s (3.5:1 compression), with the higher data-rates coming from scenes with higher noise content (for instance, we've seen over 30MB/s at 12db gain). CineForm RAW is stored in a 10-bit LOG format, due to the benefits of applying a gamma correction before compression. 12-bit uncompressed, on the other hand, is stored in a photometrically linear format, meaning the value of each pixel corresponds in a 1:1 linear fashion to the amount of scene illumination that pixel on the sensor was exposed to.
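
A back-of-the-envelope check on these figures; notably, the quoted 3.5:1 ratio appears to be measured against the 10-bit log stream that CineForm RAW actually compresses rather than the 12-bit linear stream (an inference from the arithmetic, not a published spec):

    W, H, FPS = 2048, 1152, 24
    MB = 1_000_000

    raw12 = W * H * 1.5 * FPS / MB    # 12-bit uncompressed: ~84.9 MB/s
    raw10 = W * H * 1.25 * FPS / MB   # 10-bit log stream fed to CineForm: ~70.8 MB/s
    print(raw12, raw10, raw10 / 3.5)  # 3.5:1 over the 10-bit stream -> ~20.2 MB/s (Quality 4)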

The image quality of CineForm RAW is what we term "visually lossless", meaning that at the highest quality settings, and using the same demosaic algorithms, one could not visually tell (without the aid of electronic manipulation) the difference between footage acquired as 12-bit uncompressed vs. CineForm RAW. David Newman (CTO of CineForm) has posted more information on his blog about what "visually lossless" means in real-world terms, but suffice it to say that at only 3.5:1 wavelet compression, we are exceeding the quality of HDCAM-SR. So you could imagine the SI-2K as having an HDCAM-SR deck strapped to the back if you wished; the only difference is that we're recording RAW direct to a small 2.5" format hard-drive, while HDCAM-SR records an RGB or YUV format on a larger 1/2" format tape with its associated deck. But if HDCAM-SR could record RAW, the visual quality would be about the same, as HDCAM-SR is another format considered "visually lossless".

As far as post options, CineForm allows footage to be dragged and dropped into NLE's or any application that is QT- or AVI-compatible. Our uncompressed format, on the other hand, must go through a render process into another format that can be edited in an NLE. Current applications that support conversion of our uncompressed file format include SpeedGrade DI/HD and SiliconDVR itself, which can re-wrap uncompressed files into Adobe's popular DNG file format.

 

What is "Adaptive Recording" mode, and how does it affect image quality?

Adaptive recording mode watches the state of the RAM buffer and changes the CineForm RAW encoding bit-rate and quality settings in order to prevent RAM buffer overflows. Since CineForm is a variable bit-rate wavelet compressor, the amount of CPU required for encoding changes with scene content: scenes high in visual "complexity" require more CPU than scenes with less complexity, and this is especially true of noise. As a result, with a fixed encoding quality setting, the RAM buffer can overflow when the scene complexity exceeds the ability of the CPU to encode at the target bit-rate/quality setting.

There are four stages in the adaptive recording mode: Quality 4 through Quality 1. All four quality modes are 10-bit, so there is no loss in pixel precision or dynamic range with any encoding mode. The encoding quality ranges from 3.5:1 compression at Quality 4 to 10:1 compression at Quality 1. Due to the nature of wavelet compression and the lack of visual artifacts, a "shift" should not be visible in the footage when the quality modes change. Each mode should remain "visually lossless" under general shooting conditions, although visual-effects work should be done at the higher quality settings. Even at 10:1 compression for Quality 1, the SI-2K is still the highest per-channel bit-rate wavelet-recording camera on the market.

The adaptive recording mode dynamically changes the compression to match the scene complexity, maximizing the quality setting while preventing the RAM buffer from overflowing. So if the camera changes to Quality 1 and is then presented with a scene that can be recorded at Quality 4, it will increase the quality back up to that higher setting. After a recording is finished, the last recording quality setting is used as the initial starting point, and should the scene have changed radically, the quality will very quickly climb back to Quality 4 once recording starts again. You will not be "stuck" at Quality 1 if the scene can be encoded at a higher quality setting.
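
A simplified sketch of this control loop (illustrative only, with hypothetical thresholds; not SiliconDVR's actual implementation):

    def next_quality(quality, buffer_fill):
        """quality is 1..4; buffer_fill is the fraction of the RAM buffer in use."""
        if buffer_fill > 0.75 and quality > 1:
            return quality - 1  # encoder falling behind: drop to a faster setting
        if buffer_fill < 0.25 and quality < 4:
            return quality + 1  # buffer draining: climb back toward Quality 4
        return quality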

In addition to the adaptive recording mode, SiliconDVR can be set to encode at any of the fixed quality settings.

 

What are the compression settings for each of the CineForm RAW recording quality modes?

    Quality 4) 3.5:1 compression, or approximately 18-20MB/s from a 2K/24P stream at 0db gain.

    Quality 3) 5:1 compression, or approximately 15-16MB/s from a 2K/24P stream at 0db gain.

    Quality 2) 8:1 compression (approx), or approximately 10MB/s from a 2K/24P stream at 0db gain.

    Quality 1) 10:1 compression, or approximately 8MB/s from a 2K/24P stream at 0db gain.

As noted above, CineForm is a high-quality wavelet-based codec, and its compression scheme is more efficient at a given compression ratio than DCT codecs. When comparing to other common acquisition codecs, bear in mind that HDCAM-SR 440 RGB and D-5 are both DCT-based, 4:1 or 5:1 compression codecs. Also, due to the variable bit-rate nature of CineForm RAW, higher noise and complexity in scenes will produce higher bit-rates.

 

How do I pass .look files between SiliconDVR and SpeedGrade OnSet?

We've created a step-by-step tutorial video demonstrating the integration between SiliconDVR and SpeedGrade OnSet. You can view this video and other training videos in our workflow section of the website.

 

I've taken my AVI/QT files to a new machine, and the "look" is gone. What do I do now?

You need to "register" the .look file on a machine with CineForm RAW installed (either the free NeoPlayer from CineForm, or one of the other cross-platform Neo or Prospect packages). Registering a .look is done by taking the .look files for that project (which are located in the same recording directory as the AVI/QT files) and simply double-clicking each .look file.

 

What is the best .Look file to use for green-screen work?

For successful green-screen keying, it is important that the screen itself is lit properly and has a good clean exposure that minimizes the noise on the screen, which, when too high, can hinder the effectiveness of the keyer algorithm later in post. Since the camera records a 10-bit LOG file, which by its very nature pushes middle grey down the overall dynamic range of the image, exposing by eyeballing the 10-bit LOG image with no .Look file applied will leave the green-screen less than optimally exposed for minimal noise. The Default.look file that comes with SiliconDVR uses the REC-709 curve, and as a result has only 2.5 f-stops of over-exposure latitude before clip; but in exchange for the minimal over-exposure room, it allows the green-screen exposure to be pushed up the overall dynamic range of the sensor, reducing the noise on the screen, which will in turn give the best visual results in post. The lack of over-exposure room in the REC-709 curve is typically not a problem on a green-screen set, since the DP can control the light levels and contrast of the scene.

In addition to using the Default.look as a basis for any .Look files the user creates for green-screen work, additional noise reduction can be obtained by shooting with daylight-balanced light sources instead of tungsten. Because the sensor is "natively" balanced towards daylight, this keeps the blue channel from accumulating as much noise as tungsten light sources would produce, which would require high blue gains to properly white-balance the image.

 

Why am I seeing a red screen in embedded Iridas OnSet and a red-and-white checker-board pattern in SpeedGrade OnSet when I open a .look file?

The solid red screen or the red-and-white checkerboard comes from not having a .cube, .ilut, or .itx file installed in a location where IRIDAS can see it to generate or modify the .look file.

These .cube, .ilut, and .itx files must be installed inside /Program Files/IRIDAS SpeedGrade OnSet/LUTs/ for SpeedGrade OnSet, and inside /Program Files/Silicon Imaging/Silicon DVR/Data/Iridas/LUTs/ for the embedded OnSet in SiliconDVR, to be accessible.

Also keep in mind there are extra default LUT's that come with SiliconDVR; they are not installed in the SpeedGrade OnSet folder (nor do they come with the default SpeedGrade OnSet installer). Therefore, if you want to modify a file inside SpeedGrade OnSet that utilizes these extra LUT files, you need to make sure these files can be accessed by SpeedGrade OnSet by placing them in /Program Files/IRIDAS SpeedGrade OnSet/LUTs/.

These LUT files are specific to IRIDAS products, so they do not need to be carried over to any other applications you will use in the post chain. Outside of SpeedGrade OnSet, the CineForm engine *only* looks at the final .look file. If you open a .look file, you'll see in the XML a tag called <data></data>; that contains the 3D LUT that CineForm is referencing. So everything CineForm needs is inside the .look file.

When modifying the .look inside SpeedGrade OnSet, though, since you're now modifying the original .look file, it no longer uses the information in the <data></data> tags, and instead tries to regenerate the .look from the source parameters. You therefore must have the source LUT's (i.e., the .ilut, .cube, or .itx) in order to regenerate the .look file, hence the red-checker pattern.

 

What is the point of these .itx, .cube, and .ilut files? Why do I need them in the first place?

These LUT files are typically used for pre-calibration of the image inside of SpeedGrade OnSet in order to align the RAW data from the camera into a specific color-space that creative "looks" can then be properly applied to.

For instance our imaging pipeline inside a .look file may look something like this:

RAW file (pre-white-balanced, but in the camera's native color-space) -> calibration to a "base point color-space" through the "Matrix & LUT" tab, using LUT's and a matrix (or maybe just a single 3D LUT) -> creative color correction -> output LUT added as the last shader layer in SpeedGrade OnSet (this last stage is optional . . . it could be print emulation).

The power of a 3D LUT system like the IRIDAS .look file is that it can concatenate all these transformations into one LUT, which the camera then uses to create the WYSIWYG "look" from the RAW camera data in real-time on the camera display, and embed in the AVI/QT as metadata that you take into post for further manipulation. Since the .look file is a 32-bit float representation of the color-correction, no information from the RAW file is clipped.

In cases where more than one LUT is required (such as the output LUT stage described above), using more than one LUT creates a compounding effect: the output of one LUT becomes the input to the next LUT further downstream. Since the LUT's are floating point, they can reverse each other, but it's easiest to think of this process as a node-type approach, just as in any other filtering system where you add one node after another (i.e., Shake), and the output of one node is the input to the next.

 

Why when I load a .Look file does the display screen go black?

Make sure that you have loaded a 64x64x64 LUT into SiliconDVR, as that is required for display use. While an unmodified version of SpeedGrade OnSet makes 8x8x8 LUT's by default, the image quality created from them is too poor for viewing any color transform that includes a gamma-correction component or other non-linear color transform without causing serious banding and other image-quality artifacts. So make sure when loading a .Look file that the size of the 3D LUT inside the XML file is a 64x64x64 cube. The easiest way to tell is by the size of the XML file: it should be around 6MB for an XML with a 64x64x64 LUT embedded in it. If SpeedGrade OnSet is generating 8x8x8 LUT's, go into the settings panel by hitting 's' on the keyboard, and change the size of the .Looks created in the '.Look' section.
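
The ~6MB figure follows from the cube size - a rough estimate assuming the LUT values are stored as ASCII text in the XML (the characters-per-value figure is an assumption):

    entries = 64 ** 3 * 3        # 64x64x64 cube, three components per entry
    chars_per_value = 8          # assumed average width of one value as text
    print(entries * chars_per_value / 1e6)  # ~6.3 MB, versus ~0.01 MB for an 8x8x8 cube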

 

How do I modify a .look file inside of Premiere Pro or other application using CineForm RAW™?

Because CineForm RAW just uses the 3D LUT information inside the .look file, which is a "flattened" final output of the SpeedGrade OnSet process, there is no additional metadata needed, or accessible, from inside Premiere Pro. You can't modify .look files inside of Premiere Pro; CineForm is just allowing you to modify the color metadata that is placed into our AVI/QT files. This pipeline goes as follows:

    RAW (no color-correction) -> White Balance -> Matrix -> 3D LUT.

The matrix transformation in this case is actually an internal CineForm function, and should not be altered for normal purposes except for slight tweaks to the image such as exposure adjustment; the rest of the matrix functionality, such as saturation, can be wrapped into the .look file itself. The main things to be concerned with are the White-Balance and Look sections. You can change the white-balance using the slider, and that information is then passed as the input to the .look file (3D LUT). Please note that the RAW file information passed into the SpeedGrade LUT is typically white-balanced first; CineForm RAW gives you the controls to re-do the white-balance, and white-balance is decoupled from the 3D LUT. This was done so that the end user does not end up with .look files tied to one specific white-balance setting, or needing multiple versions of .look files giving the same "look" at different white-balance settings. Instead, one can simply change the white-balance in the camera, as one would on any other camera, and the 3D LUT will give a similar "look" because it is passed the same pre-white-balanced information. Basically, the white-balance serves as a pre-calibration, or pre-normalization, step: once normalized data is passed into the 3D LUT, it behaves in a similar fashion no matter what the white-balance is.

 

 

What is the typical data rate of recording in 1080/24P?

The data rate of the uncompressed RAW stream from the camera head in 12-bit, 1080/24P format is 74.6MB/s. This is compressed at our highest-quality mode setting to an average 20MB/s CineForm RAW™ data stream, a compression ratio of roughly 3.7:1. Because of the efficiency of wavelets and the variable-bit-rate encoding of the CineForm RAW™ codec, the visual quality is better than a DCT codec running at a 4:1 or lower compression ratio.
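For reference, the arithmetic behind those figures works out as follows (a back-of-the-envelope check; the exact ratio on any given shot depends on its average bit rate):

    width, height, fps = 1920, 1080, 24
    bytes_per_pixel = 12 / 8                 # 12-bit samples pack to 1.5 bytes
    raw_mb_s = width * height * bytes_per_pixel * fps / 1e6
    print(raw_mb_s)                          # ~74.6 MB/s uncompressed
    print(raw_mb_s / 20.0)                   # ~3.7:1 at an average 20 MB/s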

 

What HDDs can be used inside the enclosure? Is there any drive size limit? How does the HDD enclosure connect to a PC?

The SI-2K enclosure is compatible with any 2.5" SATA drive (this includes solid-state drives that comply with those specifications). There is no drive-size limit other than what is available on the market in 2.5" drive formats. The standard enclosure has a USB port on the outside: once you take the enclosure out of the SI-2K, you can connect it to a PC and power it over the USB connection. You have to eject the drive enclosure before you can connect it to another PC, as there is a circuit inside that disables the external drive-enclosure's USB port while it's connected to the camera. Additionally, any other USB 2.0-compatible device with enough bandwidth can be connected to the camera as a recording device.

 

Can I record directly to a NAS?

In most cases, yes: the 20MB/s of the CineForm RAW™ AVI should easily pass across a typical 1000Base-T network. That said, the quality of the network and its latency will be the deciding factors in whether recording can keep up. Frames will not be dropped, but the RAM buffer will fill up if the available bandwidth is not high enough.
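As a rough model of that behavior (the buffer size and network throughput below are hypothetical example numbers, not camera specs):

    def seconds_until_full(buffer_mb, produce_mbs=20.0, drain_mbs=15.0):
        # The camera produces ~20 MB/s; if the network drains slower,
        # the RAM buffer fills at the difference between the two rates.
        if drain_mbs >= produce_mbs:
            return float("inf")              # network keeps up indefinitely
        return buffer_mb / (produce_mbs - drain_mbs)

    print(seconds_until_full(buffer_mb=512))  # 512MB buffer at 15 MB/s -> ~102s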

 

My computer crashed during a record, or the camera lost power. Did I lose all my footage?

No. We now have a tool that gives users a way to retrieve AVI files that were left incorrectly written due to a crash or power outage.

 

I have files that won't copy over from the magazine in the SI-2K to my external hard-drive. What's wrong?

Files should copy seamlessly to an attached USB hard-drive. Make sure you are not using a FAT32-formatted drive, as any files over 4GB will not copy, and could cause Windows to abort a batch-copy process already in progress. Many off-the-shelf third-party USB hard drives that advertise cross-platform Mac OS X and PC support are formatted FAT32. All drives working with the SI-2K need to be NTFS in order to support the often larger-than-4GB file sizes of CineForm RAW, especially on longer shots.
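A minimal sketch of a pre-copy check (assuming the third-party psutil package and an example drive letter):

    import psutil  # third-party: pip install psutil

    def is_ntfs(drive="E:\\"):
        # Refuse a batch copy up front if the target volume is FAT32,
        # since any file over 4GB would fail partway through.
        for part in psutil.disk_partitions():
            if part.mountpoint.upper().startswith(drive.upper()):
                return part.fstype.upper() == "NTFS"
        raise ValueError("no mounted volume found at " + drive)

    print(is_ntfs("E:\\"))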

 

 

Does SiliconDVR support audio recording? What is the quality?

Yes. Audio support is at DAT-level recording quality, that is, 16-bit/48kHz. For higher-quality audio, an external audio recording device is needed. The SI-2K also has L/R balanced line input and output for on-board audio recording without the need for an external USB audio input adapter.

 

How do I synchronize my dual-sound audio with camera time code?

By default, SiliconDVR supports free-run time-of-day timecode. For accurate syncing of the camera to an external clock device, we support USB-compatible LTC-timecode readers from Adrienne Electronics Corp.

Other methods of syncing multiple clock devices include using a visual timecode slate synced to the external timecode master clock, and matching it to the time-of-day timecode inside SiliconDVR during editing.

 

How accurate is the internal clock of the SI-2K MINI?

The internal clock accuracy is 3ppm. This is more than three times tighter than the SMPTE spec of 10ppm, which requires that any two devices stay within +/- 1 frame of each other over an hour.
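The arithmetic, assuming a 24p frame duration:

    fps = 24.0
    frame_ms = 1000.0 / fps              # one frame is ~41.7 ms at 24p
    for ppm in (3, 10):                  # camera spec vs. SMPTE spec
        drift_ms = 3600 * ppm / 1000.0   # drift per hour: 3.6 ms per ppm
        print(ppm, "ppm ->", round(drift_ms, 1), "ms/hour,",
              round(drift_ms / frame_ms, 2), "frames")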

 

Is there any audio level adjustment or phantom microphone power available?

No, there is only a line-level input, and it's set for maximum dynamic range (or voltage swing). To control the levels of a microphone, you should use an analog mixer with line-level output, or a mic preamp, and adjust the gain settings on the preamp. Alternatively, you can use a USB audio interface rather than the line inputs. Line inputs were selected over mic inputs in order to get the maximum audio quality into the camera system: we were not able to secure the means to develop a quiet and properly shielded mic-preamp system that would have delivered the audio quality demanded by the professional community. Line-level inputs let us make sure that users are able to get the audio quality they need through the use of high-quality analog front-ends.

 

I have files that I need to send to a client in CineForm format. Where can I download the free CineForm codec?

You can download the free codec here on the CineForm website.

 

 

What types of files can be placed together on the Premiere Pro timeline in real-time without a render (or red bar) during "preview" playback mode?

Both CineForm RAW™ AVI files and CineForm HD files can be combined on the same timeline, with the same effects applied in real-time without the need to render. CineForm HD files include files ingested via HD-SDI or HDV sources.

 

Why do I have to render my timeline in "edit-to-tape" playback mode?

CineForm RAW™ files are just that: RAW Bayer data compressed using CineForm's "Visually Perfect" wavelet codec technology. As such, these files must go through a demosaic process in order to render as full-resolution video. During the demosaic process, the RAW information is converted to RGB or YUV data, and once converted, the RAW data structure is lost. Additionally, depending on the format, the converted files will be anywhere from 2 to 3 times the original RAW size.

Demosaic algorithms vary. The simplest is a quadlet approach: a 2x2 square of the Bayer pattern is taken, the greens are averaged to generate the green channel (or in some cases just one of the greens is used), and the red and blue in the 2x2 area are taken to generate the red and blue components of a pixel. This is a very fast demosaic process, and in Premiere Pro, FCP, and Prospect 2K, a quadlet demosaic is used during timeline playback for real-time preview performance with effects and mixing/matching of YUV data on the timeline. By maintaining the integrity of the RAW data and using a quadlet preview for playback, the end user can work with the original untouched camera footage for the maximum possible flexibility in post. When the playhead is parked, a complete demosaic process occurs that renders out a full-resolution image.

Since the more complicated non-linear demosaic algorithms cannot play back in real-time, they must be rendered before being mastered to tape. The advantage of this approach is that even after the render, the original RAW data sources are never altered, and the end user, if they wish, can return to the RAW data for additional adjustments, better demosaicing methods, and other operations that maintain higher quality because they start from the original sensor data. Using the quadlet approach for preview-mode playback maintains the real-time performance needed to make critical creative decisions, and defers any rendering to the final export-to-tape, which typically happens after all the creative decisions have been made. A detailed white-paper presentation of this workflow can be found here.
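As an illustration of the quadlet approach described above (a minimal NumPy sketch; an RGGB cell layout is assumed, and this is not CineForm's actual code):

    import numpy as np

    def quadlet_demosaic(bayer):
        # Each 2x2 Bayer cell collapses to one RGB pixel, so the
        # preview is half resolution in each dimension.
        r  = bayer[0::2, 0::2]
        g1 = bayer[0::2, 1::2]
        g2 = bayer[1::2, 0::2]
        b  = bayer[1::2, 1::2]
        return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

    mosaic = np.random.rand(1080, 1920)    # stand-in for RAW sensor samples
    preview = quadlet_demosaic(mosaic)
    print(preview.shape)                   # (540, 960, 3) half-res preview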

 

Why do my files playback at quarter resolution in Windows Media Player, Quicktime Player, and other media applications?

These files were encoded with a pre-3.1 version of the CineForm codec. In order to get full-resolution files, please download and run the AVIHeaderUpdate tool on these files. Once the header structure has been updated, and you are running Prospect 2K, Neo 2K, or Neo Player 3.2 on the host machine, the files will play back at full resolution.

 

How do I change the metadata in Premiere Pro?

First, make sure you've "registered" the .look file on your machine by double-clicking it. Additionally, if you do a recording with a .look file in SiliconDVR, the .look is already registered. The .look files that ship with SiliconDVR are in the /Program Files/Silicon DVR/Data/ folder. Normally you wouldn't have to manually register these files, because once you record with one (and all recordings have a .look associated with them), it's available as a registered .look file on your machine. You can see which .look files have been registered on your machine by going to /Program Files/Common Files/CineForm/LUTs/.

After registering the .look files by either recording an AVI/QT or manually registering the .look (as will be the case if you've imported the .look to another machine for editing), go into Premiere Pro and place a clip on the timeline. Then, in the program-window context menu (the small arrow in the upper right-hand corner of the program window), go to "Playback Settings". In the lower left-hand corner of the screen there is a choice called "File Metadata". This gives you control over changing the .look associated with the file, and will present a list of all the registered .look files on your machine with which to swap out the current .look. You will also have the choice of turning the look on or off for that file, and finally of changing the white-balance.
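To quickly audit which .look files are registered on a machine, here is a small sketch assuming the default install path quoted above:

    import os

    lut_dir = r"C:\Program Files\Common Files\CineForm\LUTs"
    for name in sorted(os.listdir(lut_dir)):
        if name.lower().endswith(".look"):
            print(name)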

 

How do I access the metadata controls in After Effects?

Using either the Neo family or Prospect family of products, going to Start Menu->CineForm->Tools will bring up a number of quick-launch scripts for setting the global metadata controls at the OS level, which After Effects will honor. For instance, if you would like to turn the .Look off but still keep the white-balance on the file, select the "White Balance Only" script. When you return to After Effects, the RAW files will render only the white-balance, with the .Look file turned off. After Effects by default uses the higher-resolution CF Advanced demosaic, so there is no need for a demosaic choice selection.

 

My file in Premiere Pro looks soft and has color-fringing when I park the playhead. What's wrong with my clip?

The default demosaic setting in Premiere Pro is a simple bilinear demosaic, chosen for a fast interactive editing experience. To change to a higher-quality demosaic that is sharper and has far fewer color-fringing artifacts from false demosaic results, go to the "Playback Settings" in Premiere Pro and, under the "Global Preferences" tab, select the CF Advanced demosaic option.

 

Why do my files have a gamma shift when I select 32-bit floating point mode in After Effects?

This "gamma shift" is actually a re-linearization of the CineForm RAW™ file, returning the AVI from a 10-bit LOG source back to the original linear format (in this case though it's not necessarily 12-bit linear, but rather has been expanded into a 32-bit floating point environment). For optimum results in a 32-bit floating point environment, you want to work with photometrically linear footage, not gamma-corrrected footage (or "video encoded", which often has the misnomer of being called "linear"). This results in better image processing math and true real-world compositing operations that behave similar to natural-light phenomena. To get back to the original gamma of the image, apply a levels filter and adjust for a gamma of approximately 2.2.

 

Do I need to purchase Adobe and Prospect4K to edit CineForm RAW™ footage in Final Cut Pro?

No. CineForm has released Neo4K Edit for OSX that enables RAW editing within Final Cut Pro.

 

What are the proper sequence settings for real-time playback of CineForm RAW™ files inside of Final Cut Pro?

Inside FCP 6.0, if you drop a file onto a sequence, FCP should prompt you to set the sequence settings to the attributes of that clip. When prompted, click "yes". When the file is then dropped on the timeline, a grey bar should appear above the clip, showing that the file is "native" to that timeline. Should a yellow or red bar appear above the clip, the timeline is not set up properly for the CineForm codec.

For a screen-shot of what a properly set up sequence looks like, click here. Keep in mind that this was a 2K/23.976P shot, so other formats would have different frame-rates and resolutions, but the common point is that every timeline should have the matching CineForm codec as the baseline codec of the sequence.

To make sure that your sequence is properly set up for 32-bit floating-point processing, check under the "Video Processing" tab that the sequence is set to "Render all YUV material in high-precision YUV".

 

Do I need to shoot with QT or AVI when using Final Cut Pro?

QuickTime files are recommended for FCP/OSX-based workflows; while CineForm AVIs are supported inside FCP, the following contingencies can arise:

    1) Timecode is not supported in AVI in FCP (just the nature of FCP). Timecode can be added within FCP, but it would tediously have to be added by hand for each clip.

    2) If you render any effects in FCP, or export any footage via the "Export" options, the image will come out brighter due to a gamma shift of 0.2.

    3) If you do any conversions in Compressor from AVI to an Apple-based YUV format like ProRes, etc., you will get the same 0.2 gamma shift toward brighter.

    4) Even if you select "Render all YUV material in high-precision YUV" in the sequence Video Processing tab, FCP will only recognize the AVI files as 8-bit files when rendering effects. QuickTime files, on the other hand, can take advantage of the 32-bit floating-point engine of FCP. One workaround is to use Automatic Duck to finish your FCP timeline inside of After Effects, which will recognize the full 10-bit LOG data in the CineForm RAW™ files.

One thing to keep in mind is that if you shoot AVI in the camera and then re-wrap to QuickTime using HDLink, the re-wrap will erase (zero out) any embedded timecode and the embedded metadata inside the AVI file. The QuickTime files generated from the re-wrapping process will be usable in OSX and avoid the four issues mentioned above, but the .Look file will need to be re-applied to the footage, and any timecode that was embedded in the AVI will no longer be available. In order to re-apply .Look files to re-wrapped QuickTimes, Neo version 3.3 or later is required.

 

How do I get footage into AVID from CineForm RAW™?

AVID poses a slightly more difficult situation in that they don't support any third-party codecs, nor do their systems have any support for RAW formats. The easiest workflow is to take CineForm RAW™ files through Adobe After Effects and convert them either to DPX for an AVID DS|Nitris, or to DNxHD for any other AVID system. That way you keep the 10-bit deep-pixel format of the CineForm RAW™ files intact (AE supports deep-pixel formats, whereas Premiere Pro's exporter does not), although you will lose 4:4:4 support with the DNxHD option.

 

Microsoft® Windows XP, Adobe®, Adobe Premiere Pro, CineForm, ProspectHD, and Silicon Imaging are trademarks of their respective owners. All rights reserved. For more information please email us at HD@siliconimaging.com