
Cultural Heritage FAQs

Can I get a system today?
Systems are currently being configured for qualified users and will be available soon. Purchasing a system is not required to take advantage of the technology: services to capture and assess images, from a single document to a large collection, are available. We would be happy to discuss your needs.

How large an image can be captured?
Three things limit the size of a scene (document) that can be captured in one frame:
1.  How much light is available.
2.  How many pixels are available.
3.  Optics, together with how much working distance is available.
Two light panels are enough to uniformly illuminate a scene of about 50 cm × 60 cm (18″ × 24″). If this scene is captured in one frame, the captured resolution will be about 12 pixels/mm (300 ppi). Greater resolution may be achieved by imaging smaller segments and stitching the segments together. 600 ppi is a practical resolution that may be achieved with excellent optical resolution at a reasonable working distance; e.g., a 120mm macro lens will provide good 600 ppi images at about 1 meter working distance.
An optional motorized stepping table can be provided for conveniently moving large work pieces whose image segments will be stitched together.
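The trade-off between scene size and captured resolution described above is simple arithmetic. The sketch below illustrates it; the 7300-pixel sensor width is an illustrative assumption for a roughly 50-megapixel back, not a published specification.

```python
# Back-of-the-envelope resolution arithmetic for single-frame capture.
# The 7300-pixel sensor width is an illustrative assumption, not a spec.

MM_PER_INCH = 25.4

def capture_resolution_ppi(sensor_pixels_wide, scene_width_mm):
    """Pixels per inch delivered on the scene for a single frame."""
    pixels_per_mm = sensor_pixels_wide / scene_width_mm
    return pixels_per_mm * MM_PER_INCH

# A 600 mm wide scene covered by ~7300 pixels across:
ppi_single = capture_resolution_ppi(7300, 600)    # about 309 ppi (~12 px/mm)

# To reach ~600 ppi, capture half-width segments and stitch them:
ppi_stitched = capture_resolution_ppi(7300, 300)  # about 618 ppi
```

Halving the captured scene width doubles the delivered resolution, which is why stitching smaller segments yields 600 ppi from the same sensor.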

Depending on the choice of camera and the number of spectral bands, spectral image sets typically vary in size from a few tens of megabytes to a few gigabytes. While large, such sets can still be captured, processed, and analyzed efficiently on most recent computer systems.
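The size of a set follows directly from pixel count, band count, and sample depth. A minimal sketch, assuming a 50-megapixel sensor and 16-bit samples (both illustrative values):

```python
# Rough size of a spectral image set: pixels x bands x bytes per sample.
# The 50-megapixel sensor and 16-bit (2-byte) depth are assumptions here.

def image_set_bytes(megapixels, bands, bytes_per_sample=2):
    """Raw size in bytes of an uncompressed spectral image set."""
    return int(megapixels * 1e6) * bands * bytes_per_sample

size_gb = image_set_bytes(50, 12) / 1e9   # 12-band set from a 50 MP back
# ~1.2 GB raw; fewer bands or a smaller sensor lands in the tens of MB
```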

Talk about Calibration
Extensive component, system, and image calibration distinguishes MegaVision systems.
Components are calibrated and characterized prior to system assembly. During operation, pixel-by-pixel dark field, pixel-by-pixel white field, neutral balance, and color calibration are ongoing processes integrated with image capture.

To obtain intensity and color uniformity over the scene, a diffuse uniform white reference target (such as sintered PTFE [Spectralon], barium sulphate paint, matte ceramic, or, less controlled but still useful, a high-quality white paper) that covers the scene is recommended. An image is captured of the white scene, and pixel-by-pixel gain adjustment is performed for each spectral band.
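The pixel-by-pixel dark-field subtraction and white-field gain adjustment described here amount to standard flat-field correction. A minimal sketch with NumPy, applied per spectral band; the function name and rescaling convention are illustrative, not PhotoShoot's internals:

```python
import numpy as np

def flat_field_correct(raw, dark, white, eps=1e-6):
    """Per-pixel flat-field correction for one spectral band.

    raw, dark, white: 2-D arrays holding the scene frame, dark frame,
    and uniform-white-reference frame captured under the same band.
    """
    gain = white.astype(np.float64) - dark
    corrected = (raw.astype(np.float64) - dark) / np.maximum(gain, eps)
    # Rescale so the white reference maps back to its mean level.
    return corrected * gain.mean()
```

With this convention, a pixel that recorded half the white-reference level comes out at half the mean white level, regardless of illumination falloff or pixel-to-pixel gain variation.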

For color validation, we encourage the use of X-Rite ColorChecker pigment targets laid along the edge of every picture. MegaVision makes custom targets for this purpose.

For calibration of neutral balance, we typically use an equal-energy white such as Spectralon. We expand our 3-color white balance and color calibration software (MegaVision pioneered the concept of white balance over 20 years ago) to encompass N colors.
PhotoShoot includes tools to facilitate measuring and adjusting illumination uniformity over the scene.

If budgets allow, additional calibration is possible. For example, MegaVision's aerial customers perform an exhaustive radiometric and geometric camera calibration: they actually map the warping of the sensor surface, calibrate the lens's radiometric fall-off, and characterize its geometric distortion. It is possible to apply this process to the EV System. This is an expensive option, but it is available if the need arises.
LEDs are quite consistent, close to manufacturer specs, but there is measurable variation. We certify all spectral bands of each light panel with an Ocean Optics spectrometer. Additionally, wavelength requirements can be specified by the user when ordering SpectraPalette™ illuminators.

How close is the EV System to being a spectrophotometer? 
A spectrophotometer can record response over a large number of narrow spectral bands.  However, a spectrophotometer evaluates only a single spot: i.e., 1 pixel resolution.   The EurekaVision System, on the other hand, evaluates millions of spots;  50 million for the E7.  But practical considerations limit the system to 20 or so primary spectral bands.

Is there any method to verify that spectral bands are precisely the same bandwidth as last night or last week? 
Each panel is serialized, and the serial number is read by PhotoShoot. Calibration data is tied to serial-numbered panels, so calibration information for each panel may be tracked. By photographing an equal-energy white reflectance calibration target on a regular schedule, calibration can be easily tracked over time. Color calibration profiles for each illuminator-camera combination can also be acquired and used for post-processing.

Discuss the lighting setup
A light source perpendicular to the scene on the optical axis will tend to eliminate surface texture and increase flare, especially from specular surfaces such as illuminated parchments, which often use gold foil. So 45º incidence is the recommended lighting angle. We can also supply raking-incidence illumination, in which light grazing the surface can enhance the texture or depth variation of a surface.

How does PhotoShoot control the lights?
While there will typically be two panels, one for the left side and one for the right, PhotoShoot can control a great deal more: support for up to 14 panels is normally available. Each light panel is assigned an HID enumerator by the host computer's operating system.

PhotoShoot interrogates all lights and evaluates their responses. The model, version, and serial numbers are reported in the Camera Status window. If PhotoShoot is set to Scan on Startup, it will scan all devices each time it launches, interrogate each device, and enable recognized devices for use. The user can recognize each light panel by its name and the unique number reported by the panel.

Since each light panel has a unique identifier that is discovered when PhotoShoot polls it, a matching name can be affixed to the exterior of the light panel so the user knows which panel is being adjusted in software.

Should the user add panels or unplug a panel (intentionally or not) while PhotoShoot is running, port assignments can be updated from within PhotoShoot simply by pressing a button in the “Serial Status” tab of the Camera Information dialog box, accessed under the Setup/Capture/Camera Status pull-down menu. This scans all ports and assigns recognized devices for use. The user need not be concerned with the ports to which the devices are attached, only with the unique identifiers of the recognized devices.
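The port-independent identification described above can be pictured as a registry keyed by serial number rather than by port. The sketch below is hypothetical; the function and field names are illustrative, not PhotoShoot's actual internals:

```python
# Hypothetical sketch of port-independent panel identification: devices
# are keyed by their unique serial number, never by the port they occupy.

def rescan(enumerated_devices, known_panels):
    """Re-assign ports after panels are added or unplugged.

    enumerated_devices: {port: serial} as reported by the OS HID layer.
    known_panels: {serial: panel_name}, matching the labels affixed
    to each panel. Returns {panel_name: port} for recognized panels.
    """
    assignments = {}
    for port, serial in enumerated_devices.items():
        if serial in known_panels:
            assignments[known_panels[serial]] = port
    return assignments

# The same panel keeps its name even if it moves to a different port:
ports = rescan({"USB3": "SN-0042"}, {"SN-0042": "Left panel"})
# -> {"Left panel": "USB3"}
```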

What controls are available for the light panels?
Basic controls via USB are in PhotoShoot: which spectral band on which panel turns on, when it turns on, and how long it stays on. An exposure table, called the N-Shot table, defines all the lighting, shutter, and optional color-wheel actions that occur during an N-shot capture. “N” is the number of shots of a scene, typically 12 but it can be more or fewer. Various tables may be created for various capture requirements. Once a table is created, capturing an N-shot series is a simple matter of pressing PhotoShoot's shutter-release button. Multiple shots are automatically captured, named, corrected, and saved, and the lighting setup of each shot is automatically invoked.
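Conceptually, the N-Shot table is an ordered list of per-shot lighting actions. The sketch below illustrates the idea; the field names and structure are hypothetical, not PhotoShoot's actual table format:

```python
# Hypothetical sketch of an N-Shot exposure table: one entry per shot,
# each defining which band fires on which panels and for how long.
# Field names are illustrative; PhotoShoot's actual format may differ.

n_shot_table = [
    {"shot": 1, "band_nm": 365, "panels": ["left", "right"], "duration_ms": 200},
    {"shot": 2, "band_nm": 450, "panels": ["left", "right"], "duration_ms": 120},
    {"shot": 3, "band_nm": 940, "panels": ["left", "right"], "duration_ms": 150},
]

def run_sequence(table, fire):
    """Invoke fire(entry) once per shot, in order; returns N."""
    for entry in sorted(table, key=lambda e: e["shot"]):
        fire(entry)
    return len(table)

captured = []
n = run_sequence(n_shot_table, captured.append)   # n == 3, one per shot
```

A single shutter-release press then amounts to walking this table: each entry supplies the lighting setup for its shot automatically.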

PhotoShoot controls power as well as duration. It also allows turning on multiple wavelengths at once to illuminate with white light of various color temperatures, with Color Rendering Index values in the high 90s. SpectraPalette firmware can be updated as new features are introduced.

Will each color be discrete or are mixtures possible?
Mixtures are possible in any combination.

How about backlighting and opacity?
Backlighting can be used; correlating back-lit images with front-lit images gives more clues to what lies beneath (or within translucent materials).

Who would use a hyperspectral picture system?
Individuals and institutions responsible for reproduction, conservation, and preservation, such as conservators, archivists, museums, and libraries, will find considerable use. Additionally, collectors of valuable objects, such as philatelists and numismatists, can use it to help assure the value and provenance of their objects.

Do we have PCA or other quantitative postprocessing algorithms in Photoshoot? 
While the EV System approach was designed to provide images well suited to sophisticated postprocessing techniques, we don't include such post-processing tools in PhotoShoot. We focus on capture-look-organize-save. Our effort is to provide images of sufficient quality, documentation, and provenance to enable a wide range of post-processing. PhotoShoot images are compatible with image-analysis software such as ENVI and ImageJ.
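As an illustration of the kind of analysis downstream tools perform, here is a minimal PCA over a spectral image stack using NumPy. This is not a PhotoShoot feature; it mirrors what packages like ENVI do with the captured image sets:

```python
import numpy as np

def spectral_pca(stack, n_components=3):
    """PCA over a spectral image stack of shape (bands, height, width).

    Returns component images of shape (n_components, height, width).
    A sketch of the technique, not part of PhotoShoot itself.
    """
    bands, h, w = stack.shape
    X = stack.reshape(bands, -1).T.astype(np.float64)  # pixels x bands
    X -= X.mean(axis=0)                                # center each band
    # Principal axes from the SVD of the centered pixel-by-band matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T                   # pixels x components
    return scores.T.reshape(n_components, h, w)

# Example: a synthetic 12-band, 4x4-pixel stack
stack = np.random.rand(12, 4, 4)
components = spectral_pca(stack, n_components=3)       # shape (3, 4, 4)
```

The first few component images concentrate the variance across bands, which is why PCA is a common first step for revealing faint or obscured features in multispectral captures.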

Explain a little about the camera itself
MegaVision has built many E7 backs with the 50-megapixel CCD for other markets. Most fly in airplanes and take geo-referenced pictures of the ground. Some fly in tactical theaters.

We do the dark and gain adjustments on the fly; the user doesn't need to get involved.

Because the pixels are getting quite small (6 microns on the 50-megapixel E7), because adjacent-pixel response is fully independent (i.e., a pixel doesn't care what its neighbor sees), and because there is no need to low-pass filter the light to reduce Bayer-pattern under-sampling artifacts, the choice of lens is critical. The camera's performance is completely predicated on the choice of lens. The best lenses for typical EurekaVision System applications are available with Copal 0 sized mounts, so we need a camera that accepts a Copal 0 sized lens/shutter. We built a USB interface to the Schneider digital shutter, a Copal 0 sized shutter that accepts Copal-mount lenses and lets us control the aperture and shutter speed from the computer. The shutter is highly reliable, with extremely low vibration so as not to contribute to shake and affect image registration. In most applications, the shutter is opened, the computer-controlled light duration supplies the exposure, and the shutter closes, avoiding the exposure vagaries common to mechanical shutters.

We can mount the lens/shutter to any lens board on any view camera. A unit like the Mamiya RZ or the Fuji GX 680 could work, since we usually don't need the swings and tilts of a view camera; unfortunately, such units don't accept standard lens-board-mounting Copal shutters and lenses. View cameras and other technical cameras with fine focus adjustments, such as the Toyo VX 23D and Linhof M679, work well. Our favorite host camera is the Novoflex Balpro, which has no swings or tilts to cause optical error and allows approximately 2000 ppi with the standard bellows length.

What about file formats?
PhotoShoot creates files in various formats: DNG (Adobe's raw "Digital Negative" file format), TIFF, and JPEG.
PhotoShoot always captures raw files, which it can develop into TIFF (8 or 16 bits/color) or JPEG, or export as raw DNGs to the many DNG-compliant applications. PhotoShoot TIFF files are encoded as Lab color files.

Developing a DNG file from a monochrome sensor is much different from developing a DNG file from a color sensor. No interpolation is required for the monochrome-sensor image, so the developed image retains the purity of the raw file.

Talk a little more about PhotoShoot.
PhotoShoot is just what its name implies: an application for capturing pictures. PhotoShoot intimately couples with the camera and the lights to enable precise control of each. PhotoShoot's toolbox enables optimization of the scene, light, lens, aperture, shutter, and digital back. PhotoShoot controls the capture, references the image data to objective imaging standards, and enables rapid and convenient inspection of captured images to verify image quality.

Metadata in the header of the picture file facilitates organization, naming, and searching.

We adapt standard (IPTC and EXIF) metadata formats and optimize the fields where necessary to store all kinds of searchable information into the header of each file.  File naming and organization can be automatically determined by metadata. 
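Metadata-driven naming can be pictured as composing the file name from header fields rather than typing it by hand. A minimal sketch; the field names below are illustrative placeholders, not the exact IPTC/EXIF tags PhotoShoot uses:

```python
# Hypothetical sketch of metadata-driven file naming: the name is built
# from header fields rather than entered manually. Field names here are
# illustrative, not the exact IPTC/EXIF tags used in practice.

def name_from_metadata(meta):
    """Compose a name like 'Archive-042_0365nm_shot03.dng' from fields."""
    return "{obj}_{band:04d}nm_shot{shot:02d}.dng".format(
        obj=meta["ObjectName"],
        band=meta["SpectralBand_nm"],
        shot=meta["ShotNumber"],
    )

meta = {"ObjectName": "Archive-042", "SpectralBand_nm": 365, "ShotNumber": 3}
fname = name_from_metadata(meta)   # 'Archive-042_0365nm_shot03.dng'
```

Because every file carries these fields in its header, the same scheme that names files can also drive folder organization and later searching.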

A full year warranty is provided and extended warranty is available.