Multimedia


In many application areas, such as medicine, law enforcement, video game development, and web design, users may create new (derived) multimedia objects by editing existing (base) ones.  To save space, a derived object can be stored as the set of editing operations used to create it, along with a reference to its base, so the binary format of the derived object does not have to be physically stored in the database.  When a user retrieves such an object, the system accesses the referenced base object and applies the associated editing operations to it.  This storage format is called a virtual image.
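The idea can be sketched as a small data structure.  This is only an illustration, not the project's actual implementation: the operation names (`invert`), the record fields, and the dictionary standing in for the database are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical editing operation; the name and signature are illustrative only.
def invert(img):
    """Invert 8-bit pixel intensities in a 2-D pixel grid."""
    return [[255 - p for p in row] for row in img]

@dataclass
class VirtualImage:
    """A derived object stored as a base reference plus editing operations,
    rather than as its own binary data."""
    base_id: str
    operations: list = field(default_factory=list)  # (function, args) pairs

    def materialize(self, database):
        """Fetch the referenced base object and replay the stored edits in order."""
        img = database[self.base_id]
        for op, args in self.operations:
            img = op(img, *args)
        return img

# Usage: a 2x2 base image and a derived object that inverts it.
database = {"base-1": [[0, 10], [20, 30]]}
derived = VirtualImage("base-1", [(invert, ())])
print(derived.materialize(database))  # -> [[255, 245], [235, 225]]
```

Only `database` holds pixel data; the derived object occupies a few bytes until it is materialized on retrieval.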

The goal of this project is to demonstrate that virtual images can be used to improve Content-Based Information Retrieval (CBIR).  Our project is a collaborative effort between OU and Baylor University, and has been funded in part by NASA.    

Knowledge of the editing operations stored in virtual images allows us to perform two aspects of CBIR more efficiently: feature extraction and similarity search.  Conventional multimedia database management systems generally extract the set of features used for querying from each object as it is inserted into the database.  This can be extremely time-consuming, especially if the features are extracted manually.  If derived objects are stored as virtual images, however, we can use the semantic information in that storage format to determine their features, so only the base images have to be analyzed manually.  To automatically extract features from any virtual image, we must determine the effects of all possible editing operations on the set of features used for querying.  Consequently, the set of editing operations used in our virtual images must be complete, meaning that it can represent all possible transformations from one object to another.  In addition, the set should be minimal, meaning that no proper subset of it is complete.  We have developed methods to test a set of image editing operations for these properties.
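A minimal sketch of this feature-propagation idea, under assumptions of our own: a single hypothetical feature (mean pixel intensity) and two hypothetical operations, each paired with a rule describing its effect on that feature.  With such rules, a derived object's features follow from its base's features and its operation list, without analyzing the derived object itself.

```python
# Each editing operation carries a rule for its effect on the feature.
# Both operations and rules here are illustrative assumptions.
FEATURE_RULES = {
    "invert":   lambda mean, args: 255 - mean,                # inversion mirrors the mean
    "brighten": lambda mean, args: min(255, mean + args[0]),  # shift, clamped at 255
}

def derive_features(base_mean, operations):
    """Compute a derived object's feature value from its base's feature
    and the operation list in its virtual image, without touching pixels."""
    mean = base_mean
    for name, args in operations:
        mean = FEATURE_RULES[name](mean, args)
    return mean

# Usage: base image with mean intensity 100, derived by brightening then inverting.
print(derive_features(100, [("brighten", (40,)), ("invert", ())]))  # -> 115
```

Completeness of the operation set matters here precisely because every operation that can appear in a virtual image needs such a rule.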

The second aspect of this project concerns improving similarity search using virtual images.  Finding the k nearest neighbors of a query object involves many distance computations, and calculating these distances can be very expensive.  Using virtual images, we can reduce the number of times such expensive distance functions must be computed over the entire database.  By the triangle inequality, an upper bound on the actual distance from a query object, Q, to a derived object, D, is the distance from D to its base plus the distance from D's base to Q.  D's distance to its base can be determined directly from the editing operations stored in the virtual image syntax: each operation can be assigned a weight, so the distance between D and its base is the sum of the weights of the operations contained in D's virtual image syntax.  Our goal is to use this information to develop an algorithm for satisfying nearest-neighbor queries that minimizes the number of times expensive distance functions are calculated.
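One standard pruning strategy consistent with these bounds can be sketched as follows.  This is not the project's algorithm, just an illustration under assumed inputs: d(D, base(D)) is taken to be w(D), the sum of D's operation weights, and for a metric distance the reverse triangle inequality gives the lower bound |d(Q, base(D)) - w(D)| <= d(Q, D).  When that lower bound already exceeds the current k-th best distance, D's exact distance is never computed.

```python
def knn_with_pruning(query, bases, derived, k, distance):
    """bases: {base_id: data}.  derived: list of (obj_id, base_id, weight, data),
    where weight approximates the object's distance to its base.
    Returns the k nearest (distance, obj_id) pairs and the number of
    expensive distance calls made."""
    calls = 0

    def dist(a, b):
        nonlocal calls
        calls += 1
        return distance(a, b)

    # One exact distance per base object; reused for every derived object.
    base_dist = {bid: dist(query, data) for bid, data in bases.items()}
    best = sorted((d, bid) for bid, d in base_dist.items())[:k]

    for oid, bid, w, data in derived:
        kth = best[-1][0] if len(best) >= k else float("inf")
        if abs(base_dist[bid] - w) > kth:
            continue  # lower bound beats the k-th best distance: prune
        best = sorted(best + [(dist(query, data), oid)])[:k]

    return best, calls

# Usage with 1-D "objects" and absolute difference as the (metric) distance.
bases = {"b1": 0.0, "b2": 100.0}
derived = [("d1", "b1", 2.0, 1.5), ("d2", "b2", 3.0, 97.0)]
result, calls = knn_with_pruning(5.0, bases, derived, k=2,
                                 distance=lambda a, b: abs(a - b))
print(result, calls)  # -> [(3.5, 'd1'), (5.0, 'b1')] 3
```

Here the far-away derived object d2 is pruned by its bound, so only three exact distances are computed instead of four; with many derived objects per base, the savings grow accordingly.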


For problems or questions regarding this web site, contact database@cs.ou.edu.