SwePub

Result list for search "WFRF:(Lawal Najeem)"

Search: WFRF:(Lawal Najeem)

  • Results 1-50 of 59
1.
  • Abdul Waheed, Malik, 1981-, et al. (author)
  • Generalized Architecture for a Real-time Computation of an Image Component Features on a FPGA
  • Other publication (other academic/artistic) abstract
    • This paper describes a generalized architecture for real-time component labeling and computation of image component features. Computing real-time image component features is one of the most important paradigms for modern machine vision systems. Embedded machine vision systems demand robust performance, power efficiency as well as minimum area utilization. The presented architecture can easily be extended with additional modules for parallel computation of arbitrary image component features. Hardware modules for component labeling and feature calculation run in parallel. This modularization makes the architecture suitable for design automation. Our architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 24.20 mW at 86 frames per second on a Xilinx Spartan-6 FPGA.
  •  
2.
  • Ahmad, Naeem, et al. (author)
  • A taxonomy of visual surveillance systems
  • 2013
  • Report (other academic/artistic) abstract
    • The increased security risk in society and the availability of low cost sensors and processors has expedited the research in surveillance systems. Visual surveillance systems provide real time monitoring of the environment. Designing an optimized surveillance system for a given application is a challenging task. Moreover, the choice of components for a given surveillance application out of a wide spectrum of available products is not an easy job. In this report, we formulate a taxonomy to ease the design and classification of surveillance systems by combining their main features. The taxonomy is based on three main models: behavioral model, implementation model, and actuation model. The behavioral model helps to understand the behavior of a surveillance problem. The model is a set of functions such as detection, positioning, identification, tracking, and content handling. The behavioral model can be used to pinpoint the functions which are necessary for a particular situation. The implementation model structures the decisions which are necessary to implement the surveillance functions, recognized by the behavioral model. It is a set of constructs such as sensor type, node connectivity and node fixture. The actuation model is responsible for taking precautionary measures when a surveillance system detects some abnormal situation. A number of surveillance systems are investigated and analyzed on the basis of developed taxonomy. The taxonomy is general enough to handle a vast range of surveillance systems. It has organized the core features of surveillance systems at one place. It may be considered an important tool when designing surveillance systems. The designers can use this tool to design surveillance systems with reduced effort, cost, and time.
  •  
3.
  • Ahmad, Naeem, et al. (author)
  • Cost Optimization of a Sky Surveillance Visual Sensor Network
  • 2012
  • In: Proceedings of SPIE - The International Society for Optical Engineering. - Belgium : SPIE - International Society for Optical Engineering. - 9780819491299 ; Art. no. 84370U
  • Conference paper (peer-reviewed) abstract
    • A Visual Sensor Network (VSN) is a network of spatially distributed cameras. The primary difference between VSN and other type of sensor network is the nature and volume of information. A VSN generally consists of cameras, communication, storage and central computer, where image data from multiple cameras is processed and fused. In this paper, we use optimization techniques to reduce the cost as derived by a model of a VSN to track large birds, such as Golden Eagle, in the sky. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges of altitudes. The sub-ranges of altitudes are monitored by individual VSNs, VSN1 monitors lower range, VSN2 monitors next higher and so on, such that a minimum cost is used to monitor a given area. The VSNs may use similar or different types of cameras but different optical components, thus, forming a heterogeneous network.  We have calculated the cost required to cover a given area by considering an altitudes range as single element and also by dividing it into sub-ranges. To cover a given area with given altitudes range, with a single VSN requires 694 camera nodes in comparison to dividing this range into sub-ranges of altitudes, which requires only 96 nodes, which is 86% reduction in the cost.
  •  
4.
  • Ahmad, Naeem, et al. (author)
  • Model and placement optimization of a sky surveillance visual sensor network
  • 2011
  • In: Proceedings - 2011 International Conference on Broadband and Wireless Computing, Communication and Applications, BWCCA 2011. - : IEEE Computer Society. - 9781457714559 ; pp. 357-362
  • Conference paper (peer-reviewed) abstract
    • Visual Sensor Networks (VSNs) are networks which generate two dimensional data. The major difference between VSN and ordinary sensor network is the large amount of data. In VSN, a large number of camera nodes form a distributed system which can be deployed in many potential applications. In this paper we present a model of the physical parameters of a visual sensor network to track large birds, such as Golden Eagle, in the sky. The developed model is used to optimize the placement of the camera nodes in the VSN. A camera node is modeled as a function of its field of view, which is derived by the combination of the lens focal length and camera sensor. From the field of view and resolution of the sensor, a model for full coverage between two altitude limits has been developed. We show that the model can be used to minimize the number of sensor nodes for any given camera sensor, by exploring the focal lengths that both give full coverage and meet the minimum object size requirement. For the case of large bird surveillance we achieve 100% coverage for relevant altitudes using 20 camera nodes per km2 for the investigated camera sensors.
  •  
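The camera-node model described in the two entries above reduces, under a pinhole-camera assumption, to a pair of closed-form relations: the ground footprint at the lower altitude limit sets the node density, and the object size projected onto the sensor at the upper altitude limit sets the achievable pixel resolution. The Python sketch below illustrates that reasoning; the sensor size, pixel pitch, focal length, altitude limits and wingspan are illustrative assumptions, not parameters taken from the papers.

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angular field of view (degrees) along one sensor axis, pinhole model."""
    return 2.0 * math.degrees(math.atan(sensor_mm / (2.0 * focal_mm)))

def footprint_m(altitude_m: float, sensor_mm: float, focal_mm: float) -> float:
    """Side length (m) of the ground area imaged at a given altitude along one axis."""
    return altitude_m * sensor_mm / focal_mm

def pixels_on_object(object_m: float, altitude_m: float, focal_mm: float, pixel_um: float) -> float:
    """Number of pixels covering an object of size object_m seen at altitude_m."""
    object_on_sensor_mm = focal_mm * object_m / altitude_m
    return object_on_sensor_mm * 1000.0 / pixel_um

# Illustrative values only (assumed, not taken from the publications).
sensor_mm, pixel_um, focal_mm = 6.0, 3.0, 4.0
low_alt_m, high_alt_m, wingspan_m = 150.0, 600.0, 2.0

side = footprint_m(low_alt_m, sensor_mm, focal_mm)   # full coverage must hold at the lower limit
nodes_per_km2 = 1e6 / (side * side)                  # naive non-overlapping square tiling
print(f"FOV {fov_deg(sensor_mm, focal_mm):.1f} deg, footprint {side:.0f} m at {low_alt_m:.0f} m")
print(f"about {nodes_per_km2:.0f} nodes/km^2; "
      f"{pixels_on_object(wingspan_m, high_alt_m, focal_mm, pixel_um):.1f} px across a "
      f"{wingspan_m:.0f} m wingspan at {high_alt_m:.0f} m")
```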
5.
  • Ahmad, Naeem, et al. (author)
  • Model, placement optimization and verification of a sky surveillance visual sensor network
  • 2013
  • In: International Journal of Space-Based and Situated Computing (IJSSC). - 2044-4893 .- 2044-4907. ; 3:3, pp. 125-135
  • Journal article (peer-reviewed) abstract
    • A visual sensor network (VSN) is a distributed system of a large number of camera nodes, which generates two dimensional data. This paper presents a model of a VSN to track large birds, such as golden eagle, in the sky. The model optimises the placement of camera nodes in VSN. A camera node is modelled as a function of lens focal length and camera sensor. The VSN provides full coverage between two altitude limits. The model can be used to minimise the number of sensor nodes for any given camera sensor, by exploring the focal lengths that fulfils both the full coverage and minimum object size requirement. For the case of large bird surveillance, 100% coverage is achieved for relevant altitudes using 20 camera nodes per km² for the investigated camera sensors. A real VSN is designed and measurements of VSN parameters are performed. The results obtained verify the VSN model.
  •  
6.
  • Ahmad, Naeem, et al. (author)
  • Modeling and Verification of a Heterogeneous Sky Surveillance Visual Sensor Network
  • 2013
  • In: International Journal of Distributed Sensor Networks. - : SAGE Publications. - 1550-1329 .- 1550-1477. ; Art. id. 490489
  • Journal article (peer-reviewed) abstract
    • A visual sensor network (VSN) is a distributed system of a large number of camera nodes and has useful applications in many areas. The primary difference between a VSN and an ordinary scalar sensor network is the nature and volume of the information. In contrast to scalar sensor networks, a VSN generates two-dimensional data in the form of images. In this paper, we design a heterogeneous VSN to reduce the implementation cost required for the surveillance of a given area between two altitude limits. The VSN is designed by combining three sub-VSNs, which results in a heterogeneous VSN. Measurements are performed to verify full coverage and minimum achieved object image resolution at the lower and higher altitudes, respectively, for each sub-VSN. Verification of the sub-VSNs also verifies the full coverage of the heterogeneous VSN, between the given altitudes limits. Results show that the heterogeneous VSN is very effective to decrease the implementation cost required for the coverage of a given area. More than 70% decrease in cost is achieved by using a heterogeneous VSN to cover a given area, in comparison to homogeneous VSN. © 2013 Naeem Ahmad et al.
  •  
7.
  • Ahmad, Naeem (author)
  • Modelling and optimization of sky surveillance visual sensor network
  • 2012
  • Licentiate thesis (other academic/artistic) abstract
    • A Visual Sensor Network (VSN) is a distributed system of a large number of camera sensor nodes. The main components of a camera sensor node are an image sensor, an embedded processor, a wireless transceiver and an energy supply. The major difference between a VSN and an ordinary sensor network is that a VSN generates two dimensional data in the form of an image, which can be exploited in many useful applications. Some of the potential application examples of VSNs include environment monitoring, surveillance, structural monitoring, traffic monitoring, and industrial automation. However, VSNs also raise new challenges. They generate large amounts of data which require higher processing power, large bandwidth and more energy resources, but the main constraint is that the VSN nodes are limited in these resources. This research focuses on the development of a VSN model to track large birds, such as the Golden Eagle, in the sky. The model explores a number of camera sensors along with optics, such as a lens of suitable focal length, which ensures a minimum required resolution of a bird flying at the highest altitude. The combination of a camera sensor and a lens forms a monitoring node. The camera node model is used to optimize the placement of the nodes for full coverage of a given area above a required lower altitude. The model also presents the solution to minimize the cost (number of sensor nodes) to fully cover a given area between the two required extremes, higher and lower altitudes, in terms of camera sensor, lens focal length, camera node placement and actual number of nodes for sky surveillance. The area covered by a VSN can be increased by increasing the higher monitoring altitude and/or decreasing the lower monitoring altitude. However, this also increases the cost of the VSN. The desirable objective is to increase the covered area but decrease the cost. This objective is achieved by using optimization techniques to design a heterogeneous VSN. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges of altitudes. The sub-ranges of monitoring altitudes are covered by individual sub-VSNs; VSN1 covers the lower sub-range of altitudes, VSN2 covers the next higher sub-range of altitudes and so on, such that a minimum cost is used to monitor a given area. To verify the concepts developed to design the VSN model, and the optimization techniques to decrease the VSN cost, measurements are performed with actual cameras and optics. Laptop machines are used with the camera nodes as data storage and analysis platforms. The area coverage is measured at the desired lower altitude limits of homogeneous as well as heterogeneous VSNs and verified for 100% coverage. Similarly, the minimum resolution is measured at the desired higher altitude limits of homogeneous as well as heterogeneous VSNs to ensure that the models are able to track the bird at these highest altitudes.
  •  
8.
  •  
9.
  • Ahmad, Naeem, et al. (author)
  • Solution space exploration of volumetric surveillance using a general taxonomy
  • 2013
  • In: Proceedings of SPIE - The International Society for Optical Engineering. - : SPIE. - 9780819495044 ; Art. no. 871317
  • Conference paper (peer-reviewed) abstract
    • Visual surveillance systems provide real time monitoring of the events or the environment. The availability of low cost sensors and processors has increased the number of possible applications of these kinds of systems. However, designing an optimized visual surveillance system for a given application is a challenging task, which often becomes a unique design task for each system. Moreover, the choice of components for a given surveillance application out of a wide spectrum of available alternatives is not an easy job. In this paper, we propose to use a general surveillance taxonomy as a base to structure the analysis and development of surveillance systems. We demonstrate the proposed taxonomy for designing a volumetric surveillance system for monitoring the movement of eagles in wind parks aiming to avoid their collision with wind mills. The analysis of the problem is performed based on taxonomy and behavioral and implementation models are identified to formulate the solution space for the problem. Moreover, we show that there is a need for generalized volumetric optimization methods for camera deployment.
  •  
10.
  • Alqaysi, Hiba (author)
  • Cost Optimization of Volumetric Surveillance for Sky Monitoring : Towards Flying Object Detection and Positioning
  • 2022
  • Doctoral thesis (other academic/artistic) abstract
    • Unlike surface surveillance, volumetric monitoring deals with a three-dimensional target space and moving objects within it. In sky monitoring, objects fly within outdoor and often remote volumes, such as wind farms and airport runways. Therefore, multiple cameras should be implemented to monitor these volumes and analyze flying activities. Due to that, challenges in designing and deploying volumetric surveillance systems for these applications arise. These include configuring the multi-camera node placement, coverage, cost, and the system's ability to detect and position flying objects. The research in this dissertation focuses on three aspects to optimize volumetric surveillance systems in sky monitoring applications. First, the node placement and coverage should be considered in accordance with the monitoring constraints. Also, the node architecture should be configured to minimize the design cost and maximize the coverage. Last, the system should detect small flying objects with good accuracy. Placing the multi-camera nodes in a hexagonal pattern while allowing overlap between adjacent nodes optimizes the placement. The inclusion of monitoring constraints like monitoring altitude and detection pixel resolution influences the node design. Furthermore, the presented results show that modeling the multi-camera nodes as a cylinder rather than a hemisphere minimizes the cost of each node. The design exploration in this thesis provides a method to minimize the node cost based on defined design constraints. It also maximizes the coverage in terms of the number of square meters per dollar. Surveillance systems for sky monitoring should be able to detect and position flying objects. Therefore, two new annotated datasets were introduced that can be used for developing in-flight bird detection methods. The datasets were collected by Mid Sweden University at two locations in Denmark. A YOLOv4-based model for bird detection in 4k grayscale videos captured in wind farms is developed. The model overcomes the problem of detecting small objects in a dynamic background, and it improves detection accuracy through tiling and temporal information incorporation, compared to the standard YOLOv4 and background subtraction.
  •  
11.
  • Alqaysi, Hiba, et al. (author)
  • Cost Optimized Design of Multi-Camera Dome for Volumetric Surveillance
  • 2021
  • In: IEEE Sensors Journal. - 1530-437X .- 1558-1748. ; 21:3, pp. 3730-3737
  • Journal article (peer-reviewed) abstract
    • A multi-camera dome consists of a number of cameras arranged in layers to monitor a hemisphere around its center. In volumetric surveillance, a 3D space is required to be monitored, which can be achieved by implementing a number of multi-camera domes. A monitoring height is considered as a constraint to ensure full coverage of the space below it. Accordingly, the multi-camera dome can be redesigned into a cylinder such that each of its multiple layers has a different coverage radius. Minimum monitoring constraints should be met at all layers. This work presents a cost optimized design for the multi-camera dome that maximizes its coverage. The cost per node and the number of square meters per dollar of multiple configurations are calculated using a search space of cameras and considering a set of monitoring and coverage constraints. The proposed design is cost optimized per node and provides more coverage as compared to the hemispherical multi-camera dome.
  •  
12.
  • Alqaysi, Hiba, et al. (author)
  • Design Exploration of Multi-Camera Dome
  • 2019
  • In: ICDSC 2019 Proceedings of the 13th International Conference on Distributed Smart Cameras. - New York, NY : ACM Digital Library. - 9781450371896
  • Conference paper (peer-reviewed) abstract
    • Visual monitoring systems employ distributed smart cameras to effectively cover a given area satisfying specific objectives. The choice of camera sensors and lenses and their deployment affects design cost, the accuracy of the monitoring system and the ability to position objects within the monitored area. Design cost can be reduced by investigating deployment topology, such as grouping cameras together to form a dome at a node and optimizing it for monitoring constraints. The constraints may include coverage area, the number of cameras that can be integrated in a node and pixel resolution at a given distance. This paper presents a method for optimizing the design cost of a multi-camera dome by analyzing trade-offs between monitoring constraints. The proposed method can be used to reduce monitoring cost while fulfilling design objectives. Results show how to increase coverage area for a given cost by relaxing requirements on design constraints. Multi-camera domes can be used in sky monitoring applications such as monitoring wind parks and remote air-traffic control of airports, where an all-round field of view about a point is required.
  •  
13.
  • Alqaysi, Hiba, et al. (author)
  • Evaluating Coverage Effectiveness of Multi-Camera Domes Placement for Volumetric Surveillance
  • 2017
  • In: ICDSC 2017 Proceedings of the 11th International Conference on Distributed Smart Cameras. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450354875 ; pp. 49-54
  • Conference paper (peer-reviewed) abstract
    • Multi-camera dome is composed of a number of cameras arranged to monitor a half sphere of the sky. Designing a network of multi-camera domes can be used to monitor flying activities in open large area, such as birds' activities in wind parks. In this paper, we present a method for evaluating the coverage effectiveness of the multi-camera domes placement in such areas. We used GPS trajectories of free flying birds over an area of 9 km2 to analyze coverage effectiveness of randomly placed domes. The analysis is based on three criteria namely, detection, positioning and the maximum resolution captured. The developed method can be used to evaluate results of designing and optimizing dome placement algorithms for volumetric monitoring systems in order to achieve maximum coverage.
  •  
14.
  • Alqaysi, Hiba, et al. (author)
  • Full Coverage Optimization for Multi Camera Dome Placement in Volumetric Monitoring
  • 2018
  • In: ACM International Conference Proceeding Series. - New York, NY, USA : ACM Digital Library. - 9781450365116
  • Conference paper (peer-reviewed) abstract
    • Volumetric monitoring can be challenging due to having a 3D target space and moving objects within it. Multi camera dome is proposed to provide a hemispherical coverage of the 3D space around it. This paper introduces a method that optimizes multi camera placement for full coverage in volumetric monitoring system. Camera dome placement is modeled in a volume by adapting the hexagonal packing of circles to provide full coverage at a given height, and 100% detection of flying objects within it. The coverage effectiveness of different placement configurations was assessed using an evaluation environment. The proposed placement is applicable in designing and deploying surveillance systems for remote outdoor areas, such as sky monitoring in wind farms and airport runways in order to record and analyze flying activities.
  •  
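A geometric aside on the hexagonal packing mentioned in the entry above: discs of coverage radius r leave no gaps when their centres sit on a triangular lattice with a horizontal pitch of r·√3 and a row spacing of 1.5·r. The sketch below generates such node positions for a rectangular area; it only illustrates the lattice geometry, does not reproduce the paper's placement optimization or overlap handling, and the coverage radius used is an assumed value.

```python
import math

def hex_cover_positions(width_m: float, height_m: float, radius_m: float):
    """Node centres on a triangular lattice so that discs of radius_m
    fully cover a width_m x height_m rectangle."""
    pitch_x = radius_m * math.sqrt(3.0)   # distance between centres within a row
    pitch_y = 1.5 * radius_m              # distance between rows
    nodes = []
    row, y = 0, 0.0
    while y <= height_m + pitch_y:
        x = 0.0 if row % 2 == 0 else pitch_x / 2.0   # every other row is offset
        while x <= width_m + pitch_x:
            nodes.append((x, y))
            x += pitch_x
        row, y = row + 1, y + pitch_y
    return nodes

# Example: a 3 km x 3 km area with an assumed per-node coverage radius of 225 m.
nodes = hex_cover_positions(3000.0, 3000.0, 225.0)
print(f"{len(nodes)} nodes to cover 9 km^2 ({len(nodes) / 9.0:.1f} nodes/km^2)")
```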
15.
  • Bader, Sebastian, et al. (author)
  • Remote image capturing with low-cost and low-power wireless camera nodes
  • 2014
  • In: Proceedings of IEEE Sensors. - : IEEE Sensors Council. ; pp. 730-733
  • Conference paper (peer-reviewed) abstract
    • Wireless visual sensor networks provide feature-rich information about their surroundings and can thus be used as a universal measurement tool for a great number of applications. Existing solutions, however, have mainly been focused on high sample rate applications, such as video surveillance, object detection and tracking. In this paper, we present a wireless camera node architecture that targets low sample rate applications (e.g., manual inspections and meter reading). The major design considerations are a long system lifetime, a small size and a low production cost. We present the overall architecture with its individual design choices, and evaluate the architecture with respect to its application constraints. With a typical image acquisition cost of 1.5 J for medium quality images and a quiescent power demand of only 7 µW, the evaluation results demonstrate that long operation periods of the order of years can be achieved in low sample rate scenarios.
  •  
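The lifetime claim in the entry above follows from a simple average-power budget: per-image energy divided by the sample interval, plus the quiescent draw. The 1.5 J and 7 µW figures are quoted in the abstract; the battery capacity and sample intervals below are assumptions added for illustration, and the budget ignores self-discharge and radio retransmissions.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def lifetime_years(battery_j: float, image_j: float, period_s: float, quiescent_w: float) -> float:
    """Node lifetime estimated from a naive average-power budget."""
    average_power_w = image_j / period_s + quiescent_w
    return battery_j / average_power_w / SECONDS_PER_YEAR

BATTERY_J = 32_000.0   # assumed: roughly two AA lithium cells
for period_h in (1, 6, 24):
    years = lifetime_years(BATTERY_J, image_j=1.5, period_s=period_h * 3600, quiescent_w=7e-6)
    print(f"one image every {period_h:2d} h -> about {years:.1f} years")
```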
16.
  • Cheng, Xin, 1974-, et al. (author)
  • Hardware Centric Machine Vision for High Precision Center of Gravity Calculation
  • 2010
  • In: PROCEEDINGS OF WORLD ACADEMY OF SCIENCE, ENGINEERING AND TECHNOLOGY. ; 40, pp. 576-583
  • Conference paper (peer-reviewed) abstract
    • We present a hardware oriented method for real-time measurements of object’s position in video. The targeted application area is light spots used as references for robotic navigation. Different algorithms for dynamic thresholding are explored in combination with component labeling and Center Of Gravity (COG) for highest possible precision versus Signal-to-Noise Ratio (SNR). This method was developed with a low hardware cost in focus having only one convolution operation required for preprocessing of data.
  •  
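The computation the entry above maps to hardware — thresholding, connected-component labeling and a per-component centre of gravity (COG) — can be stated compactly in software. The sketch below uses SciPy purely as a functional reference for that chain; it is not the paper's FPGA pipeline, its dynamic thresholding is replaced by a fixed assumed threshold, and the synthetic frame is only there to make the example runnable.

```python
import numpy as np
from scipy import ndimage

def light_spot_centroids(frame: np.ndarray, threshold: int = 200):
    """Intensity-weighted (row, col) centres of gravity of bright connected components."""
    mask = frame > threshold                 # fixed threshold stands in for dynamic thresholding
    labels, n = ndimage.label(mask)          # connected-component labeling (4-connectivity by default)
    return ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))

# Synthetic 640x480 frame with two bright spots on a dark background.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:104, 200:204] = 250
frame[300:306, 400:406] = 220
print(light_spot_centroids(frame))
```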
17.
  • Dreier, Till, et al. (author)
  • A USB 3.0 readout system for Timepix3 detectors with on-board processing capabilities
  • 2018
  • In: Journal of Instrumentation. - 1748-0221. ; 13
  • Journal article (peer-reviewed) abstract
    • Timepix3 is a high-speed hybrid pixel detector consisting of a 256 x 256 pixel matrix with a maximum data rate of up to 5.12 Gbps (80 MHit/s). The ASIC is equipped with eight data channels that are data driven and zero suppressed, making it suitable for particle tracking and spectral imaging. In this paper, we present a USB 3.0-based programmable readout system with on-line preprocessing capabilities. USB 3.0 is present on all modern computers and can, under real-world conditions, achieve around 320 MB/s, which allows up to 40 MHit/s of raw pixel data. With on-line processing, the proposed readout system is capable of achieving a higher transfer rate (approaching Timepix4) since only relevant information rather than raw data will be transmitted. The system is based on an Opal Kelly development board with a Spartan 6 FPGA providing a USB 3.0 interface between FPGA and PC via an FX3 chip. It connects to a CERN Timepix3 chipboard with a standard VHDCI connector via a custom designed mezzanine card. The firmware is structured into blocks such as detector interface, USB interface and system control, and an interface for data pre-processing. On the PC side, a Qt/C++ multi-platform software library is implemented to control the readout system, providing access to detector functions and handling high-speed USB 3.0 streaming of data from the detector. We demonstrate equalisation, calibration and data acquisition using a Cadmium Telluride sensor and optimise imaging data using simultaneous ToT (Time-over-Threshold) and ToA (Time-of-Arrival) information. The presented readout system is capable of other on-line processing such as analysis and classification of nuclear particles with current or larger FPGAs.
  •  
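The rate figures quoted above fit together arithmetically: about 320 MB/s of practical USB 3.0 throughput divided by roughly 8 bytes per raw hit gives the stated 40 MHit/s ceiling, which is why reducing hits to clusters or centroids on the FPGA raises the effective rate. The bytes-per-hit value is inferred from the quoted numbers rather than taken from the Timepix3 data format, so treat the sketch below as a back-of-the-envelope check only.

```python
def sustained_hit_rate_mhits(link_mb_per_s: float, bytes_per_hit: float) -> float:
    """Hit rate (MHit/s) a link can sustain for a given per-hit payload size."""
    return link_mb_per_s / bytes_per_hit

USB3_EFFECTIVE_MB_S = 320.0   # practical USB 3.0 throughput quoted in the abstract
RAW_BYTES_PER_HIT = 8.0       # assumption inferred from 320 MB/s ~ 40 MHit/s

print(sustained_hit_rate_mhits(USB3_EFFECTIVE_MB_S, RAW_BYTES_PER_HIT))   # ~40 MHit/s raw
print(sustained_hit_rate_mhits(USB3_EFFECTIVE_MB_S, 2.0))                 # e.g. 2-byte reduced records -> ~160 MHit/s
```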
18.
  • Fedorov, Igor, et al. (author)
  • Placement Strategy of Multi-Camera Volumetric Surveillance System for Activities Monitoring
  • 2017
  • In: ICDSC 2017 Proceedings of the 11th International Conference on Distributed Smart Cameras. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450354875 ; pp. 113-118
  • Conference paper (peer-reviewed) abstract
    • The design of a multi-camera surveillance system comes with many advantages; for example, it facilitates understanding of how flying objects act in a given volume. One possible application is to observe the interaction of birds and calculate their trajectories around wind turbines, in order to create promising systems for preventing bird collisions with turbine blades. However, there are also challenges, such as finding the optimal node placement and camera calibration. To address these challenges we investigated a trade-off between calibration accuracy and node requirements, including resolution, modulation transfer function, field of view and baseline angle. We developed a strategy for camera placement to achieve improved coverage for golden eagle monitoring and tracking. This strategy is based on a modified resolution criterion taking into account the contrast function of the camera and the estimation of the base angle between the cameras.
  •  
19.
  • Fedorov, Igor, 1980-, et al. (author)
  • Towards calibration of outdoor multi-camera visual monitoring system
  • 2018
  • In: ACM International Conference Proceeding Series. - New York, NY, US : ACM Digital Library. - 9781450365116
  • Conference paper (peer-reviewed) abstract
    • This paper proposes a method for calibrating multi-camera systems where no natural reference points exist in the surrounding environment. Monitoring the air space at wind farms is our test case. The goal is to monitor the trajectories of flying birds to prevent them from colliding with rotor blades. Our camera calibration method is based on the observation of a portable artificial reference marker made out of a pulsed light source and a navigation satellite sensor module. The reference marker can determine and communicate its position in the world coordinate system at centimeter precision using navigation sensors. Our results showed that simultaneous detection of the same marker in several cameras with overlapping fields of view allowed us to determine the marker's position in 3D world coordinate space with an accuracy of 3-4 cm. These experiments were made in the volume around a wind turbine, at distances from the cameras to the marker within a range of 70 to 90 m.
  •  
20.
  • Imran, Muhammad, et al. (author)
  • Analysis and Characterization of Embedded Vision Systems for Taxonomy Formulation
  • 2013
  • In: Proceedings of SPIE - The International Society for Optical Engineering. - USA : SPIE - International Society for Optical Engineering. - 9780819494290 ; Art. no. 86560J
  • Conference paper (peer-reviewed) abstract
    • The current trend in embedded vision systems is to propose bespoke solutions for specific problems as each application has different requirement and constraints. There is no widely used model or benchmark which aims to facilitate generic solutions in embedded vision systems. Providing such model is a challenging task due to the wide number of use cases, environmental factors, and available technologies. However, common characteristics can be identified to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize the vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system’s taxonomy, in which a number of vision functions as well as their combination characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated by using a quantitative parameter which shows that it covers 95 percent of the investigated vision systems and its flow is ordered for 60 percent systems. This taxonomy will serve as a tool for classification and comparison of systems and will enable the researchers to propose generic and efficient solutions for same class of systems.
  •  
21.
  • Imran, Muhammad, et al. (author)
  • Architecture Exploration Based on Tasks Partitioning Between Hardware, Software and Locality for a Wireless Vision Sensor Node
  • 2012
  • In: International Journal of Distributed Systems and Technologies. - IGI Global, USA. : IGI Global. - 1947-3532 .- 1947-3540. ; 3:2, pp. 58-71
  • Journal article (peer-reviewed) abstract
    • Wireless Vision Sensor Networks (WVSNs) is an emerging field which consists of a number of Visual Sensor Nodes (VSNs). Compared to traditional sensor networks, WVSNs operates on two dimensional data, which requires high bandwidth and high energy consumption. In order to minimize the energy consumption, the focus is on finding energy efficient and programmable architectures for the VSN by partitioning the vision tasks among hardware (FPGA), software (Micro-controller) and locality (sensor node or server). The energy consumption, cost and design time of different processing strategies is analyzed for the implementation of VSN. Moreover, the processing energy and communication energy consumption of VSN is investigated in order to maximize the lifetime. Results show that by introducing a reconfigurable platform such as FPGA with small static power consumption and by transmitting the compressed images after pixel based tasks from the VSN results in longer battery lifetime for the VSN.
  •  
22.
  • Imran, Muhammad, et al. (author)
  • Architecture of Wireless Visual Sensor Node with Region of Interest Coding
  • 2012
  • In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012. - : IEEE conference proceedings. - 9781467347235 ; Art. no. 6474029
  • Conference paper (peer-reviewed) abstract
    • The challenges involved in designing a wireless Vision Sensor Node include the reduction in processing and communication energy consumption, in order to maximize its lifetime. This work presents an architecture for a wireless Vision Sensor Node which consumes low processing and communication energy. The processing energy consumption is reduced by processing lightweight vision tasks on the VSN and by partitioning the vision tasks between the wireless Vision Sensor Node and the server. The communication energy consumption is reduced with Region Of Interest coding together with a suitable bi-level compression scheme. A number of different processing strategies are investigated to realize a wireless Vision Sensor Node with a low energy consumption. The investigation shows that the wireless Vision Sensor Node, using Region Of Interest coding and the CCITT Group 4 compression technique, consumes 43 percent lower processing and communication energy as compared to the wireless Vision Sensor Node implemented without Region Of Interest coding. The proposed wireless Vision Sensor Node can achieve a lifetime of 5.4 years, with a sample period of 5 minutes, using 4 AA batteries.
  •  
23.
  • Imran, Muhammad, et al. (author)
  • Complexity Analysis of Vision Functions for Comparison of Wireless Smart Cameras
  • 2014
  • In: International Journal of Distributed Sensor Networks. - : SAGE Publications. - 1550-1329 .- 1550-1477. ; Art. no. 710685
  • Journal article (peer-reviewed) abstract
    • There are a number of challenges caused by the large amount of data and limited resources such as memory, processing capability, energy consumption, and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which facilitates the complexity estimation and comparison of wireless smart camera systems in order to develop efficient generic solutions. To develop such a tool, we have presented, in this paper, a complexity model by using a system taxonomy. In this model, we have investigated the arithmetic complexity and memory requirements of vision functions with the help of system taxonomy. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with system taxonomy, is used for the complexity estimation of vision functions and for a comparison of vision systems. After comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and will assist in proposing efficient generic solutions for the same class of problems with reduced design and development costs.
  •  
24.
  • Imran, Muhammad, et al. (author)
  • Complexity Analysis of Vision Functions for implementation of Wireless Smart Cameras using System Taxonomy
  • 2012
  • In: Proceedings of SPIE - The International Society for Optical Engineering. - Belgium : SPIE - International Society for Optical Engineering. - 9780819491299 ; Art. no. 84370C
  • Conference paper (peer-reviewed) abstract
    • There are a number of challenges caused by the large amount of data and limited resources such as memory, processing capability, energy consumption and bandwidth when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which has the ability to predict the resource requirements for the development and comparison of vision solutions in wireless smart cameras. To accelerate the development of such tool, we have used a system taxonomy, which shows that the majority of wireless smart cameras have common functions. In this paper, we have investigated the arithmetic complexity and memory requirements of vision functions by using the system taxonomy and proposed an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems with this model and showed that complexity model together with system taxonomy can be used for comparison and generalization of vision solutions. Moreover, it will assist researchers/designers to predict the resource requirements for different class of vision systems in a reduced time and which will involve little effort. 
  •  
25.
  • Imran, Muhammad, et al. (author)
  • Demo: SRAM FPGA based Wireless Smart Camera: SENTIOF-CAM
  • 2014
  • In: Proceedings of the International Conference on Distributed Smart Cameras. - New York, NY, USA : ACM. - 9781450329255
  • Conference paper (peer-reviewed) abstract
    • Wireless Sensor Network applications with huge data requirements are attracting the utilization of high performance embedded platforms, i.e. Field Programmable Gate Arrays (FPGAs), for in-node sensor processing. However, the design complexity, high configuration and static energies of SRAM FPGAs impose challenges for duty cycled applications. In this demo, we demonstrate the functionality of an SRAM FPGA based wireless vision sensor node called SENTIOF-CAM. The demonstration shows that by using intelligent techniques, a low energy and low complexity SRAM FPGA based wireless vision sensor node can be realized for duty cycled applications.
  •  
26.
  • Imran, Muhammad, et al. (author)
  • Energy Driven Selection and Hardware Implementation of Bi-Level Image Compression
  • 2014
  • In: Proceedings of the International Conference on Distributed Smart Cameras. - New York, NY, USA : ACM Press. - 9781450329255
  • Conference paper (peer-reviewed) abstract
    • Wireless Vision Sensor Nodes are considered to have smaller resources and are expected to have a longer lifetime based on the available limited energy. A wireless Vision Sensor Node (VSN) is often characterized to consume more energy in communication as compared to processing. The communication energy can be reduced by reducing the amount of transmission data with the help of a suitable compression scheme. This work investigates bi-level compression schemes including G4, G3, JBIG2, Rectangular, GZIP, GZIP_Pack and JPEG-LS on a hardware platform. The investigation results show that GZIP_pack, G4 and JBIG2 schemes are suitable for a hardware implemented VSN. JBIG2 offers up to a 43 percent reduction in overall energy consumption as compared to G4 and GZIP_pack for complex images. However, JBIG2 has higher resource requirement and implementation complexity. The difference in overall energy consumption is smaller for smooth images. Depending on the application requirement, the exclusion of a header can reduce the energy consumption by approximately 1 to 33 percent.
  •  
27.
  • Imran, Muhammad, et al. (author)
  • Energy Efficient SRAM FPGA based Wireless Vision Sensor Node: SENTIOF-CAM
  • 2014
  • In: IEEE transactions on circuits and systems for video technology (Print). - 1051-8215 .- 1558-2205. ; 24:12, pp. 2132-2143
  • Journal article (peer-reviewed) abstract
    • Many Wireless Vision Sensor Network (WVSN) applications are characterized by low duty cycling. An individual wireless Vision Sensor Node (VSN) in a WVSN is required to operate with limited resources, i.e., processing, memory and wireless bandwidth, on the available limited energy. For such a resource constrained VSN, this paper presents a low complexity, energy efficient and programmable VSN architecture based on a design matrix which includes partitioning of the processing load between the node and a server, a low complexity background subtraction, bi-level video coding and duty cycling. The task partitioning and the proposed background subtraction reduce the processing energy and design complexity for a hardware implemented VSN. The bi-level video coding reduces the communication energy whereas the duty cycling conserves energy for lifetime maximization. The proposed VSN, referred to as SENTIOF-CAM, has been implemented on a customized single board, which includes an SRAM FPGA, a microcontroller, a radio transceiver and a flash memory. The energy values are measured for different states and the results are compared with existing solutions. The comparison shows that the proposed solution can offer up to 69 times energy reduction. The lifetime based on the measured energy values shows that, for a sample period of 5 minutes, a 3.2 year lifetime can be achieved with a battery of 37.44 kJ energy. In addition, the proposed solution offers a generic architecture with smaller design complexity on a hardware reconfigurable platform and offers easy adaptation for a number of applications.
  •  
28.
  • Imran, Muhammad, et al. (author)
  • Exploration of Target Architecture for a Wireless Camera Based Sensor Node
  • 2010
  • In: 28th Norchip Conference, NORCHIP 2010. - : IEEE conference proceedings. - 9781424489732 ; pp. 1-4
  • Conference paper (peer-reviewed) abstract
    • The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities. In order to meet these challenges, different approaches have been proposed. Research in wireless vision sensor networks has been based on two different assumptions: the first is sending all data to the central base station without local processing; the second is conducting all processing locally at the sensor node and transmitting only the final results. Our research is focused on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we have added the exploration dimension of performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA while communication runs on a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble remover and classification, are processed on the central base station. Our results show that the introduction of an FPGA for some of the visual tasks will result in a longer lifetime for the visual sensor node while the architecture is still programmable.
  •  
29.
  • Imran, Muhammad, et al. (author)
  • Implementation of wireless Vision Sensor Node for Characterization of Particles in Fluids
  • 2012
  • In: IEEE transactions on circuits and systems for video technology (Print). - 1051-8215 .- 1558-2205. ; 22:11, pp. 1634-1643
  • Journal article (peer-reviewed) abstract
    • Wireless Vision Sensor Networks (WVSNs) have a number of wireless Vision Sensor Nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth, limited memory and processing capabilities. In order to meet these challenges, our research is focused on the exploration of energy efficient reconfigurable architectures for VSN. In this work, the design/research challenges associated with the implementation of VSN on different computational platforms such as micro-controller, FPGA and server, are explored. In relation to this, the effect on the energy consumption and the design complexity at the node, when the functionality is moved from one platform to another are analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture, where the compressed images after pixel based operation are transmitted, realize a WVSN system with low energy consumption. Moreover, the complex post processing tasks are moved to a server, with reduced constraints. 
  •  
30.
  • Imran, Muhammad, et al. (author)
  • Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding
  • 2013
  • In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems. - : IEEE Press. - 2156-3357 .- 2156-3365. ; 3:2, pp. 198-209
  • Journal article (peer-reviewed) abstract
    • Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of an individual VSN have been a challenge because of the limited energy availability. To meet this challenge, we have proposed and implemented a programmable and energy efficient VSN architecture which has lower energy requirements and a reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware implemented VSN and a server. The initial data dominated tasks are implemented on the VSN while the control dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to a VSN without the bi-level video coding. The proposed VSN offers an energy efficient, generic architecture with smaller design complexity on a hardware reconfigurable platform and offers easy adaptation for a number of applications as compared to published systems.
  •  
31.
  • Imran, Muhammad (author)
  • Investigation of Architectures for Wireless Visual Sensor Nodes
  • 2011
  • Licentiate thesis (other academic/artistic) abstract
    • Wireless visual sensor networks are an emerging field which has proved useful in many applications, including industrial control and monitoring, surveillance, environmental monitoring, personal care and the virtual world. Traditional imaging systems used a wired link, a centralized network, high processing capabilities, unlimited storage and a power source. In many applications, the wired solution results in high installation and maintenance costs. However, a wireless solution is the preferred choice as it offers less maintenance, lower infrastructure costs and greater scalability. The technological developments in image sensors, wireless communication and processing platforms have paved the way for smart camera networks, usually referred to as Wireless Visual Sensor Networks (WVSNs). WVSNs consist of a number of Visual Sensor Nodes (VSNs) deployed over a large geographical area. The smart cameras can perform complex vision tasks using limited resources such as batteries or alternative energy sources, embedded platforms, a wireless link and a small memory. Current research in WVSNs is focused on reducing the energy consumption of the node so as to maximise the life of the VSN. To meet this challenge, different software and hardware solutions are presented in the literature for the implementation of VSNs. The focus in this thesis is on the exploration of energy efficient reconfigurable architectures for VSNs by partitioning vision tasks on software, hardware platforms and locality. For any application, some of the vision tasks can be performed on the sensor node, after which data is sent over the wireless link to the server where the remaining vision tasks are performed. Similarly, at the VSN, vision tasks can be partitioned on software and hardware platforms. In the thesis, all possible strategies are explored by partitioning vision tasks on the sensor node and on the server. The energy consumption of the sensor node is evaluated for different strategies on a software platform. It is observed that performing some of the vision tasks on the sensor node and sending compressed images to the server, where the remaining vision tasks are performed, will have lower energy consumption. In order to achieve better performance and low power consumption, Field Programmable Gate Arrays (FPGAs) are introduced for the implementation of the sensor node. The strategies with reasonable design times and costs are implemented on a hardware-software platform. Based on the implementation of the VSN on the FPGA together with a micro-controller, the lifetime of the VSN is predicted using the measured energy values of the platforms for different processing strategies. The implementation results prove our analysis that a VSN with such characteristics will result in a longer lifetime.
  •  
32.
  • Imran, Muhammad, et al. (author)
  • Low Complexity Background Subtraction for Wireless Vision Sensor Node
  • 2013
  • In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013. - 9780769550749 ; pp. 681-688
  • Conference paper (peer-reviewed) abstract
    • Wireless vision sensor nodes consist of limited resources such as energy, memory, wireless bandwidth and processing. Thus it becomes necessary to investigate lightweight vision tasks. To highlight the foreground objects, many machine vision applications depend on the background subtraction technique. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory. This raises issues like complexity on hardware platform, energy requirements and latency. This work presents a low complexity background subtraction technique for a hardware implemented VSN. The proposed technique utilizes existing image scaling techniques for scaling down the image. The downscaled image is stored in memory of microcontroller which is already there for transmission. For subtraction operation, the background pixels are generated in real time through up scaling. The performance, and memory requirements of the system is compared for four image scaling techniques including nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object as compared to a system which uses a full original background image. The proposed approach will reduce the cost, design/implementation complexity and the memory requirement by a factor of up to 64.
  •  
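The trick in the entry above is to store only a heavily downscaled background and regenerate background pixels on the fly by upscaling, so the stored image shrinks by the square of the scaling factor (a factor of 64 at scale 8). The NumPy sketch below illustrates that idea with averaging for the downscale and nearest-neighbour for the upscale; the threshold and image contents are assumptions, and this is not the paper's hardware implementation.

```python
import numpy as np

def downscale_avg(img: np.ndarray, f: int) -> np.ndarray:
    """Block-average downscale by integer factor f (dimensions must be multiples of f)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upscale_nearest(small: np.ndarray, f: int) -> np.ndarray:
    """Nearest-neighbour upscale by integer factor f."""
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

def foreground_mask(frame: np.ndarray, small_background: np.ndarray, f: int, thr: float = 25.0):
    """Mark pixels that differ from the background regenerated by upscaling."""
    background = upscale_nearest(small_background, f)
    return np.abs(frame.astype(np.float32) - background) > thr

f = 8
background = np.full((480, 640), 40, dtype=np.uint8)
small_background = downscale_avg(background.astype(np.float32), f)   # what the node keeps in memory
frame = background.copy()
frame[200:220, 300:330] = 200                                         # a bright synthetic object
print(int(foreground_mask(frame, small_background, f).sum()), "foreground pixels")
```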
33.
  • Imran, Muhammad, et al. (author)
  • Pre-processing Architecture for IR-Visual Smart Camera Based on Post-Processing Constraints
  • 2016
  • Conference paper (peer-reviewed) abstract
    • In embedded vision systems, the efficiency of pre-processing architectures has a ripple effect on post-processing functions such as feature extraction, classification and recognition. In this work, we investigated a pre-processing architecture for a smart camera system, integrating thermal and vision sensors, by considering the constraints of post-processing. By utilizing the locality feature of the system, we performed pre-processing on the camera node using an FPGA and post-processing on the client device using the microprocessor platform NVIDIA Tegra. The study shows that for outdoor people surveillance applications with complex backgrounds and varying lighting conditions, the pre-processing architecture, which transmits thermal binary Region-of-Interest (ROI) images, offers better classification accuracy and smaller complexity as compared to alternative approaches.
  •  
34.
  • Khursheed, Khursheed, 1983-, et al. (author)
  • Exploration of Local and Central Processing for a Wireless Camera Based Sensor Node
  • 2010
  • In: International Conference on Signals and Electronic Systems, ICSES'10 - Conference Proceeding 2010, Article number 5595231. - : IEEE conference proceedings. - 9788390474342 - 9781424453078 ; pp. 147-150
  • Conference paper (peer-reviewed) abstract
    • Wireless vision sensor network is an emerging field which combines image sensor, on board computation and communication links. Compared to the traditional wireless sensor networks which operate on one dimensional data, wireless vision sensor networks operate on two dimensional data which requires both higher processing power and communication bandwidth. The research focus within the field of wireless vision sensor network has been based on two different assumptions involving either sending data to the central base station without local processing or conducting all processing locally at the sensor node and transmitting only the final results. In this paper we focus on determining an optimal point for intelligence partitioning between the sensor node and the central base station and by exploring compression methods. The lifetime of the visual sensor node is predicted by evaluating the energy consumption for different levels of intelligence partitioning at the sensor node. Our results show that sending compressed images after segmentation will result in a longer life for the sensor node.
  •  
35.
  • Khursheed, Khursheed, et al. (author)
  • Exploration of tasks partitioning between hardware software and locality for a wireless camera based vision sensor node
  • 2011
  • In: Proceedings - 6th International Symposium on Parallel Computing in Electrical Engineering, PARELEC 2011. - : IEEE conference proceedings. - 9780769543970 ; pp. 127-132
  • Conference paper (peer-reviewed) abstract
    • In this paper we have explored different possibilities for partitioning the tasks between hardware, software and locality for the implementation of the vision sensor node, used in wireless vision sensor network. Wireless vision sensor network is an emerging field which combines image sensor, on board computation and communication links. Compared to the traditional wireless sensor networks which operate on one dimensional data, wireless vision sensor networks operate on two dimensional data which requires higher processing power and communication bandwidth. The research focus within the field of wireless vision sensor networks have been on two different assumptions involving either sending raw data to the central base station without local processing or conducting all processing locally at the sensor node and transmitting only the final results. Our research work focus on determining an optimal point of hardware/software partitioning as well as partitioning between local and central processing, based on minimum energy consumption for vision processing operation. The lifetime of the vision sensor node is predicted by evaluating the energy requirement of the embedded platform with a combination of FPGA and micro controller for the implementation of the vision sensor node. Our results show that sending compressed images after pixel based tasks will result in a longer battery life time with reasonable hardware cost for the vision sensor node. © 2011 IEEE.
  •  
36.
  • Lawal, Najeem, et al. (author)
  • Address Generation for FPGA RAMs for Efficient Implementation of Real-Time Video Processing Systems
  • 2005
  • In: Proceedings - 2005 International Conference on Field Programmable Logic and Applications, FPL. - : IEEE conference proceedings. - 0780393627 ; pp. 136-141
  • Conference paper (peer-reviewed) abstract
    • FPGA offers the potential of being a reliable, and high-performance reconfigurable platform for the implementation of real-time video processing systems. To utilize the full processing power of FPGA for video processing applications, optimization of memory accesses and the implementation of memory architecture are important issues. This paper presents two approaches, base pointer approach and distributed pointer approach, to implement accesses to on-chip FPGA Block RAMs. A comparison of the experimental results obtained using the two approaches on realistic image processing systems design cases is presented. The results show that compared to the base pointer approach the distributed pointer approach increases the potential processing power of FPGA, as a reconfigurable platform for video processing systems.
  •  
37.
  • Lawal, Najeem, et al. (author)
  • Architecture driven memory allocation for FPGA Based Real-Time Video Processing Systems
  • 2011
  • In: Proceedings of the 2011 7th Southern Conference on Programmable Logic, SPL 2011, Article number 5782639. - : IEEE conference proceedings. - 9781424488483 ; pp. 143-148
  • Conference paper (peer-reviewed) abstract
    • In this paper, we present an approach that uses information about the FPGA architecture to achieve optimized allocation of embedded memory in real-time video processing systems. A cost function defined in terms of the required memory sizes and the available block- and distributed-RAM resources is used to motivate the allocation decision. This work is a high-level exploration that generates VHDL RTL modules and synthesis constraint files to specify the memory allocation. Results show that the proposed approach achieves an appreciable reduction in block RAM usage over a previous logic-to-memory mapping approach at a negligible increase in logic usage.
  •  
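To make the allocation decision described above concrete, the sketch below greedily maps each required buffer to block RAM or distributed RAM, preferring whichever wastes less of the remaining resources. The block RAM size, resource budget, waste heuristic and tie-breaking are all assumptions for illustration; the paper's actual cost function and its generated VHDL/constraint output are not reproduced here.

```python
from dataclasses import dataclass

BRAM_BITS = 18 * 1024   # assumed capacity of one block RAM

@dataclass
class FpgaBudget:
    bram_blocks: int      # free block RAMs
    dist_ram_bits: int    # free LUT-based distributed RAM, in bits

def allocate(buffer_bits_list, budget: FpgaBudget):
    """Greedy allocation: large buffers first, each to the cheaper resource."""
    plan = []
    for need in sorted(buffer_bits_list, reverse=True):
        blocks = -(-need // BRAM_BITS)                       # ceiling division
        bram_waste = blocks * BRAM_BITS - need
        prefer_bram = budget.bram_blocks >= blocks and (
            bram_waste <= need or budget.dist_ram_bits < need)
        if prefer_bram:
            budget.bram_blocks -= blocks
            plan.append((need, "block RAM", blocks))
        elif budget.dist_ram_bits >= need:
            budget.dist_ram_bits -= need
            plan.append((need, "distributed RAM", need))
        else:
            raise RuntimeError(f"cannot place a buffer of {need} bits")
    return plan

# Example: three 640-pixel 8-bit line buffers plus a few small FIFOs.
buffers = [640 * 8, 640 * 8, 640 * 8, 256, 128, 64]
for need, resource, amount in allocate(buffers, FpgaBudget(bram_blocks=4, dist_ram_bits=4096)):
    print(f"{need:5d} bits -> {resource} ({amount})")
```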
38.
  • Lawal, Najeem, et al. (författare)
  • C++ based System Synthesis of Real-Time Video Processing Systems targeting FPGA Implementation
  • 2007
  • Ingår i: Proceedings - 21st International Parallel and Distributed Processing Symposium, IPDPS 2007; Abstracts and CD-ROM. - Long Beach, CA : IEEE conference proceedings. - 1424409101 - 9781424409105 ; , s. 1-7
  • Konferensbidrag (refereegranskat)abstract
    • Implementing real-time video processing systems puts high requirements on computation and memory performance. FPGAs have proven to be an effective implementation architecture for these systems. However, the hardware-based design flow for FPGAs makes the implementation task complex. The system synthesis tool presented in this paper reduces this design complexity. The synthesis is done from a SystemC-based coarse-grain dataflow graph that captures the video processing system. The dataflow graph is optimized and mapped onto an FPGA. The results from real-life video processing systems clearly show that the presented tool produces effective implementations (see the sketch below).
  •  
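The entry above describes synthesis from a coarse-grain dataflow graph. As a purely illustrative stand-in for the SystemC input (the node names, buffer sizes and the scheduling step are invented, not taken from the tool), a minimal Python sketch of such a graph and a topological traversal might look like this:

    from collections import defaultdict

    edges = [                      # (producer, consumer, buffered rows, row width in pixels)
        ("camera", "median3x3", 2, 640),
        ("median3x3", "sobel3x3", 2, 640),
        ("sobel3x3", "threshold", 0, 0),
        ("threshold", "output", 0, 0),
    ]

    def topological_order(edges):
        succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
        for a, b, *_ in edges:
            succ[a].append(b)
            indeg[b] += 1
            nodes |= {a, b}
        ready = [n for n in nodes if indeg[n] == 0]
        order = []
        while ready:
            n = ready.pop()
            order.append(n)
            for m in succ[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
        return order

    print("processing order:", topological_order(edges))
    print("on-chip buffer bits:", sum(rows * width * 8 for *_, rows, width in edges))
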
39.
  • Lawal, Najeem, et al. (författare)
  • C++ based System Synthesis of Real-Time Video Processing Systems targeting FPGA Implementation
  • 2006
  • Ingår i: Proceedings of the FPGA World Conference 2006.
  • Konferensbidrag (refereegranskat)abstract
    • Implementing real-time video processing systems puts high requirements on computation and memory performance. FPGAs have been shown to be an effective implementation architecture for these systems. However, the hardware-based design flow for FPGAs makes the implementation task complex. The system synthesis tool presented in this paper reduces this design complexity. The synthesis is done from a SystemC-based coarse-grain dataflow graph that captures the video processing system. The dataflow graph is optimized and mapped onto an FPGA. The results from real-life video processing systems clearly show that the presented tool produces effective implementations.
  •  
40.
  •  
41.
  • Lawal, Najeem, et al. (författare)
  • Design exploration of a multi-camera dome for sky monitoring
  • 2016
  • Ingår i: ACM International Conference Proceeding Series. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450347860 ; , s. 14-18
  • Konferensbidrag (refereegranskat)abstract
    • Sky monitoring has many applications but also many challenges to be addressed before it can be realized. Some of the challenges are cost, energy consumption and complex deployment. One way to address these challenges is to compose a camera dome by grouping cameras that together monitor a half-sphere of the sky. In this paper, we present a model for design exploration that investigates how the characteristics of camera chips and objective lenses affect the overall cost of a node of a camera dome. The investigation showed that accepting more cameras in a single node can reduce the total cost of the system. We conclude that, with a suitable design and camera placement technique, a cost-effective solution can be provided for massive open-area monitoring, i.e. sky monitoring (see the sketch below).
  •  
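A minimal Python sketch of the kind of design exploration the entry above refers to: estimating how many cameras of a given field of view are needed to cover a hemisphere, and the resulting node cost. The coverage model (solid angles with a fixed overlap margin) and all unit prices are simplifying assumptions, not the model or figures from the paper.

    import math

    def rect_fov_solid_angle(h_deg, v_deg):
        """Solid angle (steradians) subtended by a rectangular field of view."""
        h, v = math.radians(h_deg), math.radians(v_deg)
        return 4 * math.asin(math.sin(h / 2) * math.sin(v / 2))

    HEMISPHERE_SR = 2 * math.pi
    OVERLAP = 1.3                              # extra coverage for stitching margins

    candidates = [                             # (label, hFOV deg, vFOV deg, camera+lens price)
        ("narrow lens", 40, 30, 18.0),
        ("medium lens", 70, 55, 40.0),
        ("wide lens", 100, 75, 110.0),
    ]

    for label, h, v, unit_price in candidates:
        n = math.ceil(OVERLAP * HEMISPHERE_SR / rect_fov_solid_angle(h, v))
        print(f"{label:12s}: {n:3d} cameras, node cost ~ {n * unit_price:7.1f}")

With these invented prices the cheapest node is not the one with the fewest cameras, which is the qualitative point made in the entry; with other price assumptions the outcome changes.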
42.
  • Lawal, Najeem, et al. (författare)
  • Embedded FPGA memory requirements for real-time video processing applications
  • 2005
  • Ingår i: 23rd NORCHIP Conference 2005. - : IEEE conference proceedings. - 1424400643 ; , s. 206-209
  • Konferensbidrag (refereegranskat)abstract
    • FPGAs show interesting properties for real-time implementation of video processing systems. An important feature is the on-chip RAM blocks embedded in the FPGAs. This paper presents an analysis of the current and future requirements that video processing systems place on these embedded memory resources. The analysis is performed by allocating a set of video processing systems onto different existing and extrapolated FPGA architectures. The analysis shows that FPGAs should support multiple memory sizes to take full advantage of the architecture. These results are valuable both for designers of systems and for planning the development of new FPGA architectures (see the sketch below).
  •  
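A minimal Python sketch of the kind of requirement analysis discussed in the entry above: the bits needed by a single row buffer at different video formats, and how many fixed-size embedded memories that consumes. The 18 Kb block size is just one common FPGA block-RAM size, used here as an assumption; the unused bits illustrate why supporting several memory sizes helps.

    BLOCK_BITS = 18 * 1024

    formats = [("QVGA", 320), ("VGA", 640), ("720p", 1280), ("1080p", 1920)]
    for name, width in formats:
        for bpp in (8, 24):
            bits = width * bpp
            blocks = -(-bits // BLOCK_BITS)        # ceiling division
            unused = blocks * BLOCK_BITS - bits
            print(f"{name:6s} {bpp:2d} bpp: {bits:6d} bits/row -> "
                  f"{blocks} block(s), {unused:5d} bits unused")
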
43.
  • Lawal, Najeem (författare)
  • Global Block RAM Allocation Algorithm for FPGA implementation of Real-Time Video Processing Systems
  • 2004
  • Rapport (övrigt vetenskapligt/konstnärligt)abstract
    • In this master thesis an algorithm for the allocation of on-chip FPGA Block RAMs for the implementation of real-time video processing systems is presented. The effectiveness of the algorithm is shown through the implementation of realistic image processing systems. The algorithm, which is based on a heuristic, seeks the most cost-effective way of allocating memory objects to the FPGA Block RAMs. The experimental results obtained show that this algorithm generates results which are close to the theoretical optimum for most design cases.
  •  
44.
  • Lawal, Najeem, 1974- (författare)
  • Memory Synthesis for FPGA Implementation of Real-Time Video Processing Systems
  • 2009
  • Doktorsavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • In this thesis, both a method and a tool to enable efficient memory synthesis for real-time video processing systems on field programmable gate arrays are presented. In a real-time video processing system (RTVPS), a set of operations is repetitively performed on every image frame in a video stream. These operations are usually computationally intensive and, depending on the video resolution, can also be very data-transfer dominated. These operations, which often require data from several consecutive frames and many rows of data within each frame, must be performed accurately and under real-time constraints, as the results greatly affect the accuracy of the application. Application domains of these systems include machine vision, object recognition and tracking, visual enhancement and surveillance. Developments in field programmable gate arrays (FPGAs) have been the motivation for choosing them as the platform for implementing RTVPS. Essential logic resources required in RTVPS operations are currently available and are optimized and embedded in modern FPGAs. One such resource is the embedded memory used for data buffering during real-time video processing. Each data buffer corresponds to a row of pixels in a video frame (see the sketch below) and is allocated using a synthesis tool that performs the mapping of buffers to embedded memories. This approach has been investigated and proven to be inefficient. An efficient alternative employing resource sharing and allocation width pipelining is discussed in this thesis. A method for the optimised use of these embedded memories and, additionally, a tool supporting automatic generation of hardware description language (HDL) modules for the synthesis of the memories according to the developed method are the main focus of this thesis. The method consists of the memory architecture, allocation and addressing. The central objective of the method is the optimised use of embedded memories in the process of buffering data on-chip for an RTVPS operation. The developed software tool is an environment for generating HDL code implementing the memory sub-components. The tool integrates with the Interface and Memory Modelling (IMEM) tools in such a way that IMEM’s output - the memory requirements of an RTVPS - is imported and processed in order to generate the HDL code. IMEM is based on the philosophy that the memory requirements of an RTVPS can be modelled and synthesized separately from the development of the core RTVPS algorithm, thus freeing the designer to focus on the development of the algorithm while relying on IMEM for the implementation of the memory sub-components.
  •  
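A minimal Python sketch of the row-buffer requirement mentioned in the thesis abstract above: an N x N sliding-window operation over a frame of width W keeps N-1 previous rows on chip. The window sizes, frame width and pixel depth below are generic example values, not figures from the thesis.

    def row_buffer_bits(window_n, frame_width, bits_per_pixel):
        rows = window_n - 1                    # previous rows kept on chip
        return rows, rows * frame_width * bits_per_pixel

    for n in (3, 5, 7):
        rows, bits = row_buffer_bits(n, frame_width=640, bits_per_pixel=8)
        print(f"{n}x{n} window on 640-pixel rows: {rows} row buffers, {bits} bits in total")
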
45.
  • Lawal, Najeem, 1974- (författare)
  • Memory Synthesis for FPGA Implementation of Real-Time Video Processing Systems
  • 2006
  • Licentiatavhandling (övrigt vetenskapligt/konstnärligt)abstract
    • In this thesis, both a method and a tool to enable efficient memory synthesis for real-time video processing systems on field programmable gate arrays are presented. In a real-time video processing system (RTVPS), a set of operations is repetitively performed on every image frame in a video stream. These operations are usually computationally intensive and, depending on the video resolution, can also be very data-transfer dominated. These operations, which often require data from several consecutive frames and many rows of data within each frame, must be performed accurately and under real-time constraints, as the results greatly affect the accuracy of the application. Application domains of these systems include object recognition, object tracking and surveillance. Developments in field programmable gate arrays (FPGAs) have been the motivation for choosing them as the platform for implementing RTVPS. Essential logic resources required in RTVPS operations are currently available, optimized and embedded in modern FPGAs. One such resource is the embedded memory used for data buffering during real-time video processing. Each data buffer corresponds to a row of pixels in a video frame and is allocated using a synthesis tool that performs the mapping of buffers to embedded memories. This approach has been investigated and proven to be inefficient. An efficient alternative employing resource sharing and allocation width pipelining is discussed in this thesis. A method for the optimal use of these embedded memories and, additionally, a tool supporting automatic generation of hardware description language (HDL) code for the synthesis of the memories according to the developed method are the main focus of this thesis. The method consists of the memory architecture, allocation and addressing. The central objective of the method is the optimal use of embedded memories in the process of buffering data on-chip for an RTVPS operation. The developed software tool is an environment for generating HDL code implementing the memory sub-components. The tool integrates with the Interface and Memory Modelling (IMEM) tools in such a way that IMEM’s output - the memory requirements of an RTVPS - is imported and processed in order to generate the HDL code. IMEM is based on the philosophy that the memory requirements of an RTVPS can be modelled and synthesized separately from the development of the core RTVPS algorithm, thus freeing the designer to focus on the development of the algorithm while relying on IMEM for the implementation of the memory sub-components.
  •  
46.
  • Lawal, Najeem, et al. (författare)
  • Power-aware automatic constraint generation for FPGA based real-time video processing systems
  • 2007
  • Ingår i: 25th Norchip Conference, NORCHIP. - New York : IEEE conference proceedings. - 9781424415168 ; , s. 124-128
  • Konferensbidrag (refereegranskat)abstract
    • The introduction of embedded DSP blocks and embedded memory has made FPGAs an attractive architecture for the implementation of real-time video processing systems. The big bottleneck of the FPGA compared to other programmable architectures is the complex programming model. This paper presents automatic generation of placement and routing constraints for FPGA implementation of real-time video processing systems as one step towards automating the programming model. The constraint generator targets lower power consumption, better resource utilization and reduced development time. Results show that a 28% reduction in dynamic power can be achieved using the proposed approach over traditional logic-to-memory mapping (see the sketch below).
  •  
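A minimal Python sketch of the constraint-generation idea in the entry above: each buffer instance is placed on the free block-RAM site closest to the logic that uses it, and one placement constraint line is emitted per assignment. The site names, coordinates, instance names and the LOC-style constraint text are illustrative assumptions; the real generator targets a specific vendor tool flow and its own constraint syntax.

    bram_sites = {(x, y): f"RAMB_X{x}Y{y}" for x in range(2) for y in range(4)}
    free_sites = set(bram_sites)

    buffers = {                        # buffer instance -> (x, y) of the logic using it
        "median_row_buf": (0, 3),
        "sobel_row_buf": (1, 1),
        "output_fifo": (1, 3),
    }

    constraints = []
    for inst, (lx, ly) in buffers.items():
        site = min(free_sites, key=lambda s: abs(s[0] - lx) + abs(s[1] - ly))
        free_sites.remove(site)        # nearest free site, removed from the pool
        constraints.append(f'INST "{inst}" LOC = {bram_sites[site]};')

    print("\n".join(constraints))
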
47.
  • Lawal, Najeem, et al. (författare)
  • Power Consumption Measurement & Configuration Time of FPGA
  • 2015
  • Ingår i: 2015 POWER GENERATION SYSTEMS AND RENEWABLE ENERGY TECHNOLOGIES (PGSRET-2015). - 9781467368131 - 9781467368124 ; , s. 63-67
  • Konferensbidrag (refereegranskat)abstract
    • In this paper, we present results concerning the power consumption and configuration time of an FPGA. The re-programmability, flexibility and re-configurability of FPGAs give rise to a number of possibilities, such as adding more features and extending the lifetime of embedded systems. The power consumption of peripheral devices is also significantly affected by timing behaviour, so estimation based on average activity may not be sufficient for accurate power estimation of a system. The configuration time of an FPGA depends on the configuration data width, the file size, the clock frequency and the flash access time (see the sketch below). We measured the total power consumption on each voltage supply and the total configuration time of a Spartan-6 FPGA Atlys board using LabVIEW, and compared the estimated power values with the measured values. We believe that these experimental results will be useful for other FPGA-based embedded systems.
  •  
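A minimal Python sketch of the configuration-time relation stated in the entry above. The bitstream size, bus widths, clock rates and flash overhead are rough assumptions for illustration, not the measured figures from the paper.

    def config_time_ms(bitstream_bits, bus_width_bits, cclk_hz, flash_overhead_ms=0.0):
        # t_config is roughly bitstream_bits / (bus width * configuration clock),
        # plus whatever time the flash access adds.
        return bitstream_bits / (bus_width_bits * cclk_hz) * 1e3 + flash_overhead_ms

    BITSTREAM_BITS = 12_000_000        # roughly the size of a mid-range Spartan-6 bitstream

    for width, cclk in [(1, 2e6), (8, 20e6), (16, 40e6)]:
        t = config_time_ms(BITSTREAM_BITS, width, cclk, flash_overhead_ms=5.0)
        print(f"config width {width:2d} bits @ {cclk / 1e6:4.0f} MHz -> ~{t:7.1f} ms")
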
48.
  • Lawal, Najeem, et al. (författare)
  • Ram allocation algorithm for video processing applications on FPGA
  • 2006
  • Ingår i: Journal of Circuits, Systems and Computers. - 0218-1266. ; 15:5, s. 679-699
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper presents an algorithm for the allocation of on-chip FPGA Block RAMs for the implementation of real-time video processing systems. The effectiveness of the algorithm is shown through the implementation of realistic image processing systems. The algorithm, which is based on a heuristic, seeks the most cost-effective way of allocating memory objects to the FPGA Block RAMs (see the sketch below). The experimental results obtained show that this algorithm generates results which are close to the theoretical optimum for most design cases.
  •  
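A minimal Python sketch of a greedy, first-fit-decreasing packing of memory objects into fixed-size Block RAMs, in the spirit of the heuristic described in the entry above. The published algorithm also has to respect port counts and simultaneous-access constraints, which this sketch ignores; the object names and sizes are invented.

    BLOCK_BITS = 18 * 1024

    def allocate(objects):
        """objects: dict of name -> size in bits; returns [[remaining_bits, [names]], ...]."""
        blocks = []
        for name, size in sorted(objects.items(), key=lambda kv: -kv[1]):
            for block in blocks:
                if block[0] >= size:           # first existing block it fits into
                    block[0] -= size
                    block[1].append(name)
                    break
            else:                              # no fit: open a new block RAM
                blocks.append([BLOCK_BITS - size, [name]])
        return blocks

    objects = {"row_a": 640 * 8, "row_b": 640 * 8, "row_c": 640 * 8,
               "coeffs": 2 * 1024, "hist": 256 * 16}
    for i, (remaining, names) in enumerate(allocate(objects)):
        print(f"BRAM {i}: {names}, {remaining} bits free")
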
49.
  • Malik, Abdul Waheed, 1981-, et al. (författare)
  • Hardware Architecture for Real-time  Computation of Image Component Feature Descriptors on a FPGA
  • 2014
  • Ingår i: International Journal of Distributed Sensor Networks. - : SAGE Publications. - 1550-1329 .- 1550-1477. ; , s. Art. no. 815378-
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency as well as minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 13 mW at 86 frames per second (see the sketch below).
  •  
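A short Python check of the throughput figures quoted in the entry above: 390 VGA frames per second corresponds to roughly 120 Mpixel/s of processing capability, while the 27 MHz sensor clock bounds the delivered frame rate to under about 88 fps even before blanking, which is consistent with power being reported at 86 fps.

    WIDTH, HEIGHT = 640, 480
    PIXELS_PER_FRAME = WIDTH * HEIGHT

    required_pixel_rate = 390 * PIXELS_PER_FRAME        # processing capability
    sensor_limited_fps = 27e6 / PIXELS_PER_FRAME        # pixel-clock bound, no blanking

    print(f"pixel rate needed for 390 fps: {required_pixel_rate / 1e6:.1f} Mpixel/s")
    print(f"frame rate a 27 MHz pixel clock can feed: about {sensor_limited_fps:.0f} fps")
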
50.
  • Malik, Abdul Waheed, 1981-, et al. (författare)
  • Real-time Component Labelling with Centre of Gravity Calculation on FPGA
  • 2011
  • Ingår i: 2011 Proceedings of Sixth International Conference on Systems.
  • Konferensbidrag (refereegranskat)abstract
    • In this paper we present a hardware unit for real-time component labelling with Centre of Gravity (COG) calculation. The main targeted application area is light spots used as references for robotic navigation. The COG calculation can be done in parallel with a single-pass component labelling unit, without first having to resolve merged labels (see the sketch below). We present a hardware architecture suitable for implementation of this COG unit on Field Programmable Gate Arrays (FPGAs). As a result, we obtain high frame rate, low power and low latency. The device utilization and estimated power dissipation are reported for a Xilinx Virtex-II Pro device simulated at 86 VGA-sized frames per second. The maximum speed is 410 frames per second at a 126 MHz clock.
  •  
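A minimal software model of the idea highlighted in the entry above: centre-of-gravity accumulators (count, sum of x, sum of y) are kept per provisional label and merged whenever two labels turn out to belong to the same component, so no relabelling pass is needed before computing each component's COG. This is only a functional Python sketch using 4-connectivity and a union-find structure; the hardware unit in the paper is a streaming single-pass design, not this code.

    def label_cog(image):
        parent, acc = {}, {}                      # union-find parents, per-root accumulators

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]     # path halving
                a = parent[a]
            return a

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
                cb, xb, yb = acc.pop(rb)
                ca, xa, ya = acc[ra]
                acc[ra] = (ca + cb, xa + xb, ya + yb)   # merge accumulators, no relabelling

        labels, next_label = {}, 0
        for y, row in enumerate(image):
            for x, pix in enumerate(row):
                if not pix:
                    continue
                left = labels.get((x - 1, y))
                up = labels.get((x, y - 1))
                lab = left if left is not None else up
                if lab is None:                   # start a new provisional label
                    lab = next_label
                    next_label += 1
                    parent[lab] = lab
                    acc[lab] = (0, 0, 0)
                labels[(x, y)] = lab
                root = find(lab)
                c, sx, sy = acc[root]
                acc[root] = (c + 1, sx + x, sy + y)
                if left is not None and up is not None:
                    union(left, up)
        return {root: (sx / c, sy / c) for root, (c, sx, sy) in acc.items()}

    test = [[0, 1, 1, 0, 0],
            [0, 1, 0, 0, 1],
            [0, 1, 1, 1, 1]]
    print(label_cog(test))                        # one component, COG (2.25, 1.25)
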
Typ av publikation
konferensbidrag (37)
tidskriftsartikel (12)
doktorsavhandling (3)
licentiatavhandling (3)
rapport (2)
annan publikation (2)
Typ av innehåll
refereegranskat (49)
övrigt vetenskapligt/konstnärligt (10)
Författare/redaktör
Lawal, Najeem (44)
O'Nils, Mattias (29)
Imran, Muhammad (24)
Ahmad, Naeem (17)
Khursheed, Khursheed (15)
Thörnberg, Benny (11)
Thörnberg, Benny, 19 ... (10)
Lawal, Najeem, 1973- (10)
O'Nils, Mattias, 196 ... (9)
Alqaysi, Hiba (7)
Fedorov, Igor (5)
Malik, Abdul Waheed (4)
Oelmann, Bengt (3)
Cheng, Xin, 1974- (3)
O'Nils, Mattias, Pro ... (3)
Oelmann, Bengt, Prof ... (3)
Norell, Håkan (3)
O’ Nils, Mattias (3)
Lawal, Najeem, Dr (2)
Waheed, Malik A. (2)
Lawal, Najeem, 1974- (2)
O'Nils, Mattias, Pro ... (2)
Malik, Abdul Waheed, ... (2)
Fröjdh, Christer, 19 ... (1)
Abdul Waheed, Malik, ... (1)
Kjeldsberg, Per Gunn ... (1)
Usman, Muhammad (1)
Lawal, Najeem, Docto ... (1)
Qureshi, Faisal, Pro ... (1)
Poiesi, Fabio (1)
Oelmann, Bengt, Prof (1)
Imran, Muhammad, 198 ... (1)
Bader, Sebastian (1)
Krämer, Matthias (1)
Wang, Xu (1)
Eles, Petru, Prof. (1)
Shahzad, Khurram (1)
Shallari, Irida (1)
Malik, Waheed, 1981- (1)
Dreier, Till (1)
Krapohl, David, 1980 ... (1)
Maneuski, Dzimitry (1)
Schöwerling, Jan Oli ... (1)
O'Shea, Val (1)
Fedorov, Igor, 1980- (1)
Benkrid, Khaled (1)
Thörnberg, Benny, Dr (1)
Khursheed, Khursheed ... (1)
Benkrid, Khaled, Dr. (1)
Lateef, Fahad (1)
Lärosäte
Mittuniversitetet (59)
Språk
Engelska (59)
Forskningsämne (UKÄ/SCB)
Teknik (46)
Naturvetenskap (2)
