Dante Johnson, Brian Nguyen, General Roberts, and Taylor Powell
Photovoltaic units, also known as solar cells, have an ideal operating temperature of about 25 °C (77 °F); for each degree Celsius above this optimum, the efficiency of the unit drops by roughly 0.5%. On a summer day it is not uncommon for solar cell temperatures to reach upwards of 70 °C (about 158 °F), some 45 °C above the optimum, which corresponds to an efficiency drop of roughly 22.5%. If we think of efficiency as the energy produced divided by the energy supplied, we can see quantitatively how a loss of efficiency corresponds to a loss in energy produced. Our goal is to recover some of this lost energy through thermoelectric generation, using the excess heat from the solar cells to create a current that can be added back into the total power produced. Successful implementation of such a system could decrease dependence on fossil fuels.
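The temperature penalty described above can be sketched numerically. This is an illustrative calculation, not project data; the 20% rated efficiency is a placeholder value.

```python
# Illustrative sketch: PV efficiency lost to heat, using the figures quoted
# above (0.5% efficiency loss per degree Celsius above a 25 C optimum).

OPTIMAL_TEMP_C = 25.0
LOSS_PER_DEG_C = 0.005   # 0.5% relative loss per degree Celsius

def efficiency_factor(cell_temp_c, rated_efficiency=0.20):
    """Effective efficiency at a given cell temperature.

    rated_efficiency (20%) is a placeholder; real panels vary.
    """
    excess = max(0.0, cell_temp_c - OPTIMAL_TEMP_C)
    return rated_efficiency * (1.0 - LOSS_PER_DEG_C * excess)

# A 70 C cell is 45 C above optimum: 0.20 * (1 - 0.225) = 0.155
print(efficiency_factor(70.0))
```

The relative loss at 70 °C is 45 × 0.5% = 22.5%, which is the energy the thermoelectric stage would try to partially recover.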
Lance Johnson, Sai Katikala, and Maaz Nawaz
More than 500,000 open-heart surgeries are performed in the United States every year. The anticoagulant heparin is used to decrease the likelihood of thrombosis or hemorrhaging in each surgery by binding to the enzyme inhibitor antithrombin III (AT-III). However, anesthesiologists currently lack the ability to measure antithrombin levels in a patient quickly, making appropriate heparin dosages difficult to determine and possibly resulting in thrombosis or hemorrhaging if thrombin levels move outside the allowable range. This could be prevented with a simple bedside test. Current tests use gold, but we believe iron(III) oxide (commonly known as rust) can be used at a much lower price. Given a thrombin molecule tagged with a fluorescein and a quencher, we investigated the process of designing and synthesizing a test particle from iron(III) oxide coated in an aminosilane to detect AT-III levels.
Michael Johnson, Dakota MacGill, Robert Peake, and Ryan Wydler
Building an R/C aircraft isn’t a project intent on breaking ground in the field of aerospace engineering; rather, it is an opportunity to apply fluid dynamics in a practical manner and gain experience in aeronautical engineering. An aerospace engineer designs and builds aircraft and spacecraft and is often tasked with deeming an aircraft flightworthy or not. We will attempt to predict the feasibility of our custom plane designs using aircraft engineering techniques found through research and through texts provided by our advisor, Dr. Jayasimha. The goal of this project is to design and fabricate a plane that abides by the SAE 2012 Aircraft Competition guidelines. These constraints require that the plane lift off in under 200 ft of runway, have no rotary wings (such as a helicopter’s), weigh less than 55 lbs, and have a propeller that rotates at the same RPM as the motor. The problem we are trying to solve is to accurately predict the behavior of the model prior to fabrication using aerodynamic engineering calculations and simplifications. We aim to predict the take-off velocity, the induced drag on the plane, the required engine performance, the in-flight performance, and the overall feasibility of the design. If successful, we will gain a solid understanding of how aerospace engineers predict the flight behavior and specification requirements of airplanes. It is actually more difficult to design a flightworthy small-scale aircraft than a large commercial one: the low altitude and speed of operation produce largely laminar boundary layers, which in turn increase the drag force on the plane. To begin, the coefficients of lift and drag of the airplane’s airfoil will be found using XFLR5, a specialized program for aircraft design.
Using this software and formulae found in R/C-aircraft texts, an Excel sheet will be made to predict flight performance; by altering the dimensions of the plane and the airfoil shape, we will be able to choose the most flightworthy design. So far we are working with an airfoil at an angle of attack of 7 degrees, which yields a coefficient of lift of 0.943 and a coefficient of drag of 0.0062. With rectangular wings of 13" x 42", a fuselage 5.5 ft long, a Rimfire 1.60 motor, and a weight of 25 lbs, it was predicted that the plane will need to reach 37 mph, overcome a drag of 14.5 N, and take about 77 ft of runway to lift off. We aim to reduce these values by adjusting parameters accordingly.
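The core of the take-off prediction is the lift equation, solved for the speed at which lift equals weight. The sketch below is a simplified stand-in for the spreadsheet model: it ignores ground effect, thrust, and drag during the roll, so it will not reproduce the 37 mph figure above exactly. The wing dimensions and CL mirror those quoted in the abstract.

```python
import math

# Simplified take-off speed estimate: solve L = 0.5 * rho * v^2 * S * CL
# for v, with lift set equal to weight. This omits the drag and ground-roll
# terms the full Excel model includes, so its result is only a rough bound.

RHO_SLUG_FT3 = 0.002377   # sea-level air density, slug/ft^3

def liftoff_speed_fps(weight_lbf, wing_area_ft2, cl):
    """Speed (ft/s) at which lift just equals weight."""
    return math.sqrt(2.0 * weight_lbf / (RHO_SLUG_FT3 * wing_area_ft2 * cl))

wing_area = (13.0 / 12.0) * (42.0 / 12.0)    # 13" chord x 42" span, in ft^2
v = liftoff_speed_fps(25.0, wing_area, 0.943)
print(v, v * 3600.0 / 5280.0)                # ft/s and mph
```

Altering the chord, span, or CL in this function shows directly how each parameter trades against take-off speed.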
Jason Kruse, Osman Sesay, and Stephanie Goggin-Burns
The goal of this project is to design a computer network, using two or more host servers, that supports a virtual hosting environment.
A successful datacenter allows a company to maintain data accessibility while keeping the data secure. One way of doing this is to have servers at multiple locations to ensure that data isn’t lost in case of a failure at one server location. A datacenter that supports a virtual hosting environment maximizes hardware resources, grants the ability to quickly stand up server services, and increases availability during hardware failure through software like vMotion.
We start the project by deciding which type of server best fits the needs of the project. The next step is figuring out which RAID configuration best fits the needs of the datacenter. Once the RAID configuration is decided, the user needs to figure out how much storage will be required for the datacenter to do its job. Depending on the RAID configuration, each hard drive will only be able to use a certain amount of its capacity. Once the host server is set up, the administrator can create virtual hosts, vCPUs, and virtual machines to implement the required server functions. We will use VMware products such as VMware Workstation and vMotion to maximize the usage of the physical hardware.
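The capacity-planning step above follows standard RAID arithmetic. A small sketch, using textbook overhead rules for common levels (not any vendor-specific implementation):

```python
# Usable capacity for common RAID levels, given n identical drives.
# Standard textbook overheads: RAID 0 stripes with no redundancy, RAID 1
# mirrors, RAID 5 spends one drive's worth on parity, RAID 6 spends two,
# and RAID 10 stripes mirrored pairs.

def usable_capacity_tb(raid_level, n_drives, drive_tb):
    if raid_level == 0:
        return n_drives * drive_tb
    if raid_level == 1:
        return drive_tb
    if raid_level == 5:
        return (n_drives - 1) * drive_tb
    if raid_level == 6:
        return (n_drives - 2) * drive_tb
    if raid_level == 10:
        return (n_drives // 2) * drive_tb
    raise ValueError("unsupported RAID level")

print(usable_capacity_tb(5, 4, 2.0))   # four 2 TB drives in RAID 5 -> 6.0
```

Running the same drive set through each level makes the redundancy-versus-capacity trade-off explicit before hardware is purchased.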
The goal of the project is to create a datacenter with multiple servers and allow for remote access. The datacenter must be able to implement new services and manage its resources to provide maximum functionality. The datacenter must also be able to support warm and hot backups in case of a hardware failure.
Anticipated Results and Conclusions: We expect to have a working datacenter with multiple hosting servers. We will utilize virtual machines and physical servers to create a computing environment. With the computing environment we will be able to test different applications on different platforms on a single device.
Kelsey Mangham, Nicholas Montesdeoca, and Michael "Ngyuon" Lay
There has been a recent increase in oil and natural gas exploration as fuel reserves in North America that were once inaccessible have become available through methods such as hydraulic fracturing and horizontal drilling. However, these methods generate substantial amounts of oil-contaminated waste, which poses significant pollution risks, and current methods of treating it are slow and ineffective. This project aims to design a better operating process for the Anaerobic Thermal Desorption Units (ATDUs) currently manufactured by RLC Technologies, offering more environmentally safe and rapid processing of such wastes.
Current methods used to treat oil-contaminated waste include land farming and deep well injection. These methods in essence leave the oils on or in the ground, which has raised concern regarding land and groundwater pollution. They remain in practice because of their light regulation and low cost. RLC provides a waste treatment service that is currently more cost efficient than the rest of the market for thermal desorption plants. Through the success of this project, RLC Technologies can offer an even more cost-effective method that is competitive with the current lightly regulated methods in practice.
The focus of the project is on aiding RLC in the utilization of their ATDU for plant operations. To do so, the team will study the effects of drum rotation speed, temperature, and waste composition on waste materials from drill cuttings and synthetically created waste, using bench- and pilot-scale batch ATDUs. The data collected will be used to determine thermal efficiency by analyzing how much of the base oil is recovered. The desired outcomes are to improve the oil recovery of the current ATDU by at least ten percent while reducing the costs associated with its operation. Efforts will also be made to increase the amount of diesel-range oils in the recovered product by controlling thermal cracking in the ATDU.
Brittany Martinez, Brittany Noah, and David Decker
The objective of this project is to create a non-invasive hypoglycemic alert system that will detect a drop in blood sugar in type 1 diabetics during sleep. This will be achieved by creating an algorithm that couples heart rate variability with skin conductance to increase the accuracy of hypoglycemia detection. The device will be housed in a torso strap that will include: electrodes located over the user’s rib cage, a skin conductance sensor placed in the user’s armpit, a microcontroller to collect and process the data, and vibrating motors that will awaken the patient if hypoglycemia is detected. Integrating the ECG leads into the torso strap incorporates a capacitive circuit that reduces reverberation due to lead placement over the rib cage while also increasing user safety and the accuracy of R-wave detection. This is in contrast to the standard bipolar three-lead ECG arrangement. This technique was adopted after realizing the need for a more ergonomic design that allows a full range of motion for the user. Skin conductance will be measured through a sensor made of conductive fabric placed in the patient’s armpit, chosen for its high concentration of sweat glands, while maintaining the ergonomics of the design.
A LilyPad microcontroller will be programmed with the Arduino software to collect and process the signals and will include an SD card for storage. The ECG signal will be amplified and filtered, and the R-wave will be detected. A timer within the system will determine the intervals between R-waves, producing a plot of interval time versus index number. This data will be saved on the SD card. Welch’s method of averaging discrete Fourier transforms (DFTs) will be used to compute spectral components and determine the power of the low-frequency band of the signal. Previous research has shown that the power of the low-frequency range (0.04–0.15 Hz) of the ECG is related to hypoglycemia. Skin conductance will be measured using a low-level constant current, which will detect changes in the conductivity of the skin via the conductive fabric located in the armpit and attached to the torso strap. If skin conductance increases along with a decrease in the power of the low-frequency component of the ECG signal, the diabetic will be alerted via a vibrating motor in the torso strap.
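The spectral step above can be sketched offline. This is an illustrative stand-in, not the device firmware: it assumes the R-R interval series has already been resampled onto a uniform 4 Hz grid, and the synthetic sinusoid below merely substitutes for real heart-rate-variability data.

```python
import numpy as np
from scipy.signal import welch

# Estimate power in the 0.04-0.15 Hz (low-frequency) band of an R-R
# interval series using Welch's method of averaged periodograms.
# Assumption: the series is uniformly resampled at FS Hz beforehand.

FS = 4.0                                        # resampling rate, Hz
t = np.arange(0, 300, 1.0 / FS)                 # five minutes of samples
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t)   # synthetic 0.1 Hz LF wave

freqs, psd = welch(rr - rr.mean(), fs=FS, nperseg=256)
lf = (freqs >= 0.04) & (freqs <= 0.15)
lf_power = np.trapz(psd[lf], freqs[lf])         # integrate the LF band
print(lf_power)  # a sustained drop in this value would flag hypoglycemia
```

On the device the same computation would run over a sliding window of the most recent intervals, comparing the band power against a patient-specific baseline.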
A finalized material and budget list, as well as a finalized conceptual model, were created for the Sternheimer Grant application. Materials for fabrication are being ordered, and prototyping will begin in mid-January.
James McNamee, Zachary Gartrell, and Andrew Krupacs
The energy demands of society are increasing, and the ability to produce this energy from renewable sources such as wind must also increase to meet these demands. Large wind turbines are a great way to harvest renewable wind power, but they are often too large to use in an urban environment.
This project focuses on designing wind energy harvesters, based on existing iterations, that produce output power per area comparable to that of an efficient wind turbine: approximately 3 watts from a harvester 1 meter long. In order to reach this target power level, two innovative designs are under investigation. Each design involves two elastic belts tightly stretched in parallel with one another and suspended at both ends on a solid frame. One design features separate coils made of multiple turns of thin copper wire strategically placed on the membrane, whereas the other uses a single elongated copper coil across each of the two parallel membranes. In both designs, permanent magnets are mounted on the frame between the two membranes. Wind flow causes the membranes and the copper coils attached to them to flutter, which in a stationary magnetic field produces a time-varying magnetic flux through the coils and induces an AC electric current. The number of turns in the coils, the magnetic field strength, the frequency of flutter controlled by the wind speed, and the design dimensions all influence the magnitude of the induced current and, therefore, the maximum electrical power that can be produced. This output will then be put through a transformer to increase the output voltage, and that voltage will be converted to DC using a rectifier circuit.
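The scaling relationship described above follows Faraday's law. A minimal sketch, assuming a roughly sinusoidal flux variation; every number in it is an illustrative assumption, not a measurement from the prototype:

```python
import math

# Faraday's-law estimate for a fluttering coil in a static field. If the
# flux is Phi(t) = B * A * sin(2*pi*f*t), then emf = -N * dPhi/dt has peak
# magnitude N * 2*pi*f * B * A, showing how turns, field strength, flutter
# frequency, and coil area each scale the output.

def peak_emf_volts(n_turns, freq_hz, b_tesla, area_m2):
    """Peak EMF for a sinusoidally varying flux."""
    return n_turns * 2.0 * math.pi * freq_hz * b_tesla * area_m2

# e.g. 100 turns fluttering at 60 Hz through 0.1 T over 1 cm^2 (all
# placeholder values)
print(peak_emf_volts(100, 60.0, 0.1, 1e-4))
```

Doubling any one of the four factors doubles the peak voltage, which is why the designs vary coil placement and turn count against the fixed magnet arrangement.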
At the scale of our design, the Windbelt has three primary benefits over other common renewable energy harnessing systems: solar panels and wind turbines. First, due to its lower profile it can be incorporated into the architecture of buildings and provide invisible energy generation. Second, the device is highly modular, having the ability to be connected in either series or parallel in order to generate greater voltages or currents. The frames of the Windbelts will be designed such that multiple belts can be easily connected together in both configurations. Third, the Windbelts are significantly easier to maintain than solar panels or turbines particularly at locations that are not easily accessible. There are no moving parts in the conventional sense and the belts are easily replaced by non-technical personnel.
Shruthi Muralidharan, Kristina Hendel, Joseph Newton, and Vivek Patel
Ligamentous and bone injuries in the wrist affect tens of thousands of adults per year and lead to abnormal function. Surgical procedures, as well as physical therapy intended to restore function, have room for improvement. Measuring the wrist kinematics of the small carpal bones is necessary to understand the effect of ligamentous injury during normal motion. Currently, motion analysis systems are used to track large-scale movement for total-body kinematics such as gait analysis. The accuracy of these systems is catered toward capturing gross movement, and they cannot precisely measure on the order of millimeters, as carpal kinematics requires. Some devices currently on the market can measure the kinematics of a cadaveric wrist; however, they either use expensive CT and X-ray technology or require physical contact with the specimen that might affect the accuracy of the data obtained. Additionally, these devices cannot measure continuous motion and only determine the location of the wrist and carpal bones at the beginning and end of movement.
We propose a non-contact system for measuring wrist kinematics that can accurately and precisely measure the three-dimensional movement of the scaphoid and lunate. Three marker designs were considered: passive, active, and magnetic. Initially we decided active LED markers would be the best option for our project needs. However, after working with active LED markers we identified limitations, such as wiring that would get in the way of measurement. Thus, we decided to develop passive (non-electrical) markers for our system. We created a system of color-coded passive markers in order to record three-dimensional movement. In addition, we began computational analysis in MATLAB to identify the markers in an image and calculate the distance between them. We have created a three-dimensional matrix in MATLAB in order to map the movement of each marker. Moving forward, we will create an algorithm that can calculate the relative position of these two bones in three-dimensional space.
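The distance calculation at the heart of the tracking step is straightforward once marker centroids are recovered. A minimal sketch (in Python rather than the team's MATLAB, with made-up placeholder coordinates rather than real tracking data):

```python
import math

# Euclidean distance between two 3-D marker positions, the core quantity
# the tracking algorithm computes frame by frame.

def marker_distance(p, q):
    """Distance between two (x, y, z) marker centroids, in mm."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

scaphoid = (12.0, 4.5, 8.0)    # hypothetical marker centroid, mm
lunate = (15.0, 4.5, 12.0)     # hypothetical marker centroid, mm
print(marker_distance(scaphoid, lunate))   # 5.0
```

Repeating this per video frame yields the continuous relative-motion record that the start/end-only devices on the market cannot provide.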
The main deliverables of the product are a working prototype, consisting of a frame and passive markers, and the algorithm that can identify and measure the motion of the markers on a video recording to calculate the wrist kinematics. Thus far progress has been made toward creating the physical working prototype and the algorithm. In the end patents for the finished product and associated algorithm will be necessary.
Ryan Murphy, James Cecil, and Joseph Contarino
Knowledge discovery is a critical function of infrastructure protection in the U.S. By analyzing key text documents, we can gain insight into the interwoven and interdependent infrastructure system of the U.S. and better understand the security aspects of the system as a whole. Massive amounts of relevant data reside in text documents, which must be gathered and parsed to be analyzed on a large scale. Our algorithm collects web-based text embedded in HTML pages and analyzes it in various ways to detect similarities. It will be a needed component of the larger system being developed by the Idaho National Laboratory, which seeks to accomplish what was described above. By analyzing the similarity of these HTML documents, we are helping the Idaho National Laboratory keep redundant data out of the database. Without proper handling of similar data, repetitive entries may clog the system with unneeded information. We attack this problem by providing a series of interfaces, each feeding into the same comparison algorithm. The interface can accept a raw string, a text file, or a web URL. The BoilerPipe library is used to extract useful text from an HTML document by stripping the document of its tags and applying a series of filters to acquire the desired text. A simple Java scanner is used to parse a text file. The text is then lemmatized, stripped of punctuation, converted to lowercase, stemmed, and put into a term-document matrix. Finally, we use cosine similarity to generate a percentage representing how similar or dissimilar the two provided text documents are.
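The comparison stage can be sketched compactly. This is an illustrative Python stand-in for the Java pipeline above: it lowercases, strips punctuation, builds term-frequency vectors, and scores with cosine similarity, while omitting the BoilerPipe extraction and the lemmatizing/stemming steps.

```python
import math
import re
from collections import Counter

# Bag-of-words cosine similarity between two texts. Tokens are lowercased
# alphanumeric runs; each text becomes a term-frequency vector and the
# score is the cosine of the angle between the two vectors (1 = identical
# term distributions, 0 = no shared terms).

def term_vector(text):
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_similarity(a, b):
    va, vb = term_vector(a), term_vector(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("power grid security", "security of the power grid"))
```

A score near 1.0 flags a candidate duplicate, which is exactly what keeps the redundant entries out of the database.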
Tri Nguyen, Tom Vaele, and Zachary Johnson
In this digital world, access to information is essential. We rely on it almost every day: Google, Bing, Yahoo, or another major service helps us find what we’re looking for so we can use a resource or answer a simple question. SearchBlox, an enterprise Elasticsearch toolkit that boasts a robust and easy-to-use indexing system for an array of MIME types, lacks an essential capability: remoting. The scope of this problem requires the ability to efficiently index particularly voluminous, dense, or distributed file systems to a centralized SearchBlox indexing server while keeping intact the services that clients expect from this software. More specifically, it requires the ability to be easily deployed as a remote agent, provide access to the indexed documents via a central server, remain fault tolerant, and react to changes in the file systems in question.
The work here is important for maintaining accurate and up-to-date indexes of information. This problem is challenging in light of the sheer amount of information, which is growing at an alarming rate. Storing this information means more distributed file systems because of current hardware capacity. Users need access, and chances are the documents will not be local. If a solution to this problem is successfully implemented, users will possess a fault-tolerant pipeline to the information they need.
Our approach stemmed from requirements engineering. We reduced the end goal into modules and worked to produce a system of Akka actors, each with a job and a role. To achieve the required performance, we implemented a Hadoop-style cluster of worker nodes. Stylistically, we were agile: each member of our team had a task to work on, and often the tasks depended on each other’s modules. We took a bottom-up approach, building the tools necessary to crawl, parse, and build indexes on a remote server concurrently. We then pursued the other required capabilities: reactivity and ensuring the availability of files from the indexing server.
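The crawl step above can be sketched as a worker pool over a file tree. This is a simplified Python stand-in for the Akka actor system, and `index_document` is a placeholder for the real parse-and-ship-to-SearchBlox call, not an actual API.

```python
import concurrent.futures
import os

# Walk a directory tree and hand each file to a pool of workers, analogous
# to distributing documents across actors. index_document is a placeholder
# stand-in for parsing a file and posting it to the central index.

def index_document(path):
    size = os.path.getsize(path)        # stand-in for parse + index work
    return path, size

def crawl_and_index(root, max_workers=4):
    paths = [os.path.join(d, f)
             for d, _, files in os.walk(root) for f in files]
    with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
        return list(pool.map(index_document, paths))

# crawl_and_index("/path/to/share") -> list of (path, size) results
```

The reactive capability would layer a filesystem watcher on top, re-submitting only changed paths to the same pool rather than re-crawling the whole tree.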
We were successful in streamlining a large volume of data to a local SearchBlox server and expect reactive, real time, and deployment capabilities in the near future.
We expect that our agent will benchmark well at the terabyte scale, and we will collect performance metrics on several different hardware platforms to support such claims.
Nicholas Rivero, Peter Quach, and Zack Jillani
The Formula SAE design project enables engineering students to develop, analyze, build, and benchmark a Formula-style racecar. A differential was designed for VCU’s car to ensure that power is evenly distributed to the drive wheels under all possible handling conditions. A mounting system is required to secure the differential to the frame. Failure of this carrier could be catastrophic; therefore, careful consideration of all possible forces and loading situations is required for design. A team was formed to examine how dynamic forces and fatigue contribute to component failure. Based on the results of this analysis, a differential carrier can be designed and fabricated to ensure safe, reliable operation of the VCU Formula SAE car. After determining the magnitude of the applied loads, finite element analysis (FEA) software was used to examine the loading effect on various design configurations. FEA allows accurate inspection of individual nodes and elements within a specific component. This analysis makes it possible to remove material from areas that experience no load, allowing for a more efficient and effective design. Regions that exhibit high theoretical stresses can be redesigned to meet FSAE engineering standards, allowing for efficiency in design while maintaining structural integrity. Numerous constraints must be considered, including but not limited to: 1) compliance with SAE regulations, 2) installation on an SAE-designed frame, and 3) budget limitations that preclude exotic materials or high-precision machining. Preliminary results of the current iteration indicate acceptable stresses and deflections. Further refinement will focus on weight reduction and improved machinability of the carrier. A finished product will be delivered by Q1 2015.
Josh Rymer, Chris Neville, and Robert Hodges
Problem: How can the effectiveness of a phishing attack be quantified and/or measured?
Applications: This project will provide a resource for the Idaho National Laboratory to quantitatively evaluate the effectiveness of its security awareness program with regard to phishing attacks. In turn, it will aid them in hardening the human element of security at the research facility.
Approach: Our approach is to construct a fully functional phishing system where we can craft phishing emails, send emails, and place links that point to our web application. We hope to use this system to conduct an anonymous and non-malicious experiment. This data will assist in the design and implementation of the algorithm that will evaluate the relative effectiveness of a phishing email.
Interim Results: At this point we have started the experimental approval process and developed a functioning phishing system to use in our experiment. We have also created the framework in which to construct our algorithm.
Anticipated Results: Next semester we plan to have a fully functioning phishing email evaluation algorithm. In addition, we are trying to run a live phishing study at VCU; if it is approved, it will provide valuable data on the accuracy of our algorithm.
Ishoc Salaam, Stephen Holder, and Taylor Fines
Can you imagine starting your car, pushing a button, and having it drive you to work? This technology is not as far in the future as you may think. Cars such as Mercedes-Benz’s S-Class and Audi’s A7 prototype have a traffic jam assist feature: at speeds under 40 mph, a button can be pressed and the car will drive itself on the highway. The system uses an array of cameras, sensors, and radars to follow the car in front at a safe distance while staying within its lane. If another car cuts in front with the feature active, the brakes will be applied automatically and the car will again adjust to follow at a safe distance. The driver can take control of the vehicle by placing his or her hands back on the steering wheel. This technology is not yet found in more common cars, but most new cars have automatic braking assistance, or object avoidance. These systems use sensors and radars to apply the brakes fully or partially to avoid an object or person that may be in the vehicle’s path. Some systems have the ability to automatically steer around the obstruction. Because this technology is very recent and developing rapidly, automotive technician curriculums and the simulators used to teach troubleshooting methods have not caught up. Though these systems can be found on most new cars and the automotive industry seems to be moving toward developing autonomous vehicles, no company currently manufactures simulators equipped with this technology. Teaching students how to troubleshoot cars equipped with these systems presents a great challenge for automotive technician instructors. In an attempt to update its automotive curriculum for new technology, J. Sargeant Reynolds Community College (JSRCC) ordered several new instructional simulators. These simulators use recent automotive technology but are very limited in object detection and automatic braking.
For instance, the OEM light and accessory system with doors can perform any lighting or door function of a Chevy Cobalt. The simulator is a stationary unit equipped with Cobalt doors that open, allowing students to troubleshoot any of the lighting or door functions of a Cobalt. The electronic system has preprogrammed bugs that students must troubleshoot, identify, and fix as part of their training. JSRCC wants an actual vehicle that moves remotely and can be equipped with object detection, automatic braking, and steering systems to give students a real-life, interactive instructional simulation. This will be a one-of-a-kind instructional tool that will give students the ability to see these systems work; then, by applying a preprogrammed bug, the students will have to troubleshoot and repair the vehicle so that it operates properly again. The plan for achieving this goal is to design a remote-controlled chassis equipped with a steering and braking system. The chassis will have an override that can be integrated with an object detection/automatic braking system. The OEM light simulator will be permanently attached to the chassis. The completed instructional vehicle will be remote controlled and have all the lighting and door features of a Chevy Cobalt. It will also be able to detect an object in front of it. If the object is big enough to cause damage to the car or harm to its passengers, the object detection system will override the remote and automatically apply the brakes, stopping the vehicle. If the object can be cleared by a slight veer, the system will override the steering to steer around the object while applying the brakes enough to complete the maneuver.
The mechanical engineers on this project are specifically responsible for designing and manufacturing a remote controlled chassis with an electronic steering and braking scheme, attaching the OEM light and accessory simulator to the chassis, and incorporating an override of these systems for an object detection arrangement.
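The brake-or-veer decision described above can be sketched as simple control logic. The threshold and function names below are illustrative placeholders, not values or interfaces from the actual simulator design.

```python
# Decision logic for the object-detection override: do nothing if the path
# is clear, steer around while partially braking if a slight veer clears
# the obstacle, otherwise override the remote and fully brake.

CLEARABLE_VEER_M = 0.5   # largest lateral veer still considered "slight"

def override_action(obstacle_in_path, required_veer_m):
    """Return the action the override system should take."""
    if not obstacle_in_path:
        return "no_action"
    if required_veer_m <= CLEARABLE_VEER_M:
        # clearable by a slight veer: steer around with partial braking
        return "steer_and_partial_brake"
    # cannot be cleared safely: stop the vehicle
    return "full_brake"

print(override_action(True, 0.2))   # steer_and_partial_brake
print(override_action(True, 2.0))   # full_brake
```

In the instructional vehicle, the same branch structure would sit between the remote-control inputs and the steering/braking actuators, which is the override the mechanical team is responsible for integrating.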
The optimization and comparison of a cerium salt-based phosphate filtration system to industry standard phosphate removal water filtration systems
Nicholas Seymour, Snehi Shrestha, and Laura Viktoria Pretzman
Phosphorus, one of the fundamental building blocks of life, has been linked with increased cyanobacteria and algae growth. One of the main ways phosphorus enters the marine ecosystem is as phosphate runoff from fertilizers and industrial processes. In addition to disrupting the biology of ecosystems, another drawback of excess phosphate is its tendency to form scale buildup on metal in the form of calcium phosphate.
The goal of our project is to investigate a novel way of removing phosphate groups from wastewater using the rare earth element cerium. Current methods of removing phosphate use iron and aluminum salts that react with the phosphate, creating crystalline complexes that can be filtered out as a precipitate. Our team hopes to evaluate a similar precipitation reaction between phosphate and cerium and to identify the optimal parameters that promote the desired reaction. The main benefit of this reaction is that its crystals are far smaller than those produced by traditional phosphate removal processes, indicating that this novel method might be more efficient than the traditional ones.
The main challenge of this project is to properly characterize the produced cerium phosphate crystals and design an industry-scale filtration process that would remove these crystals effectively. This analysis will be completed with the use of analytical chemistry techniques, nano-characterization equipment, and process engineering design work. The filtration system design process will focus on optimizing the produced crystalline complexes and then collecting and removing them effectively from large quantities of water. This project is being conducted with the support of ChemTreat, and with this partnership we hope to compare this new cerium-based phosphate removal method with the current ferric-aluminum-based methods. The results of this comparison could lead to the development of an entirely new and more effective method of removing phosphate from wastewater.
Kal Stankov, Allen Woods, and Yaw Amoatin
Our improved automated disc kiosks are designed with two purposes in mind: to improve the inventory management and agility of current automated media rental services and to simplify and consolidate the storage and distribution of installation media in an IT environment. The kiosks combine the ability to store and dispense optical discs (CDs, DVDs, Blu-Ray Discs, etc.) with the ability to burn newer content pushed out via a network connection from a company’s servers. Designed and built with commonly available components to minimize costs, the prototype consists of a Digilent Zybo FPGA board powered by a Xilinx processor. A tablet is used simply as a touchscreen monitor connected to the Zybo. A CD drive with burning capabilities, a label printer, and an external hard drive are all connected via USB. Finally, the internal mechanisms, including the various servos, are connected directly to the Zybo’s I/O ports. Rental companies can benefit from these kiosks by reducing or even eliminating the number of workers and vehicles needed to restock machines and redistribute existing inventory. Each kiosk can report its current inventory to a management system, through which licenses for new media may be purchased or re-assigned and a disc containing the content can be created by the kiosk. This means an end to running out of a certain movie or game. Demand and predictive algorithms will determine where media is distributed. Finally, older titles can be overwritten to make space for newer content, all without the need for a person to service each kiosk. IT departments throughout various industries still rely on disc media to install major systems and applications; our kiosks provide an easy way to maintain a catalog of software which is easily accessible and compatible with existing technologies. Employees no longer need to keep huge binders full of CDs, with duplicates between many. 
A central kiosk can store those discs and burn new ones from images, all in a single hassle-free device.
Vattana Vichith, Stephen Wu, and Bowen Zhang
In today’s digital world, data should be accessible at all times. The biggest breakthrough for data accessibility is mobile technology such as phones and tablets. The Center for Clinical and Translational Research (CCTR) provides a continuum of informatics research and services to support translational and clinical research. Clinical trials represent one of the central themes for the CCTR, but it does not have a mobile app for the VCU community to access its informatics resources.
This project aims to promote the expanded informatics research and services available to VCU students, faculty, and staff, as well as to patients interested in discovering more about clinical research at VCU. The CCTR wants to extend its current research data management systems and traditional webpages to mobile technologies, enabling it to provide its user community with seamless access to current and developing informatics resources.
The project followed the agile development methodology. Each week we created new features for the mobile app, gradually building on the initial app. The major goal of the project was to pull data from the Forte API; extra features were added later for overall user-friendliness. The app primarily focused on function over form, though in the end we tried to stick with VCU colors. Over the course of the project we encountered a few issues. None of us had experience programming for the Android OS; we were familiar with Java, but the Android libraries had many more requirements to get everything working, so we had to learn Android programming along with the new technologies associated with mobile app development. Another issue was scalability: getting the app to comply with VCU branding seemed simple at first, but when we started adding in logos we encountered many errors, and the logos had to be reworked to fit the application properly.
The CCTR now has a fully functional clinical trials Android application. Over the course of the final semester, additional features will be prioritized based on their complexity and importance to the CCTR and included in the mobile apps.
Features that improve access to information and benefit the CCTR will be added as the project progresses. The final goal is to create both an Android and an iOS app. Before the apps can be officially finished, a live data instance will be needed that uses VCU resources to access data about VCU’s clinical trials.
Ngoc Hue Vo, Swarna Chowdhuri, and Ivo Yotov
According to an Audi Urban Future Initiative study, the average person spends 106 days over their lifetime searching for parking spaces. Whether on the side of a busy city street or in a shopping center car park, parking private vehicles poses a substantial logistical challenge that scales in complexity with population density. As modern populations trend toward urbanization, it becomes imperative to develop more efficient parking structures, and with the inevitable shift toward driverless vehicles there exists a need for a control system to mitigate these complications. One embodiment of such a solution is a distributed sensor network feeding real-time data to a central management system, which delegates navigational directives to individual vehicles based on algorithms designed to maximize spatial and temporal efficiency. This method would rely on wireless radio communication between the host and client nodes, with static sensors providing the state feedback that enables a non-causal autonomous parking process. The project strives to streamline the search for a vacant parking space while ensuring client safety through the direction of localized traffic, by means of an optimized control scheme determined by the central server using data collected from the sensor network. Such a mechanism would not only improve safety and efficiency by reducing collisions and time spent searching for open spaces, but also obviate the need for driverless vehicles to have prior knowledge of the destination layout, since that information would be available locally and on demand.
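The central assignment step can be sketched as a minimal greedy allocator working from the sensor network’s occupancy data. Everything here is an illustrative assumption: the coordinate layout, the nearest-space policy, and the data structures are stand-ins, not the project’s actual control scheme.

```python
import math

def assign_space(vehicle_pos, spaces):
    """Greedily assign the nearest vacant space to an arriving vehicle.

    vehicle_pos: (x, y) of the vehicle at the entrance.
    spaces: dict mapping space id -> {"pos": (x, y), "vacant": bool},
            kept current by the distributed sensor network.
    Returns the chosen space id, or None if the lot is full.
    """
    vacant = [(sid, s) for sid, s in spaces.items() if s["vacant"]]
    if not vacant:
        return None
    sid, space = min(vacant, key=lambda item: math.dist(vehicle_pos, item[1]["pos"]))
    space["vacant"] = False   # reserve it so no other vehicle is routed here
    return sid

# Example lot: three spaces, one already occupied.
lot = {
    "A1": {"pos": (0, 10), "vacant": True},
    "A2": {"pos": (0, 5),  "vacant": False},
    "B1": {"pos": (4, 2),  "vacant": True},
}
print(assign_space((0, 0), lot))  # B1 is the closest vacant space
```

A real scheme would also weigh temporal factors (predicted departures, traffic along each route); the greedy distance rule is only the simplest instance of the idea.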
Zarwan Waqar, Sohail Hossini, and Saman Usodan
The purpose of the rear diffuser is to generate additional downforce at the rear of the FSAE race car. This force comes from the airflow passing underneath the vehicle: the diffuser creates a pressure differential such that the air exiting the rear of the car produces a downward force. A diffuser is most useful at high speeds and, if designed correctly, provides the aerodynamics needed to keep the rear of the vehicle better planted. The model was created in SolidWorks and tested in ANSYS to determine whether the design met expectations. A main goal is to make the diffuser as effective as possible while staying within budget limitations. The result is a design for a rear diffuser to be placed on the tail end of the VCU FSAE race car, generating a downward force that improves the car’s traction.
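The pressure-differential mechanism can be illustrated with a back-of-envelope Bernoulli estimate. The velocities, floor area, and resulting force below are hypothetical numbers chosen for illustration, not results from the team’s SolidWorks or ANSYS work.

```python
# Back-of-envelope downforce estimate from the underbody pressure differential.
# All numbers here are illustrative assumptions, not results from the ANSYS model.

RHO_AIR = 1.225  # air density at sea level, kg/m^3

def downforce(v_free, v_under, area):
    """Bernoulli estimate: faster underbody flow -> lower pressure -> downforce.

    v_free:  freestream velocity over the car (m/s)
    v_under: accelerated velocity under the floor ahead of the diffuser (m/s)
    area:    underbody plan area acted on by the pressure differential (m^2)
    Returns downforce in newtons.
    """
    delta_p = 0.5 * RHO_AIR * (v_under**2 - v_free**2)
    return delta_p * area

# Hypothetical example: 20 m/s car speed, 20% faster underbody flow, 1.2 m^2 floor.
print(round(downforce(20.0, 24.0, 1.2), 1))  # ~129.4 N
```

The quadratic dependence on velocity is why the abstract notes the diffuser is mainly of use at high speeds.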
Morgan Waser, Brandon Perkins, and Benjamin Koppier
Problem: In the coming years the United States is working to move toward a smarter electric grid, one that is more versatile and can adjust to different situations. To move from our current electric grid to the Smart Grid, simulations are needed to understand how the new grid is likely to behave in different situations.
Rationale: These simulations are important for predicting which topological setups are best for different kinds of scenarios, as well as how the grid might behave under different circumstances. Once various situations have been simulated, Smart Grid developers can use the simulations as a guide for building the physical and digital Smart Grid.
Approach: Our team used the simulation software ns-3 to write our Smart Grid simulation. We are beginning with a basic network topology and implementing five use cases on this network; from there we hope to expand to slightly different topologies and compare their performance on the same use cases. The use cases are: on-demand meter read, on-demand meter read failure, on-demand meter interval period read, normal meter reading operations, and bulk meter interval data read.
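The first two use cases, an on-demand meter read and its failure mode, can be sketched in plain Python. This is an illustrative stand-in for the actual ns-3 model: the `Meter` and `DataConcentrator` classes, the retry count, and the deterministic loss behavior are all assumptions made for the sketch.

```python
class Meter:
    """A smart meter on its link to the data concentrator."""
    def __init__(self, meter_id, drop_first_n=0):
        self.meter_id = meter_id
        self.drop_first_n = drop_first_n   # lossy link: drop this many responses
        self.reading_kwh = 0.0

    def read(self):
        """Return the current reading, or None to model a lost response."""
        if self.drop_first_n > 0:
            self.drop_first_n -= 1
            return None
        return self.reading_kwh

class DataConcentrator:
    """Collects meter responses and forwards them to the data and control center."""
    def __init__(self, meters):
        self.meters = meters

    def on_demand_read(self, retries=2):
        results, lost = {}, []
        for m in self.meters:
            reading = None
            for _ in range(retries + 1):   # retries model the read-failure use case
                reading = m.read()
                if reading is not None:
                    break
            if reading is None:
                lost.append(m.meter_id)
            else:
                results[m.meter_id] = reading
        return results, lost

# Meter 2's link drops its first two responses (recovered by retries);
# meter 3's link drops everything, so its read is reported as lost.
meters = [Meter(0), Meter(1), Meter(2, drop_first_n=2),
          Meter(3, drop_first_n=5), Meter(4)]
dc = DataConcentrator(meters)
readings, lost = dc.on_demand_read()
print(len(readings), lost)   # 4 [3]
```

In the ns-3 version the interesting quantities are the ones this toy model hides: per-hop delay and loss on the point-to-point links, which is what the team is measuring across topologies.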
Interim Results and Conclusions: We have begun with a simple topology (shown below) with a point-to-point connection between individual meters and the data concentrator, which then sends information to the data and control center. We have begun implementing the use cases while monitoring completion time, information loss, and other important factors we would like to compare across our various topologies.
Anticipated Results and Conclusions: At the conclusion of this project, we hope to have a functioning simulation that can compare and assess different topologies and network setups.
De-Shunda White, Yamil Boo Irizarry, and Samuel Brazil
In order for the Virginia Commonwealth University Engineering Human Resources department to provide seamless new-hire integration, it needs to upgrade its current post-hire information-gathering techniques. Currently the HR department provides paper forms and packets for new hires to fill out and return for review, a method that costs over $30,000 in materials and labor. The main goal of our project is to establish a way for newly hired employees to complete their new-hire forms electronically through the VCU server.
By successfully executing our project, we are moving VCU’s onboarding to a completely paperless process in which not only the pre-hire application but also the post-hire information gathering is submitted electronically. Our project will allow VCU to go paperless, reducing the risk of outdated forms circulating, obtaining the information required for Americans with Disabilities Act disclosure, and improving first impressions through professionally branded software. This fully electronic hiring process not only saves time for both HR representatives and newly hired employees, but also saves VCU money in labor and material costs. Lastly, it will improve overall efficiency within the Human Resources department, allowing for quicker employee integration.
We began our project by reviewing the work already done by the previous year’s group. After looking through the different files and folders, we spoke with our team advisors to gather information about their expectations for the onboarding site and to collect business requirements. The forms and requirements gathered were listed in an Excel sheet shared by everyone involved with the project. The group divided up the work accordingly and set goals to transform the old framework into a working site. We constantly check the given requirements, along with new ones discussed in meetings, and make the appropriate changes to the site. For each business requirement transformed into a technical requirement, we test at every level of development.
Throughout this semester’s development we have given a couple of live demos showing our prototype at different stages, and we have received positive feedback from our faculty advisors on the progress of our work and on future expectations for our project.