Automated Measurement System – Verifying Flow Rate Performance in a Medical Device
Assessing quality and managing traceability of a medical device
Automation increases throughput by testing multiple units independently
Worldwide supplier of products for surgery
Our client wanted to perform detailed quality checks on the performance of some of their fluid dispensing products. These products dispense fluid over long periods of time (hours), so rigorously testing every unit in production would take too long. But, being medical devices, these products must follow the ISO 28620 standard and FDA 21 CFR Part 820 regulations, so a rigorous test process is needed to fulfill those requirements. The automated test system described below is part of that overall fulfillment.
The client came to Viewpoint with the following high-level desires for a system to test the product:
Support independent configuration and testing for up to 6 units.
Simplify overall test setup by copying the configuration from one unit to another.
Handle different volume amounts supported by assorted models.
Measure from each unit the weight of fluid dispensed as a function of time. (Weight was converted to volume using the fluid density.)
Compute the “instantaneous” flow rate (volume vs time) as the test progresses.
Keep track of the calibration status of the weight scales at each of the 6 positions in the test system.
Enable some measurements to be excluded at the start and end of the run for calculations of average flow rate.
Provide graphs and metrics of results to enable faster review of the data during the test.
Add the ability to comment on each unit during and after testing, and track these comments for compliance.
Print (PDF) a report on each unit’s results along with its identifying info, such as subcomponent lot numbers, the test datetime stamp, and the operator’s ID.
The client had an initial version of the testing application which they developed for testing their various initial design iterations. Viewpoint enhanced this existing application with the features listed above. The motivations for this enhancement were to:
Automate testing of the initial production units more thoroughly than the existing application allowed.
Enhance the user experience for easier testing.
A major aspect of the user experience was to support testing of multiple units independently from each other so that unit testing on one device could start/stop while other devices were installed in (or removed from) the tester without interrupting tests on other units.
This need was especially important since some models might take twice as long to test as others, or the setup of one unit might need additional adjustments before starting. Starting and stopping all units at the same time would therefore reduce utilization of the tester, so independent, parallel operation made sense.
Furthermore, with parallel testing, the tester was not constrained to having a full set of units to begin testing operation, since the tester could run with only 1 unit installed.
The other major enhancements focused on the user experience by offering real-time data of a particular unit’s testing, as well as real-time graphs to show progress. These graphs were useful because the operator could clearly see when a unit was not performing as expected and the test for that unit could be aborted without affecting testing on the other units.
Since this testing needs to follow the requirements in the ISO 28620 standard and 21 CFR Part 820, it was important for the application to be aware of the calibration status of each of the 6 weight scales, one for each position in the test system.
The main goal of this project was to augment the test operator’s ability to set up the testing of products while providing real-time visual feedback to the operator about the testing status.
Some of the benefits of this automated test system were:
The test automation provided consistency resulting from the software-enforced test process.
Test status via the visual feedback helped increase the efficiency of the operator.
The enhanced user experience made testing easier.
The application was developed in LabVIEW and measurements were made via RS-232 communications to each of the 6 weight scales. Once a unit was installed in the tester and the operator started the test on that unit, the application:
tared that unit’s scale,
started the flow,
and collected weight measurements frequently to build a curve of “instantaneous” flow versus time.
These “instantaneous” flow numbers were used to compute statistics such as maximum flow and average flow over the course of the entire test run.
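The tare-and-measure sequence above can be sketched in code. This is an illustrative Python sketch, not the client's LabVIEW implementation; the sampling interval, density value, and function names are assumptions. Weight is converted to volume via the fluid density, and the "instantaneous" flow rate is the volume change between consecutive samples.

```python
# Illustrative sketch of the flow-rate calculation (not the actual
# LabVIEW code); density, units, and sample timing are assumptions.

def instantaneous_flow(times_s, weights_g, density_g_per_ml):
    """Convert weight samples to dispensed volume and compute the
    flow rate (mL/min) between consecutive samples."""
    volumes_ml = [w / density_g_per_ml for w in weights_g]
    rates = []
    for i in range(1, len(volumes_ml)):
        dv = volumes_ml[i] - volumes_ml[i - 1]          # mL dispensed
        dt_min = (times_s[i] - times_s[i - 1]) / 60.0   # elapsed minutes
        rates.append(dv / dt_min)
    return rates

# Example: fluid density 1 g/mL, one weight reading every 60 s
rates = instantaneous_flow([0, 60, 120], [0.0, 2.0, 4.1], 1.0)
# rates[0] == 2.0 mL/min
```

Statistics such as maximum and average flow then reduce to simple operations over this rate list, optionally excluding samples at the start and end of the run as the requirements describe.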
Some of the primary functionality of this system includes:
Independent tests: testing a specific unit doesn’t interfere with testing of others.
Graphs of flow rate during the test execution: visualize the unit’s performance during the hours-long test time, not just at the end.
Calibration check: the next calibration date of each weight scale is maintained in a configuration file, so the operator can check that the test system is ready to run a test at a particular position.
Different volumes: since each unit is tested independently, the system can handle different volumes for each position in the tester.
Enhanced commenting: operators can enter comments about each unit (or all of them) and have these comments archived for compliance purposes.
Logging of weight versus time for up to 6 test positions via RS-232 communications to scales.
Manage the configuration and commenting of the test.
Save, recall, and copy configuration information to ease the operator’s setup effort and time.
Display graphically the curves of flow versus time while the test progresses.
Compute statistics on the flow after test completion.
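The calibration check described above amounts to comparing each scale's next calibration due date against the current date. A minimal sketch follows; the data layout and field names are assumptions, not the client's actual configuration file schema.

```python
# Minimal sketch of the per-position calibration check; the dictionary
# layout and dates are hypothetical, not the real configuration schema.
from datetime import date

next_calibration = {            # scale position -> next calibration due date
    1: date(2025, 6, 1),
    2: date(2024, 1, 15),
}

def position_ready(position, today):
    """A position is ready to test only if its scale is still in calibration."""
    return today <= next_calibration[position]

# position_ready(1, date(2025, 1, 1)) -> True  (calibration still current)
# position_ready(2, date(2025, 1, 1)) -> False (calibration overdue)
```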
Multiport Ethernet-to-RS-232 serial communications.
Custom Automated Test System – Quantifying Energy and Durability Performance for Refrigeration
Automation reduces manual labor while improving traceability
Assessing performance for improved energy ratings and longevity
Client – Zero Zone – Commercial refrigeration systems manufacturer
Zero Zone wanted to improve the capabilities and durability of their new reach-in refrigeration products.
You might think that refrigeration is a mundane product line, but that is just not true! So many innovations are occurring as manufacturers are redesigning their products to improve their environmental footprint through better energy efficiency, coolants, and durability.
Assessment requires an understanding of the performance of the refrigeration units under many conditions. Zero Zone was taking measurements with a datalogger that had too few channels and no synchronization with the other devices feeding into the system. Plus, they had multiple models of their reach-in refrigerators that needed to be assessed. Furthermore, simplifying the data collection and analysis would make it easier to validate against ASHRAE standards.
Zero Zone came to Viewpoint with the following high-level desires:
Expand the measurements by adding more channels and channel types (e.g., 4-20 mA, ±10 VDC and digital I/O).
Provide graphs and KPIs to enable faster analysis of the data during the test.
Minimize the chance of data loss during long test runs.
Synchronize data collection and actuation.
Automate storage of measurements per a user-defined period to eliminate manual start/stop of data collection.
Simplify the manual configuration setup.
Enable a way to find relevant data perhaps months or years after the test run.
Viewpoint developed a monitoring and control durability test system that could exercise Zero Zone’s refrigerators through hundreds of operation cycles over multiple conditions to simulate actual usage in, for example, a grocery store.
During initial conversations, we collaborated closely with Zero Zone to brainstorm on some potential approaches. We made some suggestions that could satisfy their desires while also managing their time and cost budgets.
For example, by automatically populating the cells in an Excel template based on their original system's Excel spreadsheet, we provided streamlined report generation without having to rewrite, in another app, all the calculation code embedded in their Excel file. The compromises we jointly endorsed were:
Run an app on a PC to configure and monitor the test.
Use both NI Compact RIO and Compact DAQ to enable robust and synchronized data collection and control with the ability to expand channels by adding modules in both the cRIO and cDAQ chassis.
Store data on a local PC rather than a remote server to minimize the probability of data loss during the test run.
Save configurations into Excel files for recall and cloning of prior setups.
Write measurements automatically into the same Excel file for archive of the test setup and measurements.
Create, in this same Excel file through cell formulas, the summary report from the summary calculations. This approach allowed flexibility for changes to internal and external test standards.
Upload the summary data and test reference info into a SQL database for data management and long-term test statistics.
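The template-driven reporting above can be sketched simply: measurements are written into designated input cells, and the template's embedded formulas produce the summary report unchanged. The cell addresses and field names below are hypothetical; with a library such as openpyxl, each entry would become a `worksheet[cell] = value` write.

```python
# Sketch of template-driven report population; cell addresses and
# measurement names are hypothetical, not the client's actual template.

TEMPLATE_CELLS = {              # measurement name -> template input cell
    "avg_supply_temp_C": "B4",
    "compressor_kwh":    "B5",
    "door_cycles":       "B6",
}

def cell_assignments(results):
    """Map a dict of measurement results onto the template's input cells,
    leaving the template's embedded formulas to compute the summary."""
    return {TEMPLATE_CELLS[name]: value
            for name, value in results.items()
            if name in TEMPLATE_CELLS}

writes = cell_assignments({"compressor_kwh": 41.7, "door_cycles": 600})
# writes == {"B5": 41.7, "B6": 600}
```

Keeping the calculations in the Excel file, rather than porting them to the application, is what made the template approach cheaper and tolerant of changes to internal and external test standards.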
Digital outputs (DOs) were used to control various aspects of the test, such as door open/close and defrost on/off cycles. For flexibility, the user can specify the sequencing of these DO channels, in the Excel file used for the test, with various parameters that define the duty cycle, period, number of cycles, and start delay. The timing of these DO state changes was synchronized to the data acquisition by the real-time loop in the cRIO.
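The DO sequencing parameters above (start delay, period, duty cycle, number of cycles) can be expanded into a timeline of state changes. The sketch below is an assumption based on that description, not the actual Excel schema or cRIO code; it produces the (time, state) transitions for one channel, such as door open/close.

```python
# Sketch of expanding the described DO parameters into state transitions;
# parameter names and the event format are assumptions.

def do_transitions(start_delay_s, period_s, duty_cycle, n_cycles):
    """Return (time_s, state) transitions for one digital output channel."""
    events = []
    t = start_delay_s
    on_time = period_s * duty_cycle
    for _ in range(n_cycles):
        events.append((t, True))             # e.g., door opens
        events.append((t + on_time, False))  # door closes
        t += period_s
    return events

# Door channel: 10 s start delay, 60 s period, 25% duty cycle, 2 cycles
events = do_transitions(10, 60, 0.25, 2)
# events == [(10, True), (25.0, False), (70, True), (85.0, False)]
```

In the deployed system the real-time loop in the cRIO would consume such a timeline, which is what keeps the DO state changes synchronized to the data acquisition.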
This system was deployed to 6 test bays, each one of which might be testing a unit for as little as a few weeks or as much as a few months.
The main goal of this project was to reduce the effort and associated human error in the design and execution of the test run.
Some of the primary benefits for this automated system were:
Reduced Errors: pre-verified template files used for test configuration and data storage lent consistency to test setup and execution.
Less Testing Time and Effort: the automatic execution of the test and storage of measurements enabled running tests for multiple days (and nights) without technician interaction. Technicians could work on setting up other units for test rather than babysit the existing test. On average, based on the duration of the test time, testing throughput increased by approximately 25% to 40%.
Shorter Reporting Time and Effort: reports were available about 85% faster than when they were previously created manually. The quicker feedback saved costs through early detection of unit problems and faster teardown at the end.
Some additional major benefits were:
More details on refrigerator operation: “Wow! We never saw that before.”
Database consolidation: statistical analysis takes hours not days and includes all tests run in the lab, not just ASHRAE tests. This central database enables long term retrieval of all test data.
Reuse: techs embraced ability to reuse and modify previous setups.
Consistency: driving the test definition through an Excel file encouraged uniformity.
Traceability: documented and timestamped calibration measurements.
Flexibility: channel counts, acquisition module configuration, calibration, and calculation formulas were straightforward to change for new test setups.
The test automation provided by this system greatly reduced the labor involved in configuring, running, and analyzing the test run. Furthermore, the customer benefited from the consistency that resulted from the software-enforced process.
We developed the application in LabVIEW and LabVIEW RT combined with a cRIO connected to a cDAQ via TSN Ethernet.
The data acquisition modules slotted into the cRIO and cDAQ chassis handled the I/O to the customer's sensors and actuators.
Data logging of between 50 and 150 channels and control via digital signals
Interface with Excel files for configuration, data logging, and summary calculations
Custom Automated Test System – Characterization of Heat Transfer System Thermal Performance
R&D testing required flexibility in control schemes and measurement I/O
Client – ATSI, a large-scale System Engineering Provider
Our client, ATSI, Inc., headquartered in Amherst NY, designs and builds complex structures and process systems, from industrial construction projects to mechanical systems for power engineering. A previous, long-standing Viewpoint customer that does research and design of thermal energy systems approached ATSI to engage in the build of a specialized test skid that would be used to assess and characterize a heat transfer system. Our long-standing end-customer requested a data acquisition subsystem based on LabVIEW.
Furthermore, the end-customer requested a subsystem that supported flexibility in the data acquisition by channel count and type, since the R&D nature necessitated adaptability. The overall test system needed to automate progress through a sequence of setpoints and ramps.
ATSI designed the automated control and sequencing with a Modicon PLC. Viewpoint augmented ATSI's engineering resources by providing the data acquisition subsystem and setpoint sequence editing. This sequence was passed to the PLC for automated sequencing through the setpoint list. Because our mutual end-customer did not provide explicit design details, we had flexibility to decide which aspects of the control and data acquisition needs would be automated by the Modicon PLC and which by the PC running LabVIEW.
Since Viewpoint had previously developed a similar application for our end-customer, with some of the required data acquisition needs, we chose to leverage and enhance that software platform for this project. That choice drove some of the other designs and defined the scope of work for Viewpoint and ATSI.
Some overall design decisions were:
The LabVIEW application provided data acquisition, test configuration, and operator screens.
The Modicon-based subsystems provided process control and safety.
A PLC HMI provided process system operation and status as well as control loop tuning.
NI Compact DAQ (cDAQ) offered flexible PC-based acquisition channels for high sample rate historical data collection.
A sequence editor on the PC defined the test setpoints, durations, and limits to pass to the PLC for execution.
The test configuration encompassed cDAQ channel configuration, PLC tag configuration, sequence editing, and graphical views on the acquired data. Some channels were acquired at slow rates, e.g., up to about 1 S/s for sensors measuring parameters such as temperature and flow, while others had fast rates, e.g., 1 kS/s to 10s of kS/s for sensors measuring parameters such as transient pressure and vibration. Handling the datafile storage and display of this wide range of data types and rates was important for the end-customer to compare and correlate the effects of changing operating conditions.
Data logging is configured by the sequence editor to occur on certain conditions such as immediately entering a new step, time delayed after entering a step, and activated by the PLC upon reaching stable setpoint control. This flexibility gave the end-customer management of when data collection occurred to ease the comparison and correlation of readings from selected sensors.
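The three logging triggers described above reduce to a small per-step decision. The sketch below is an assumption based on that description, not the actual sequence-editor schema; trigger names and parameters are invented for illustration.

```python
# Sketch of the per-step logging triggers; trigger names and parameters
# are assumptions, not the real sequence-editor fields.

def should_log(trigger, t_in_step_s=0.0, delay_s=0.0, plc_stable=False):
    """Decide whether to log data for the current sequence step."""
    if trigger == "on_entry":       # log immediately on entering the step
        return True
    if trigger == "after_delay":    # log once a per-step delay has elapsed
        return t_in_step_s >= delay_s
    if trigger == "on_stable":      # log when the PLC reports stable control
        return plc_stable
    raise ValueError(f"unknown trigger: {trigger}")

# should_log("after_delay", t_in_step_s=5.0, delay_s=30.0) -> False
# should_log("on_stable", plc_stable=True)                 -> True
```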
After the configuration is completed, the sequence is passed to the PLC. The operator starts the test on the PLC HMI and the PLC automates the test run. Data collected during the test run could be displayed in live graphs during testing, used for verification of setup and operation; post-test in stacked graphs and overlaid plots; and exported for specialized analysis, display, and review.
The design of this system was driven largely by the need for flexibility. Sensors and channels could be added, the test sequence could be edited with a variable number of steps with editable execution features, and the data acquisition and storage permitted various rates and logging criteria.
These design choices offered the following advantages:
As a partner of the team, Viewpoint acted as staff augmentation for ATSI by providing experienced engineers with expert LabVIEW and data acquisition capabilities.
Flexibility of test sequences, including setpoints and their stabilization criteria.
Tight integration between the Modicon PLC and the LabVIEW-based PC enables critical control and safety to execute reliably while remaining adjustable.
Customized mechanical all-welded skid plugs into end-customer’s test article.
Setup of data logging including configurable sample rates.
Ability to add channels by plugging in supplemental DAQmx-based cDAQ modules.
The LabVIEW application architecture is actor-based for straightforward inclusion of new data sources as needed in the future.
New data sources are registered with the object-based data aggregator.
The system handles multiple days of test execution.
The multi-pronged viewer allows verification checks during setup and operation as well as post-test review.
The custom automated test system supplied to the end-customer was a hybrid, made up of PC-based and PLC-based components coupled with the fluid-handling components on the skid. The hardware listed below includes only the data acquisition, control, and safety items, and only a high-level description of the mechanical aspects.
NI LabVIEW for Windows [Viewpoint]
NI LabVIEW Modbus driver [Viewpoint]
NI DAQmx hardware drivers [Viewpoint]
Actor-based object-oriented LabVIEW application for the PC [Viewpoint]
Modicon Concept software [ATSI]
Blue Open Studio HMI software [ATSI]
Function Block Programming for the PLC [ATSI]
Modicon PLC and modules for pressure, temperature, flow and other process variables
NI Compact DAQ modules, including 4-20 mA, RTD, thermocouple, thermistor
600 VDC Power supplies
Components to flow fluid, including pumps, valves, pressure regulators
Replacing Wire-wrap Boards with Software, FPGAs, and Custom Signal Conditioning
Electronic components of fielded systems were aging out
Reverse engineering effort converted wire-wrap boards to FPGA-based I/O
Client – Amentum – A supplier for Military Range System Support
Amentum (www.amentum.com) supports a decades-old system deployed in the early 1980s. While the mechanical subsystems were still functioning, the wire-wrapped discrete logic and analog circuitry was having intermittent problems.
Systems designed and built decades ago can sometimes have wonderful documentation packets. Nevertheless, we’ve been burned too often when the docs don’t incorporate the latest redlines, last-minute changes, or other updates.
The replacement system needed to be a form-fit-function replacement to land in the same mounting locations as the original equipment with the same behavior and connections. Below is an image of the existing wire-wrap boards and their enclosure. We had to fit the new equipment in this same spot.
Figure 1 – Original wire-wrap boards
Finally, Amentum wanted to work with Viewpoint in a joint development approach. While our joint capabilities looked complementary, we didn’t know at the start how well we would mesh with our technical expertise and work culture – it turns out we worked extremely well together as a team and neither one alone could have easily delivered the solution.
Since the team treated the existing documentation package with suspicion, we adopted a “trust but verify” approach. We would use the documents to give overall direction, but we would need details from the signals to verify operation.
Leveraging Amentum’s experience with the fielded systems, the team decided early on to record actual signals to understand the real I/O behavior. We used the system’s “test verification” unit to run the system through some check out procedures normally run prior to system usage. This verification unit enabled us to use a logic analyzer for the I/O to and from the discrete logic digital signals and an oscilloscope and DMM for the analog signals. The available schematics were reviewed to assure that the signals made sense.
With a trustable understanding of system operation, Amentum created a requirements document. We jointly worked on the design of the new system. There were both an “inside” system (in a control shelter) and an “outside” system (in the unit’s pedestal).
Some overall tasks were:
Viewpoint recommended an architecture for the inside application running on PXIe LabVIEW RT and FPGA layers.
Amentum created the system control software on a Linux PC.
Viewpoint developed the more intricate parts of the inside application and mentored Amentum on other parts they developed. This work recreated the existing discrete logic and analog I/O using PXIe NI FPGA boards.
Viewpoint designed custom interposer boards to connect harnesses to the NI PXIe equipment, including a test point and backplane boards.
Amentum designed and developed the cRIO-based outside system application and Viewpoint created a set of custom interposer boards to connect harnesses to the cSeries modules.
The PXIe FPGA boards handled the required 60 MHz clock-derived signals with correct phases, polarity, and so on. Furthermore, the wire-wrap boards were register-based so the PXIe had to decode “bus signals” sent over a Thunderbolt bus to emulate the programming and readouts from the various wire-wrap boards.
Figure 2 – PXIe replacement to wire-wrap boards
Amentum wanted to be able to support the LabVIEW FPGA VIs used to replace the functionality of the discrete logic. So, Viewpoint acted as mentor and code reviewer with Amentum to ramp them up on using LabVIEW FPGA effectively. Neither one of us alone would have been successful coding the applications in the allotted time. Joint knowledge and experience from both Viewpoint and Amentum were required.
Signal conditioning and harnesses needed to be reworked or replaced as well, of course, since the landing points for the wires were different in the new system. Viewpoint suggested a technique, which we’ve used frequently in past obsolescence upgrade projects, to create PCB boards that accepted existing connectors.
For the cRIO, these interposer “connection” PCBs plugged directly into the cRIO cSeries module. For the PXIe, these interposer PCBs accepted the field wiring connectors and converted them to COTS cables that connected to the PXIe modules. These interposer PCBs could have signal conditioning incorporated as needed. This approach significantly reduced the need for custom harnesses. All told, about 200 signals were passed between the PXIe and various other subsystems, and about 100 for the cRIO. This approach saved significant wiring labor and cost.
Figure 3 – cRIO with interposer boards between cSeries and field harnesses
The work to design and build the signal conditioning custom electronics was split between Viewpoint and Amentum. Viewpoint did more design than build and handed over the schematics and Gerber files to Amentum so they could manage the builds while also being able to make modifications to the boards as needed.
Amentum wanted an engineering firm that was willing to work alongside them as a partner. Joint discussions about architecture and design led to a collaborative development effort in which Amentum benefited from Viewpoint's extensive expertise and guidance on LabVIEW architectural implementation and FPGA coding style.
The main outcomes were:
As a partner of the team, Viewpoint acted as staff augmentation by providing experienced engineers with technical capabilities that Amentum initially lacked.
This team approach delivered a stronger product to the end-customer more quickly than either of us could do alone.
The combination of Viewpoint’s and Amentum’s experience reduced the amount of reverse engineering needed due to the lack of firm requirements.
Reduction of electronics obsolescence by using software-centric FPGA-based functionality. Recompiled LabVIEW FPGA code could target future board models.
Increased software-based functionality simplifies future updates and modifications.
Decrease in number of parts leading to simpler maintenance.
Lower power consumption eliminated the need for an anticipated HVAC upgrade.
Cybersecurity concerns were reduced by using Linux-based systems and FPGA coding.
Using software to emulate the old hardware was a critical success factor. Since the requirements were not 100% solid at the start of the project, some field-testing was required for final verification and validation. The flexibility of the software approach eased modifications and tweaks as development progressed. A hardware-only solution would have necessitated difficult and costly changes. For example, some of the changes occurred very near the final deployment after the system was finally connected to an actual unit in the field.
Emulate original discrete logic functions via FPGAs
Emulate original analog signal I/O
Overall system control via Linux PC
Maintain the same user experience as existed before
Modern application architecture for simpler maintenance
NI cRIO chassis with various cSeries modules
NI PXIe chassis with FPGA modules to handle all the analog and digital I/O via a combination of multifunction and digital-only cards
Custom PCBs for signal conditioning and connectivity
Enhanced Portable Data Acquisition and Data Storage System
Using a Real-Time Operating System (RTOS) provides a high level of synchronization and determinism for acquired data.
Tier 1 Automotive Design and Manufacturing Supplier
Our client had an existing data acquisition system, used for mechanical product validation testing, that had undergone many updates and patches for over 15 years. These updates and patches, performed by multiple developers, had rendered the software portion of the system somewhat unstable. Furthermore, the system hardware was based on NI SCXI, which was becoming obsolete. These issues prompted our client to migrate to an entirely new system.
New requirements for this upgrade included utilizing a PXI controller running NI Linux Real-Time, an RTOS, executing a LabVIEW RT application. The data acquisition software had to support a variable mix of signal conditioning modules in the PXI chassis. In addition, the data acquired from these signal conditioning modules needed to be synchronized within microseconds.
Viewpoint leveraged another application, developed for the client a few years prior, to harmonize the user interface and to reduce development effort. Most of the development time focused on support and configuration of the multiple module types and ensuring that the data synchronization functioned as required. The result was an ultra-flexible, portable, high-speed data acquisition software/hardware combination that can be used to acquire time-sensitive, synchronized data across multiple modules in a PXI chassis running a real-time operating system.
The upgraded system offers the following features:
Highly configurable real-time data acquisition hardware/software solution based on LabVIEW RT and PXI hardware. Our client works closely with OEMs to assure compatibility and durability with their products, often going to the OEM’s test cells to collect performance data. The configurability in modules and channels affords the fastest possible setup at the OEM’s site which minimizes time and cost in the test cell.
Configuration files stored in a SQL database format. Saving channel and module setups in SQL allows the test engineer to locate previous hardware and data acquisition configurations. The usual alternative is a bulk save of an entire system setup rather than using a more granular, and hence, more flexible approach afforded by using the database.
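The granular, row-per-channel idea behind that database approach can be sketched briefly. The schema, table, and setup names below are hypothetical, not the client's actual database design; the point is that individual channel setups can be recalled selectively rather than restoring a monolithic snapshot of the whole system.

```python
# Minimal sketch of granular channel-configuration storage; the schema
# and all names are hypothetical illustrations.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE channel_config (
    setup_name TEXT, channel TEXT, module TEXT, range_v REAL)""")
db.executemany(
    "INSERT INTO channel_config VALUES (?, ?, ?, ?)",
    [("gearbox_test_A", "ai0", "bridge-module", 10.0),
     ("gearbox_test_A", "ai1", "bridge-module", 5.0)])

# Recall only the channels belonging to one previously saved setup
rows = db.execute(
    "SELECT channel, range_v FROM channel_config "
    "WHERE setup_name = ? ORDER BY channel",
    ("gearbox_test_A",)).fetchall()
# rows == [("ai0", 10.0), ("ai1", 5.0)]
```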
Immediate test feedback through graphs and analog indicators, used to assure data quality before leaving the test cell.
Data playback features after the data has been acquired, used for in-depth review of data after leaving the test cell.
Data acquisition on the RTOS provides assurance that the acquisition will not be interrupted by network or other OS activities, which was occasionally an issue with the prior Windows-based application.
Synchronization between signal conditioning modules ensures time-critical data taken on separate modules can be compared and analyzed.
The system consisted of custom LabVIEW and LabVIEW RT software, running on an engineer's laptop and on the PXI real-time controller, plus a PXI chassis populated with a flexible assortment of NI signal conditioning modules (provided by the client).
The software used an object-oriented Actor-based architecture, which facilitates adding new signal conditioning modules and flexible communications between the host PC and the real-time controller.
Developing an industrial monitoring system for ultrasound-based sensing in a harsh environment
Client – Energy Research Lab
Our client was experiencing problems making temperature measurements in a hostile, irradiated environment. Traditional temperature sensors don’t last long in this environment, so our client was developing a sensor designed for these conditions.
Special equipment is required to drive this sensor. It’s an active sensor requiring an ultrasound pulser/receiver (P/R) and high-speed digitizer to make it function.
The client's prior attempt at using an original set of special equipment suffered reliability and connectivity issues. This reduced reliability was of critical concern because the sensor had to operate for years without downtime.
In addition, the existing application was incapable of displaying live data and lacked a user-friendly interface. On top of that, data analysis had to be done after the application was run, causing delays.
Our client needed reliable and robust hardware to drive the sensors and an application that would eliminate the challenges associated with the existing system.
Viewpoint accomplished the following:
Evaluated two different ultrasonic P/R sensor driver hardware solutions to select a solution that would provide the connectivity robustness, configurability, and correct sensor driver characteristics required for the given sensors.
Decoupled the digitizer embedded in the original P/R by adding a PXI digitizer with better capability.
Provided backward compatibility with previous measurement hardware to aid in performance comparisons with the new hardware.
Developed a LabVIEW-based application that corrected all the issues with the existing application including real-time data analysis, real-time data visibility and a modern user interface. The new application also provided sensor performance traceability using the sensor’s serial number.
The enhanced measurement system offers the following benefits:
Reliable sensor subsystem to ensure uninterrupted data acquisition.
Measurement hardware configurability for sample rate, collection duration, and pulsing repetition rate.
Application configurability for automating the analysis, historical archiving, and results reporting.
Real-time data analysis.
Sensor traceability through serial number and data files.
Engineering mode to take control of the entire measurement system.
Improved data logging to include raw and analyzed data.
Improved application user experience via robust data collection and configurability.
The deployed temperature monitoring system consisted of the following components:
COTS pulser/receiver hardware for driving the sensors.
COTS high-speed DAQ for retrieving ultrasound signals.
A LabVIEW-based software application to provide real time data monitoring, error/alarm notification, data analysis, data logging, part traceability and backward compatibility with the older sensor driver hardware.
At maximum throughput, the Aedis systems needed to consume and produce about 800 MB/s per slot.
A large company involved in C4ISR was developing a system for a new high-speed digital sensor device. Viewpoint was contracted to build a test system used in design validation and ultimately endurance testing of the sensor. Since the sensor was a component of a larger system which was being developed at the same time, another test system was created to simulate the sensor by feeding signals into the system.
Both the amount of data and the frequencies of the various digital signals were nearly at the limit of hardware capabilities. At maximum throughput, the systems needed to consume during record and produce during playback about 800 MB/s per slot. The FPGA clock on the FlexRIO had to run up to 300 MHz. The skew between triggers for data transmission needed to be less than 5 ns, even across multiple FlexRIO cards and even though the parallel data paths had inherent skews associated with the sensor. Finally, the systems needed to handle clocks that might be out-of-phase.
Achieving these requirements required significant engineering design in the face of multiple possible roadblocks, any one of which could have prevented a successful outcome.
Furthermore, as usual, the development timeline was tight: in this case, just 3 months.
To meet the timeline, we had to work in parallel across several fronts:
LabVIEW-based application development for both record and playback
LabVIEW FPGA development for marshalling data between the controller and DRAM
Custom FAM circuit board design and build
FlexRIO FPGA CLIP nodes and code for low-level data handling
This sensor had several parallel data paths of clock and data lines, with clock speeds up to 300 MHz on each path, requiring exacting design and build of a custom FlexRIO Adapter Module (FAM) and unique custom CLIP nodes to extend the FlexRIO FPGA capabilities. The FAM also had a special connector for interfacing to the customer’s hardware.
Additional NI hardware and software completed the system components.
The choice to base the Aedis emulators on NI hardware and software was critical to completing this project. The open architecture in both hardware (custom FAM) and software (CLIP Nodes) enabled us to include some very creative extensions to the base toolset, without which the project would not have succeeded on the tight allotted schedule and predetermined budget. By combining COTS and custom components, we were able to stretch the capabilities of the hardware and software very close to their maximum specifications, far more cost effectively than a purely custom design.
The host application, written in LabVIEW, managed the configuration of the data acquisition and the control of the LabVIEW RT-based FlexRIO systems. The configuration primarily dealt with the number of sensor channels in use, skew settings between digital lines, and other parameters that dealt with the organization of the data passed between the sensor and the FlexRIO.
Two FlexRIO applications were written, one for record and one for playback. Each FlexRIO application was written in LabVIEW and managed the configuration of the FlexRIO cards and the movement of data between the FlexRIO cards and the RAID drives. Note that Windows was required to support the RAID driver. Between 10 and 32 DMA channels were used for streaming, depending on the number of sensor channels in use.
Each FlexRIO application also had an FPGA layer, written in LabVIEW FPGA and enhanced with custom CLIP nodes. For the record application, we developed a custom DRAM FIFO on the FPGA to absorb the latencies on the PXIe bus. For the playback application, we were able to stream directly from DRAM.
The FlexRIO and stock FAMs from NI were initially considered as candidates for this project. Clearly, working with commercial-off-the-shelf (COTS) components would be most effective. Three options were available at the project start that could accommodate the required clock frequencies, but none satisfied both the required channel counts and the skew/routing constraints. Hence, we had to design a custom FAM. This decision, made before the start of the project, proved wise in hindsight: the parallel development path resulted in some shifts in sensor requirements that could be accommodated with the custom FAM but might have led to a dead end with a COTS FAM.
In LabVIEW FPGA, a CLIP Node is a method to import custom FPGA IP (i.e., code) into a LabVIEW FPGA application. CLIP stands for Component-Level Intellectual Property. We needed to use special Socketed CLIP Nodes (i.e., VHDL that can access FPGA pins) for this project because we could expose additional features of the Xilinx Virtex-5 not exposed in LabVIEW FPGA by accessing Xilinx primitives. Some specific features were:
Faster FPGA clocking
Additional clocking options
Individual clock and skew control
Custom PLL de-jitter nodes
Essentially, the majority of the FPGA code was developed in LabVIEW FPGA, with CLIP Nodes used to interface the signals between the FlexRIO and the FAM.
FlexRIO Adapter Module
As mentioned earlier, we had to create a custom FAM because of the need to route high speed signals from customer-specific high density connectors while synchronizing signals across multiple data channels and FPGA modules to within one (300 MHz) clock cycle.
At these high speeds, the FAM needed careful buffering and impedance matching, both on the signals and on internal components of the FAM PCB. At the start of the design, we used Mentor Graphics HyperLynx high-speed DDR signaling simulation software to minimize signal reflections prior to building actual hardware. This step saved countless hours of spinning physical hardware designs.
We designed the FAM to allow channel routing and access to additional clock and trigger pins on the Xilinx chip and PXIe backplane.
Automated Manufacturing Test System for Electronic Medical Devices
Using PXI and LabVIEW for modular testing of over 1,000 different models
Client – a medical device manufacturer and repair depot
Our client manufactures hospital patient pendants used to control bed frame, nurse calling, and TV functions. The company was also growing after adopting a business model of serving as a repair depot for older designs, both its own pendants and those of other manufacturers. As such, their products are very high mix and medium volume.
The basic functions of all these pendant models are closely related, so the client wanted a single automated test system that could verify functionality for 1000s of models. And, since the products are medical devices, the testers needed to comply with 21 CFR Part 820 and Part 11.
The testers were designed to support the common measurements needed to test the circuitry of the devices as well as the complex signals required to drive TVs and entertainment systems. A test sequence editor was created which allowed the client to create as many test sequences as needed to test each specific pendant model by creating a list from pre-defined basic measurement steps configured for each specific measurement.
For example, each device had a power supply, the voltage of which needed to be tested. To test a specific model, a voltage measurement step was added to the model-specific sequence and configured with the upper and lower measurement limits for the power supply. The complete test sequence was created by adding and configuring other measurements test steps as needed. Each test step could also be configured with switch configurations to connect the measurement equipment, such as a DMM, to the proper pins on the device circuit board.
Using this configuration process, the client was able to support the testing of well over 1000 models without any programming. A separate application was developed to create these test sequences which were saved as XML and fed to the test system for selection and execution.
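As a sketch of this configuration pattern, a limit-checked measurement step might be modeled as below. This is illustrative only; the names, fields, and limits are assumptions, not the client's actual schema.

```python
from dataclasses import dataclass

@dataclass
class MeasurementStep:
    """One configurable test step, e.g. a power-supply voltage check.

    Hypothetical model -- field names are invented for illustration.
    """
    name: str
    lower_limit: float
    upper_limit: float

    def evaluate(self, measured: float) -> bool:
        # Pass if the measured value falls within the configured limits.
        return self.lower_limit <= measured <= self.upper_limit

def run_sequence(steps, measurements):
    # Execute each step against its measurement; return per-step dispositions.
    return {s.name: s.evaluate(m) for s, m in zip(steps, measurements)}

# A model-specific sequence is just a configured list of generic steps.
sequence = [
    MeasurementStep("5V rail", 4.75, 5.25),
    MeasurementStep("3.3V rail", 3.15, 3.45),
]
results = run_sequence(sequence, [5.02, 3.60])
# "5V rail" passes; "3.3V rail" fails its upper limit
```

The point of the pattern is that adding support for a new model is pure configuration: a new list of pre-defined steps with new limits, and no new code.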
The test execution was managed by NI TestStand and the pre-defined common test steps were written in LabVIEW. The test sequences and test results were interfaced to the client's SQL database, which they used in their ERP system. This ERP system used the results produced by the test system to help manage the workflow of production, for example by assuring that all units had passed testing before being shipped. Part 11 compliance was handled through checksums used to detect whether results had been modified.
Test sequence editor used to develop and maintain tests for 1000s of device models
Enabling our client to create test sequences without programming reduced overall development costs by about 50%.
Test sequences and test results were stored in the client’s ERP SQL-compliant database for integration with manufacturing workflow
Modular and common software developed for the test systems reduced the V&V effort during IQ & OQ by allowing testing of the test execution application separate from the individual test sequences.
The automated test system was able to execute each test sequence in three different modes: engineering, service, and production. Each mode was designed for a particular department on the manufacturing floor. Typically, the manufacturing engineer would verify a sequence by executing it in engineering mode. Once the test sequence passed with its configured parameters, it was approved for production testing.
During actual product testing, an approved and digitally-signed test sequence is loaded and executed via the test sequencer, designed for automated production. During execution, test results are displayed to the operator and simultaneously pushed to a database. The automated test system produces a record for each tested device, indicating the disposition of each test step and the overall performance of the device. All result data are digitally signed and protected from tampering.
The architecture of the test system follows a typical client – server model.
All client stations communicate with a central ERP and SQL server, and each computer is secured by applying operating system security. The SQL server contains all of the test definitions, device history records, and results. Information from it can be queried at any time by quality engineers throughout the organization, assuming they have proper login access. This provides real-time status about products ready for shipment. Also, no user other than the software running on the client stations has permission to write or modify any information in this database. The client is able to keep the server in a protected area, separated from the manufacturing environment, while the client test stations are placed throughout the manufacturing area.
Surprisingly, only twelve test steps, individually configured and combined, were needed to create sequences to test well over 2000 unique models. Test steps are capable of measuring basic resistance, current, and voltage parameters as well as performing sound quality measurements and high-speed digital waveform analysis. Several tests were designed to be subjective while others are fully automated and test to a specified acceptable tolerance. During configuration, each test step requires the manufacturing engineer to enter expected values and tolerance limits to define pass/fail status. For testing, the devices are attached to a generic interface connection box and the test system makes the appropriate connections and measurements.
Low-level measurement drivers to interface to a DMM, signal generator, switches, and data acquisition cards.
Measurement-based test steps
Test sequence execution
Test sequence management
User access management
Test report creation and management
Verification of test sequence content and ability of user to execute
Verification of the content of the test results
NI PXI chassis and controller
NI PXI acquisition cards for analog measurements
NI PXI acquisition cards for digital input and output
NI PXI DMM for precision voltage and resistance measurements
Using NI PXI and LabVIEW as a common architecture for multiple test systems testing several subassemblies
Client: a manufacturer of automated blood analysis machines
Our client was embarking on a complete redesign of their flagship automated in-vitro Class 1 blood diagnostic machine. In order to meet schedule goals, the design and build of several automated test systems needed to occur in parallel with the overall machine. In a major design paradigm shift, many components of the machine were being manufactured as modular subassemblies, every one of which was an electro-mechanical device. Thus, multiple testers were required to test each of the specific subassemblies in the machine. And, since this was a medical device, the testers needed to comply with 21 CFR Part 820 and Part 11.
With a looming deadline, the testers needed a common architecture so that each tester could leverage the development of the others. Since each subassembly could be tested independently of the overall machine prior to final assembly, the design of the testers was based on a common measurement and reporting architecture, written in LabVIEW, that interfaced to the customer's Part 11-compliant database for test procedures and measurement results. Furthermore, procedures and validation checks for calibration of the testers were part of the overall test architecture.
Modularization of the test system architecture aided development and maintenance
Reduced overall development costs due to standardization of test sequence steps and reporting
Both test sequences and test results were stored in a managed database that satisfied 21 CFR Part 11 requirements
Modular and common software developed for the test systems reduced the V&V effort during IQ & OQ.
Since multiple subassemblies were being tested, with one part-specific test system per part, the automated test systems used as much common hardware as possible to simplify the development effort through common hardware drivers and test steps. Measurements were made with PXI equipment. Test steps and the test executive that executed the test sequence(s) were developed using LabVIEW.
The types of test steps required to verify the proper operation of each subassembly were categorized into basic operations, such as voltage reading, pulse counting, temperature reading, and communications with on-board microcontrollers. The specifics of each measurement could be configured for each of these measurement types so that each test step accommodated the needs of the specifics of each subassembly. For example, one subassembly might have needed to run the pulse counting for 2 seconds to accumulate enough pulses for accurate RPM calculation while another subassembly might have only needed 0.5 seconds to accomplish that calculation.
The configuration of a test step algorithm was accomplished via an XML description. The accumulation of these XML descriptions of each test step defined the test sequence run on that specific subassembly.
Test results were associated with these test sequences by completing the entries initially left blank in the test sequence, so that all results were explicitly bound to the test sequence.
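A minimal sketch of this binding pattern is shown below, assuming a hypothetical XML shape (the client's actual schema is not public): the sequence ships with blank result attributes, and testing fills them in so the results are explicitly bound to the sequence that produced them.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML shape -- element and attribute names are invented.
SEQUENCE_XML = """
<sequence model="pump-A12">
  <step type="voltage" lower="4.75" upper="5.25" result=""/>
  <step type="pulse_count" duration_s="2.0" lower="1180" upper="1220" result=""/>
</sequence>
"""

def bind_results(xml_text, measurements):
    # Fill the blank result attributes so results stay bound to the sequence.
    root = ET.fromstring(xml_text)
    for step, value in zip(root.findall("step"), measurements):
        step.set("result", str(value))
    return ET.tostring(root, encoding="unicode")

bound = bind_results(SEQUENCE_XML, [5.02, 1203])
```

One document then carries both the test definition and its outcome, which simplifies traceability compared with keeping results in a separate file.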
The operator user interface distinguished between released and unreleased test sequences. With unreleased test sequences, engineers could try the most recent subassembly designs without needing to wait for final validation. The released sequences were only available to test operators. This login-driven branching was managed using the Windows login, so that the client employees could use their company badge-driven login process. Once logged in, the user would be able to execute the test sequence in automated mode, where all steps happen automatically, or manual mode, where one step could be operated at a time.
Furthermore, the Windows environment was locked down using built-in user account group policies to designate the level at which a user could access Windows or be locked into accessing only the test application.
During the V&V effort, each test sequence was verified for expected operation against both known good and bad parts. Once verified, the sequence was validated against the requirements and, when assured to be as expected, a checksum was applied to the resulting XML test sequence file and everything was saved in a Part 11-compliant database. Upon retrieval, when ready to run a test, the sequence was checked against this checksum to assure that it had not been tampered with.
Test results, saved as XML in the same file format as the test sequence, were also surrounded by a checksum to verify that no tampering had occurred.
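The checksum scheme can be sketched as follows. The actual hash algorithm and record format used in the system are not stated here; SHA-256 and the inline XML stand in as illustrative assumptions.

```python
import hashlib

# Sketch of checksum-based tamper detection for Part 11-style records.
# SHA-256 is an assumed stand-in for whatever checksum the system used.

def seal(record_text: str) -> str:
    # Compute the checksum stored alongside the record.
    return hashlib.sha256(record_text.encode("utf-8")).hexdigest()

def verify(record_text: str, stored_checksum: str) -> bool:
    # Recompute and compare; any edit to the record changes the digest.
    return seal(record_text) == stored_checksum

record = "<results><step name='5V rail' value='5.02' pass='true'/></results>"
checksum = seal(record)
assert verify(record, checksum)                              # untouched record verifies
assert not verify(record.replace("5.02", "5.30"), checksum)  # edit is detected
```

The same wrap-and-verify step applies symmetrically to both the sequence files and the result files.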
The IQ/OQ efforts were handled in a traditional manner with the client developing the IQ/OQ documentation, with our assistance, and then executing these procedures, again with our assistance.
Low-level measurement drivers
Measurement-based test steps
Test sequence execution
Test sequence management
User access management
Test report creation and management
Verification of test sequence content and ability of user to execute
Verification of the content of the test results
PXI chassis and controller
PXI acquisition cards for analog measurements
PXI acquisition cards for digital input and output
The client already had a test system in place, but it was aging and becoming unmaintainable. Increasing demands from the test engineers, combined with an old software architecture that did not lend itself to clean implementation of new features (new sequencer capabilities and ECU CAN communication), drove the need for a rewrite of the software application.
The updated product validation tester supports product validation of the UUT by automating long tests (sometimes a week or more) and providing the desired set point control, allowing the client to demonstrate more conclusively that their part met the stated specification. Viewpoint developed the software and the client selected the hardware.
Automate long duration tests
Improved operator UX by making controls and indicators more intuitive to the user as well as providing additional capability within one application.
Acquire ECU data along with measured UUT data to allow for engineering performance characterization analysis
Playback utility enables the Test Engineer to quickly view collected data to chart out a path forward for further testing.
Automate a Design of Experiments matrix of conditions, through new sequencer capabilities, to more quickly arrive at product characterization parameters.
All collected signals are now housed in one TDMS file instead of multiple files from different applications.
The UUT is a complete engine with a focus on one of the mechanical subsystems. Data is collected on over 100 channels, measuring temperature, vibration, strain, RPM, position, and pressure. Engine management data (e.g., component location, pressures, engine speed, and status flags) is collected via CAN. The engine speed is set via an analog output, and subsystem setpoints are sent to the ECU via CAN. SCXI is still used on some of the old test stands, but is being phased out in favor of cDAQ. The test system software was developed in LabVIEW.
Online Monitoring of Industrial Equipment using NI CompactRIO
Improving Maintenance of expensive industrial equipment
Client – Large Industrial Equipment Manufacturer
The maintenance of the equipment was not always done at the prescribed intervals because the cost of shutting down the plant is significant. This sometimes resulted in an equipment failure. This particular application is for equipment/machinery in the energy/power industry (a generator).
The online monitoring system monitors a particular parameter of interest to send warnings and alarms to the control room so that the operators know when maintenance needs to be performed on the particular part of interest. This system has been installed in multiple plants.
Enables condition-influenced maintenance intervals vs periodic intervals
Reduces probability of catastrophic failure by providing warning indicator
The system monitors the generator collector health. NI-based data acquisition hardware acquires the signal of interest, logs the raw data, processes the parameter of interest, and triggers/sends warnings and alarms to the control room. LabVIEW FPGA was used for analog and digital IO and a sensor check. LabVIEW Real Time was used for the calculation, data logging, serving data to the HMI and alarm/warning checking.
Touchscreen GUI for data/alarm display and system configuration
An automated system permits faster validation, unattended testing, and increased throughput, and can free up resources for other tasks during the weeks-long endurance test.
Client – A manufacturer of aircraft components in the mil-aero industry
New product development drove the need for a new endurance test system for product validation. The old systems were not designed to test the newly designed part (aircraft actuators), and the company didn’t have the time or resources to reconfigure existing systems to perform the testing required.
The new PXI-based endurance test system provides automated electromechanical testing, full data recording, report generation and a diagnostic panel for intelligent debug. Viewpoint selected the NI equipment, while the test consoles, and other components were selected and fabricated by the customer.
Full data recording with a data viewer enables post analysis, which provides the ability to review and analyze raw signals captured during execution. Channel examples are actuator LVDT position, load, current, and encoder actuator position.
Summary report capability allows the customer to document the amount of testing completed against the full endurance test schedules.
A manual diagnostic operational panel provides the ability to verify particular DUT functionality or components without running an entire schedule.
Systems can be paused and restarted to allow for “scheduled maintenance” of the DUT such as inspections, lubrication, etc.
The PXI-based endurance test system enables data collection, deterministic PID loop control, emergency shutdown, and a diagnostic panel for manual test and debug operation. The system runs endurance test schedules, which are defined as recipes for test execution. These schedules, which are customer-defined and DUT-specific, are designed to simulate as closely as possible the actual conditions the DUT would see in real-world application. LabVIEW RT was used for the deterministic looping for closed-loop control of actuator position and load control. LVDT demodulation was performed on a PXI FPGA card programmed with LabVIEW FPGA.
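As a generic illustration of the kind of discrete PID loop the RT target executed: the sketch below is not the actual control code (which ran deterministically in LabVIEW RT); the gains, loop rate, and toy actuator model are all invented for illustration.

```python
# Generic discrete PID controller with a toy first-order plant.
# Gains and plant response are illustrative assumptions only.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt                 # accumulate I term
        deriv = (err - state["prev_err"]) / dt        # finite-difference D term
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.05, dt=0.001)
position = 0.0
for _ in range(10000):                # 10 s of simulated 1 kHz control
    drive = pid(setpoint=10.0, measured=position)
    position += drive * 0.001         # toy actuator: position follows drive
# position settles near the 10.0 setpoint
```

In the real system the analogous loop ran at a deterministic rate on LabVIEW RT, closing the loop on actuator position and load rather than on a simulated plant.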
Full Data Collection for Real-Time and Post Analysis
Deterministic PID Loop Control
Diagnostics Panel for Manual Test and Debug
Endurance Test Schedule Execution
Hydraulic Control Panel for Source & Load PSI Control
Ability to run tests unattended and overnight reduces operator labor and compresses test schedules
Client – Major Aerospace Component Supplier / Manufacturer
The client had an older VB & PLC-based test system in place already, but it was obsolete. A new endurance test system needed to be developed to validate prototyped components (in this case, aircraft & aerospace bearings). Many of the prototypes are one-off, so it was important that the test system not destroy the component.
A new endurance test system was developed to validate prototyped components. The test system can be configured for automatic shutdowns so as not to destroy the component under test in the event of unexpected performance of electro-mechanical subsystem components. The updated endurance tester supports product validation by allowing the product to run under various test conditions (e.g. speed, load, oil flow, temperature) and collecting data for analysis.
Viewpoint developed the software and selected the NI hardware (other hardware was selected by the client).
Ability to run tests unattended and overnight eases operator labor and compresses test schedules
Data collection allows for offline engineering analysis
Automatic shutdowns reduce destruction of the prototype component under test
The updated cRIO-based endurance tester incorporates configurable profiles, data logging, and automatic shutdown to allow for safer extended validation testing. LabVIEW FPGA and LabVIEW RT were used together to interface with the test hardware sensors and controls. LabVIEW was used to create the HMI for the test system.
Closed loop control of bearing test oil flow
Axial load control
Driver for Emerson VFD
E-Stop and safety management (shutdowns based on alarm limits)
Data collection – temperature, pressure, flow, vibration, frequency
Multiple International Deployments Help Prove Product Meets Spec.
Each endurance test can run upwards of 6 months.
Client: Major Automotive Component Supplier
A new endurance test system was developed to provide more precise control of the setpoint. This additional precision enabled potential clients to review the product's performance in real-life situations. Each endurance test can run upwards of 6 months.
The updated endurance tester supports product validation by providing the desired parameter control method, allowing the client to demonstrate more conclusively that their part met the stated specification.
Viewpoint developed the software and selected the NI hardware for the first unit. The client is now deploying copies of this system to multiple international manufacturing plants.
Able to prove meeting a particular product specification of interest
Closed loop parameter control
Emergency shutdown functionality
The cRIO-based endurance tester provides closed loop control, data collection, and alarming with controlled and emergency shutdown functions. The operator can manually configure a test or load a saved configuration. After a manual operator check to make sure the setup is operating correctly, a successful test will run its full duration and stop on its own.
Creating an N-Up Tester to handle increased production volume demands
Enhanced throughput offers ROI payback period of less than 1 year
Automotive Components Supplier / Manufacturer
The company makes automotive components in very large volume, several part models each at more than 1 million per year.
The client’s primary concern was conserving floor space. They were completely out of spare manufacturing space.
Viewpoint created an N-up NI PXI-based Manufacturing Test System. In this case, N=6 because analysis showed that a 6-up electronic part tester allowed the test operator to cover the test time with the load/unload time.
At the high volumes needed, the client needed to parallelize as much as possible. The cost of 6 sets of test equipment and device sockets was less important than speed. Using the equation:
ProfitPerUnit x NumberAdditionalPartsPerYearAfterParallelizing > CostOfTestEquipment,
being able to completely parallelize made the number of extra units per year large enough that the payback time for completely duplicating the measurement instrumentation for each UUT socket was less than about 1 year.
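With invented numbers (the client's actual margins and equipment costs are not public), the inequality works out as in the sketch below.

```python
# Worked example of the payback inequality above. All figures are
# illustrative assumptions, not the client's actual numbers.

profit_per_unit = 2.50            # dollars of margin per part
extra_parts_per_year = 400_000    # added throughput from 6-up parallel testing
cost_of_test_equipment = 900_000  # duplicated instrumentation for 6 sockets

extra_profit_per_year = profit_per_unit * extra_parts_per_year
payback_years = cost_of_test_equipment / extra_profit_per_year
assert extra_profit_per_year > cost_of_test_equipment  # the inequality holds
# payback_years == 0.9, consistent with a payback period under 1 year
```

At these volumes even a modest per-unit margin dominates the one-time equipment cost, which is why full duplication of the instrumentation was the right trade.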
Paid for itself in less than 1 year by the enhanced throughput.
This approach consumed about 20% of the floor space that would have been required to duplicate the test system 5 more times (for a total of 6 testers).
Viewpoint developed an NI TestStand application that ran 6 instances of the test sequence independently of each other utilizing the duplicated PXI-based test equipment. The common parts of the overall master sequence were:
Startup check for the entire test stand
Shutdown of the entire test stand
Archiving the test results into the database
Part handling was managed by a PLC and robot which delivered the parts from a tray into the UUT sockets. Digital bits were used for signaling the test sequence which parts were present in their sockets and ready to test.
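The part-present signaling can be sketched as follows; the bit layout and helper names are assumptions for illustration, not the actual PLC interface.

```python
# Sketch of the digital handshake between the PLC/robot and the test
# sequencer: one "part present" bit per socket tells the sequencer which
# UUT sockets are loaded and ready to test. Bit layout is invented.

NUM_SOCKETS = 6

def sockets_ready(dio_word: int) -> list:
    # Decode a packed digital-input word into per-socket ready flags.
    return [bool(dio_word & (1 << i)) for i in range(NUM_SOCKETS)]

# e.g. the robot has loaded sockets 0, 2, and 3:
flags = sockets_ready(0b001101)
# flags == [True, False, True, True, False, False]
```

Each of the 6 independent test-sequence instances would poll only its own flag, which keeps the socket instances decoupled from one another.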
Reduced test time across several products by an average of ~25% and reduced time to create paperwork by ~3x
Manufacturer of high-voltage power supplies
The client already had an existing manufacturing test system in place. They wanted Viewpoint to enhance the tester due to an increase in production volume demand. Viewpoint reviewed the existing test system and noted 3 areas for improvement:
Automation available in the measurement instruments – most of the test equipment was automatable via some combination of serial, GPIB, or Ethernet interfaces. Furthermore, some equipment, such as an oscilloscope, could store and recall setup configurations, and the test operators already used these configurations to decrease setup time for the next test step. However, most of the test equipment setup was not yet automated.
Operator time spent on each test step – the client had been through a Lean assessment and had already done a good job of timing operations. However, we specifically noted that the operator was manually connecting to the test points and manually transcribing to paper the measurement results from instrument displays.
Automating the connections – many types of product models were being tested at this test system. Connecting the test equipment to all sorts of products would require either 1) many types of test harnesses and connectors or 2) a redesign of the products to make test connections simpler and quicker.
The enhanced automated test system included automation of instrumentation interfaces, a test executive to run the test sequences, automated test report generation, and automated test data archiving for the electronic UUT.
Reduced total test time across several products by an average of ~25%.
Time to create paperwork was reduced by ~2/3 due to automated data collection.
The enhanced test system included the following updates:
Test sequence automation
Automated test report generation
Automated test data archiving
Automation of instrumentation interfaces
Configurable automated test steps associated with each type of measurement instrument. The test operators would create a sequence of steps to setup each instrument and record the resulting measurement. The sequence of steps could be saved and recalled for each product to be tested, so the instruments could be used automatically.
New programmable meter – integrated a new DMM with a programmable interface to replace the one that was not automatable.
Foot switch integration – Since the connections to the test points were manual, a foot switch allowed the operator to take the measurement and advance to the next step.
The StepWise test executive platform managed the multiple test procedures created for the different products. StepWise also handled creation of HTML reports for every part tested.
It did not provide the ability for unattended operation
The thermal control had to be set manually
They wanted to do less manual review of the data
The client develops mission-critical products, so there was a desire to reduce manual operations: any anomalies have to be explained, and manual operations are typically more error-prone. They needed repeatable results that they could trust.
Viewpoint developed a new test system that utilized new hardware and software, augmented by existing low-level hardware and firmware. The test system was developed to perform both functional test for production and environmental testing, and was designed to handle up to 4 DUTs at once. The test system utilizes the StepWise test executive software with custom test steps, which allowed the client to create their own highly configurable test sequences. The system was developed in two phases, with the second phase adding support for an FPGA expansion backplane (NI CompactRIO chassis) to provide future capability for bringing some of the microcontroller sequence activity into the NI space. In addition, the previous version had a mix of serial, TTL, and USB instrumentation, which was not as robust as Ethernet-based instrumentation. Phase II involved upgrading to all Ethernet-based instrumentation and did away with the original test system's many manual toggle switches, which could previously be used in place of the software's programmable mode.
~40% test time reduction per unit
~25% reduction in anomalies that needed to be justified
A manufacturer of large industrial mission-critical equipment in the electrical energy / power industry.
Our client had three main goals in mind. They wanted to:
Decrease unanticipated downtime and maintenance expenses
Provide a more complete picture of machine operation and state
Improve equipment usage tracking.
The solution is a multi-node (i.e. multi-site) remote monitoring system that utilizes an NI cRIO-based controller with customized NI InsightCM monitoring software.
Monitors vibration signals to predict expensive equipment failures
Monitors current machine state via Modbus from other equipment in the system, including the primary system controller
Provides alerts via email when any designated parameter is out of range
The remote monitoring system monitors equipment condition by taking several vibration signal measurements along with reading over 500 Modbus registers. Local InsightCM vibration analysis on the cRIO extracts key features from the accelerometer data. Limit detection is run on these features and other equipment state and alarms are triggered when data is out of bounds. Information collected at multiple sites is sent to a central location either at periodic intervals or based on an alarm condition.
NI InsightCM software
Modbus register configuration & reading
Dead-band-style register data collection to decrease the amount of data captured and transferred
Dynamic signal data capture
Data transfer scheduling
Semi-real-time alarm channel display
NI IEPE Analog Input Module
Microsoft Windows Server to host the NI InsightCM server software
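As an illustration of the dead-banding approach listed above, the idea is to record a register value only when it moves more than a set band away from the last recorded value. A minimal Python sketch follows (the deployed system implemented this in LabVIEW on the cRIO; names and values here are illustrative):

```python
def deadband_filter(samples, band):
    """Keep only samples that move more than `band` away from the last kept value."""
    kept = []
    last = None
    for t, value in samples:
        if last is None or abs(value - last) > band:
            kept.append((t, value))
            last = value
    return kept

# Six simulated (timestamp, value) register readings; only three survive a band of 1.0
readings = [(0, 10.0), (1, 10.2), (2, 10.4), (3, 11.5), (4, 11.6), (5, 9.0)]
print(deadband_filter(readings, band=1.0))  # [(0, 10.0), (3, 11.5), (5, 9.0)]
```

For slowly changing registers this can cut the recorded data volume dramatically while still capturing every meaningful change.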
Our client produces welding consumables. These products are inspected for continuous improvement of product performance. Our client wanted to standardize their data collection method to improve product quality and utilize SPC (statistical process control) across multiple international manufacturing facilities.
The solution is a relatively straightforward data acquisition system measuring force, vibration and voltage for comparison across multiple international manufacturing facilities to support continuous improvement of product performance.
Standardization of data collection across multiple manufacturing sites
Ability to check product performance tolerances, which could trigger root cause analysis
Ability to analyze data across product runs and across sites for SPC
The system utilizes off-the-shelf data acquisition hardware from National Instruments along with custom LabVIEW code to perform force and vibration measurement and basic calculations such as RMS, minimum, and maximum. Each test generates an MS Word file showing summary data as well as graphs of each attribute over time. In addition, the program creates (and automatically archives) a complete data set of all data recorded during the trial and finally adds a line with all the summary results and comments to a Master log file. This Master log file can then be sorted by date, wire type, diameter, or any other input for analysis.
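The basic per-channel calculations mentioned above amount to the following. This is a hedged Python sketch, since the production code was written in LabVIEW:

```python
import math

def summarize(samples):
    """RMS, minimum, and maximum of one recorded channel."""
    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    return {"rms": rms, "min": min(samples), "max": max(samples)}
```

One such summary per channel, plus a comments field, is what would land on the Master log line for each trial.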
Utilize laser energy to heat thermoplastic or thermoset composite during an automated fiber placement manufacturing process.
Starting from a proof of concept developed by Automated Dynamics, Viewpoint developed the industrial embedded laser controller software for the automated fiber placement manufacturing equipment. The hardware utilized was an off-the-shelf CompactRIO controller from National Instruments.
Quantum produces manufacturing machine components that are used in the glass bottle forming process. Specifically, they supply plunger mechanisms that are used in the initial blank side formation of the glass bottle.
The engineers at Quantum recognized that they had an opportunity to improve the bottle formation process by adding position sensing to their plunger mechanisms. The ability to sense and record plunger positions would enable machine operators to monitor the travel of the Quantum plunger into the molten glass gob within the blank side mold, identify and diagnose potential hardware problems, and provide real-time feedback that could be used to better control the process.
Quantum needed a partner to implement real-time control and monitoring of the bottle forming process and selected Viewpoint for the task.
Viewpoint developed custom monitoring and control software that runs on off-the-shelf hardware. The software developed for Quantum is called TFA™ (Total Forming Analysis). The TFA™ software is a process monitor and control system for the hot side of the bottle forming process.
The software takes position information from the plungers Quantum supplies to the factories to show the travel of the plunger during the forming process. The software measures key aspects of the plunger position profile such as initial plunger load position, final position, and dwell time at the final position. When these measurements are found to be out of tolerance, the software communicates with the machine auto-reject system to ensure that bad bottles are removed from the system.
Moreover, the final plunger position is used as feedback to do closed loop control of the glass gob weight, controlling glass feeder tube height and/or needle heights to change the glass gob weight. This allows for precise control of container weight, making the most efficient use of raw materials while ensuring container quality.
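The final-position feedback described above is, at its core, a control loop: the deviation of the measured final plunger position from its target drives a correction to the feeder tube or needle height. A simplified proportional sketch in Python (the actual TFA™ control law, gains, and sign conventions are the client's and are not public; everything here is illustrative):

```python
def tube_height_correction(final_pos_mm, target_mm, gain_mm_per_mm=0.05):
    """Proportional correction to feeder tube height from plunger final-position error.
    Sign convention (illustrative): a plunger that travels too deep implies a light
    gob, so raise the tube to deliver more glass."""
    error_mm = final_pos_mm - target_mm
    return gain_mm_per_mm * error_mm
```

In practice such a loop would be averaged over several gobs per cavity and rate-limited so that it does not chase shot-to-shot noise.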
To accommodate multiple end-customer-driven hardware configurations, the off-the-shelf hardware selected was based on the National Instruments CompactRIO family of chassis to enable configuration of various input/output signal requirements.
Hardware Customization Flexibility – every one of Quantum’s customers wants something either a little or a lot different with their particular instance of the system. Using modular hardware allowed for swapping of I/O hardware.
Quick Response to Software Feature Requests – Quantum and Viewpoint were in constant communication to be able to implement new features and tweaks on fairly short notice (generally within a couple of weeks).
On-Site Support – Viewpoint engineers travel to Quantum’s customer sites with them as a team upon request.
The embedded process monitoring and control system consists of custom process monitoring and control software that runs on off-the-shelf hardware.
NI 9148 Ethernet expansion chassis
NI 9201 module for AI
NI 9425 module for DI
NI 9476 module for DO
Data Acquisition and Processing
Waveform Calculations (e.g., final position and dwell time)
Final Position control loop
Real-time per cavity plunger position graphs
Process trend graphs
Forming history graphs, showing a packet of the last forty final positions per cavity
Limits definition screens
System health summary, fault monitoring and auto-reject configuration
Plunger sensor calibration
Gb Ethernet communication with the DAQ devices (NI 9148 chassis)
TCP/IP Modbus communication with Schneider Electric motors for feeder tube and/or needle control
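The forming-history display listed above, showing a packet of the last forty final positions per cavity, maps naturally onto a fixed-length ring buffer. A Python sketch (the deployed code was LabVIEW; the readings are made up):

```python
from collections import deque

history = deque(maxlen=40)  # keeps only the most recent forty final positions
for pos in [50.1, 50.3, 49.9] * 20:  # 60 simulated readings for one cavity
    history.append(pos)
print(len(history))  # 40 -- the oldest 20 readings have been discarded
```

One such buffer per cavity is enough to drive the per-cavity history graphs.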
TFA™ is a registered trademark of Quantum Engineered Products, Inc.
Industrial Embedded – Using a cRIO for Rapid proof-of-concept Prototyping
FPGA-based motor control & RT-based loop control.
The NI cRIO platform allowed for rapid development/test cycles. There was as little as ~1.5 hours between a software change and a test.
This was a rapid proof-of-concept prototyping effort to quickly determine feasibility of auto-pilot flight.
The cRIO-based controller was able to allow the helicopter to auto-pilot routed waypoints.
There was as little as an hour and a half between a software change and a flight test: code updates could be flight tested in the morning, updated over lunch, tested again in the afternoon, updated one more time at night, and flown again the next morning. This allowed for rapid development of control laws.
The core system functionality consists of:
resolver-based BLDC motor control
position loop control
vehicle dynamics control
and flight logging.
Vehicle dynamics control and position control lived on the RT processor, while motor control and critical high-speed processing lived on the FPGA.
Designing an Automated Fuel Cell Validation Test Stand
Verifying a New Fuel Cell Design Through Automated Operation
Client: A major automotive manufacturer
Micro Instrument, an automation vendor that builds test and validation stands, has extensive experience with programmable logic controllers (PLCs) and stand-alone controllers for controlling repetitive motion, safeties, and other “environmental” parameters such as pressure and temperature. The company typically uses PLCs to reliably deliver discrete I/O control and standard PID loop control.
However, Micro Instrument’s customer, a major automotive company, was interested in investigating fuel cells as a power source and they needed to run these fuel cells under a wide range of conditions for extended durations, for both design validation testing and durability testing purposes. Furthermore, the client wanted to implement more advanced control algorithms than simple PID.
The customer knew they needed control loops that predicted system response so they could eliminate overshoot and/or achieve a faster approach to a setpoint. But, because the customer did not know in advance exactly what such “smart” controls would entail, it was beneficial to have the full power of LabVIEW to develop such controls. Providing this functionality with a PLC would be cumbersome, if not impossible.
The customer had some Compact FieldPoint hardware which they wanted to use for this project, so we needed to ensure that this equipment would be sufficient to deliver the required control performance and tolerances. Also, the system needed to conduct PID control in two forms – PWM and continuous control. Importantly, this FieldPoint hardware had a real-time controller running LabVIEW Real-Time.
We developed a flexible control environment using NI Compact FieldPoint and LabVIEW Real-Time to meet the customer’s system control demands. For example, to predict system response, we programmed the Compact FieldPoint to run control loops that were aware of imminent system-state changes and changed their control schemes accordingly.
As with most validation test systems, we needed to monitor conditions for safety. New product designs are often operated near the edges of safe operation in order for the designer to understand how the product performs in extreme conditions. For this fuel cell application, destructive over-heating and over-pressure could occur. Both digital and analog signals were watched in real-time to assure operation within reasonable bounds and allow a safe shutdown if the fuel cell ran into out-of-bound conditions.
The application used the following independent parallel loops:
Seven for PWM-based temperature control
Two for continuous pressure monitoring
Four for solenoid and sensor monitoring and control
Fifteen for safety monitoring
Data collected during the validation tests were saved to a local PC for later performance analysis and anomaly detection.
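Each of the safety loops above follows the same pattern: poll a signal at a fixed period and force a safe shutdown the moment it leaves its allowed band. A simplified, synchronous Python sketch (the real loops ran in parallel under LabVIEW Real-Time; the bounds, poll period, and shutdown action here are illustrative):

```python
import time

def safety_loop(read_value, low, high, shutdown, period_s=0.1, max_iters=None):
    """Poll a signal; on an out-of-bounds reading, call shutdown() and return the value."""
    i = 0
    while max_iters is None or i < max_iters:
        v = read_value()
        if not (low <= v <= high):
            shutdown(v)   # e.g. vent pressure, cut power to the fuel cell stack
            return v
        time.sleep(period_s)
        i += 1
    return None
```

Fifteen such loops, one per monitored safety condition, would run concurrently, each with its own bounds and poll rate.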
The combination of Compact FieldPoint with LabVIEW Real-Time enabled the customer to run the required custom control algorithms and it surpassed the capabilities offered by standard PLCs.
Client: Allied Reliability Group – A best-in-industry maintenance & reliability services company.
Improve route effectiveness
We developed data collector software that interfaces to the multi-technology compatible data collector hardware, Allied’s iReliability™ maintenance reliability software, and InsightCM, in order to provide a user interface that guides the route-based collection of data and stores that data on Allied’s cloud-based server for analysis. This custom solution is expected to be utilized on a daily basis in hundreds of facilities around the world, helping Allied provide its customers with a cost-efficient and scalable Condition Monitoring program.
Improve route efficiency by guiding the maintenance operator through the route-based collection process
Provide better managed data via route status reporting that is accurate and delivered in a timely manner
Reduce data collection errors by improving data collection automation as well as performing data quality checks during data collection
Improve understanding of events/alarm conditions by providing additional data collection when particular criteria are met
Integrate multiple Condition Monitoring technologies with a single piece of hardware and a consistent software platform
Client: A major manufacturer of data-critical three-phase uninterruptable power supplies
A major manufacturer of very large three-phase uninterruptible power supplies (UPSs) needed better measurement, analysis, and report generation capabilities. Their clients used these UPSs on mission-critical equipment, such as data warehouse server farms, communications equipment, and so on. Existing testing procedures used equipment that did not allow for complete simultaneous coverage of all sections of a UPS unit, from input to output. Our client wanted a better understanding of the signals on each of the three phases at various locations within the UPS, especially when power sources were switched or faults were induced.
Also, in the prior test procedure, factory acceptance reports were manually assembled for our client’s end-customers, delaying the final sign-off. Finally, since the end-customer might want to run a specially configured test or run a series of tests in a different sequence than some other end-customer, our client wanted to be able to rerun certain types of tests or run tests in a customer-specific order. Thus, the test sequencing needed to be flexible and editable, possibly on the fly.
Finally, synchronization between the data collection on all signals was critical to assess functionality, since all 3-phases of the UPS output needed to be in the proper timing relationship.
At a high-level, the majority of testing a UPS relies on knowing the reaction of the UPS to changes on the input side (such as a grid power outage) and changes on the output side (such as an immediate heavy load). Thus, many of the tests performed on a UPS deal with power quality measurements, such as defined by IEEE 519 or IEC 61000 series standards, which cover both continuous and transient operation. The StepWise test execution platform was utilized to allow the customer to develop arbitrary test sequences using the application specific test steps developed for the program.
Our solution used a cRIO to measure both current and voltage from each leg of the 3-phase power (and neutral) by using appropriate C Series modules connected to various voltage and current test points within the UPS. The cRIO had enough slots to allow a single cRIO to measure a single UPS.
Assessment of continuous operation mainly reviewed the UPS output power quality. Here, it was important to know the amplitude and phase of each leg of the 3-phase power. Synchronous data acquisition between all voltages and current channels was needed for proper timing alignment of collected data points.
Assessment of transient operation was often a review of power ripple and recovery time. For example, in the event of grid power loss, a UPS would switch over to backup power, with the result being a small transient on the output of the UPS. Again, the voltages and currents needed to be collected synchronously to assure that event timing was aligned.
For increased power capacity, the UPSs could be connected in parallel. When ganged together, the continuous and transient behavior of each UPS needed to be compared to the others, in order to capture the behavior of the entire combined system. Consequently, each cRIO (one per UPS) had to share a clock to enable synchronous data collection across all cRIOs. A timing and synchronization module was placed into each cRIO chassis with one cRIO acting as the master clock source and the others being slaved to that clock.
The overall test system architecture has a master PC communicating with each cRIO. Each cRIO was placed into activity states by the master PC, such as “arm for measurement”, “transfer collected data”, and “respond with system health”. This arrangement enables the number of cRIOs to shrink or grow depending on the number of UPSs being tested in parallel.
The test system connected the timing module in each cRIO in a daisy-chained configuration, leading to data sampling synchronization error of less than 100 ns between all cRIOs, which translates to about +/-0.001 degree phase error for 60 Hz power signals. This timing synchronization was more than sufficient to analyze the collected waveform data for power quality and transient structure.
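The conversion from sampling skew to phase error quoted above is simply skew × line frequency × 360°. A one-line check in Python (treating the sub-100 ns bound as roughly ±50 ns about nominal; the function name is ours):

```python
def skew_to_phase_deg(skew_s, line_freq_hz):
    """Phase error, in degrees, caused by a sampling-time skew at a given line frequency."""
    return skew_s * line_freq_hz * 360.0

print(skew_to_phase_deg(50e-9, 60))  # ~0.00108 degrees at 60 Hz
```

Even the full 100 ns bound yields only about 0.002 degrees, far tighter than needed for power-quality analysis.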
LabVIEW was used to create various configurable test steps that could be executed in random order as well as in an automated sequential manner. Our client was thus able to test a UPS in a predefined manner as well as react rapidly to queries from their customer when they were viewing a factory run-off test. For example, the customer might ask to re-run the same test several times in a row to validate consistent responses.
Each type of test included automated analysis routines that numerically calculated the relevant parameters against which the UPS was being checked. Not only was this automated calculation faster, but it reduced mistakes and improved reproducibility as compared to the previous post-testing partially manual calculations.
Data from all tests, even repeated ones, on a given UPS were archived for quality control purposes and made a part of the device history for that UPS.
Finally, the report generation capability built into this test system was far superior to the previous methodology, allowing our client to hand their customer a professional report package almost immediately after testing was complete. Customer satisfaction improved substantially with this state-of-the-art test system.
Client: A major manufacturer of implantable cardiac and neural stimulators
Our client needed several extremely reliable test systems to test the batteries that power their implantable medical devices. These new test systems were needed for two main reasons. First, they needed to upgrade existing obsolete test equipment based on antiquated hardware and software. Second, new battery designs could not be tested on the old equipment.
A critical aspect of the new test system was the need to detect any excessive charge being extracted from the battery, thus rendering it unsuitable for surgical implantation. Thus, the test system needed to monitor the total energy withdrawn from a battery during testing to assure that it never exceeded a certain limit while also offering precise control of the type of pulses being drained from a battery.
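The running energy check described above can be expressed as a numeric integration of instantaneous power over the sampled voltage and current streams. A Python sketch (the deployed system performed this on the PXI RT controller in LabVIEW; the rectangular-rule integration and limit value here are illustrative):

```python
def total_energy_joules(voltages, currents, dt_s):
    """Rectangular-rule integral of v(t) * i(t) over uniformly spaced samples."""
    return sum(v * i for v, i in zip(voltages, currents)) * dt_s

def over_budget(voltages, currents, dt_s, limit_j):
    """True when the cumulative energy drawn from the battery exceeds the allowed limit."""
    return total_energy_joules(voltages, currents, dt_s) > limit_j
```

In the real system this total would be accumulated sample by sample at over 1000 S/s, so the test could abort the instant the budget was exceeded.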
All test results had to be stored in a database in order to maintain device history for each battery manufactured for archiving, quality control, and process improvements.
The updated manufacturing test system is PXI-based along with a custom micro-controller-based circuit board for some low-level control. Each PXI controller communicated to the microcontroller (uC) on the custom PCB via CAN. The uC controlled the current drain from the battery while monitoring actual current and voltage from the battery at over 1000 samples per second using a precision 6.5 digit PXI DMM. Additionally, each PXI chassis was used to test many hundreds of batteries. Signal connections were handled by several switch multiplexers. Overall control of all the PXI testers was managed by a host PC connected to the PXI controller.
Reduced test system cost vs complete COTS solution with combo LabVIEW RT on PXI and firmware on microcontroller-based custom circuit board
Enabled tight control of DUT operation on controller with microsecond level responsiveness while being supervised by higher-level PXI RT
Quick-reaction test abort capability
Test results stored to database for archiving, quality control, and process improvements
In a simplified view, the testing proceeded by pulsing the battery with a series of different durations and varying amperages. The exact sequence of this pulsing is unique for each DUT model. Measurements were made using a PXI filled with various NI boards such as DMMs, for accuracy, and data acquisition cards, for general purpose use.
Additionally, the pulsing amperage levels needed to be tightly controlled in order to know that the tests had been performed properly. Thus, a real-time amperage control scheme had to be implemented to maintain the level requested for the pulse. We chose to accomplish this control via an analog control circuit on a custom Viewpoint-developed circuit board. This board was controlled by a Microchip PIC microcontroller. The LabVIEW RT application communicated with the microcontroller to set up the pulsing sequence and coordinate the start and stop of the pulsing and the NI acquisition hardware.
This custom circuitry also reduced the overall cost of the test system by about 40%.
The engineering time to design this custom circuitry was more than offset by the reduction in material costs because more than 10 test systems were deployed, allowing the non-recurring engineering effort to be shared between many systems.
When no critical issues were detected, the waveforms acquired by the PXI system were stored and then analyzed to determine the viability of the DUT. The pass/fail disposition, the waveforms, the total energy consumed, and other test results were then passed along to a master PC that managed all these results in a database for archiving, quality control, and process improvements, each set of results being tied to the unique unit serial number.
The test systems provided reliable operation for testing the large annual production volumes of the mission-critical DUTs.
LabVIEW RT – for managing the microcontroller functions and overall data collection and safety monitoring
Microcontroller application – to provide precision pulsing of the batteries
Communicate to the host PC – to both receive pulsing instructions and configurations and to return pulse waveforms for each battery tested.
Condition Monitoring – Improving the Uptime of Industrial Equipment
Monitoring the Health of Industrial Equipment
Client: A large industrial company that uses industrial-grade compressors.
Increase awareness of potentially harmful operating conditions.
Record detailed data upon event detection.
Reduce unnecessary equipment shutdowns due to spurious vibration transients.
We utilized an off-the-shelf controller (NI cRIO) combined with custom software to create the first system with roughly two man-months of effort. This solution has been installed in several facilities and is projected to be installed in hundreds of facilities around the world.
Send alerts via email when potentially harmful operating conditions occur.
Record detailed data upon event detection for failure analysis and predictive maintenance.
Suppress spurious vibration transient signals to reduce unnecessary equipment shutdowns.
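A common way to suppress spurious vibration transients, as in the last goal above, is a persistence (time-above-limit) filter: an alarm fires only when the limit is exceeded for several consecutive samples. A Python sketch (the deployed logic ran on the cRIO in LabVIEW; the limit and persistence count are illustrative):

```python
def persistent_exceedance(samples, limit, n_required):
    """True only when `limit` is exceeded for n_required consecutive samples,
    so a single spurious spike cannot trip a shutdown."""
    run = 0
    for v in samples:
        run = run + 1 if v > limit else 0
        if run >= n_required:
            return True
    return False
```

Tuning n_required trades off nuisance trips against how quickly a genuine fault is flagged.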
For this application, Dresser-Rand needed an extensible system capable of monitoring numerous signals interfaced to a large gas turbine. Well over a thousand signals needed to be collected from an extremely varied set of data acquisition devices and instruments. The configuration of this system and viewing of data needed to be available from any of a number of computers connected to the data acquisition network. Also, data needed to be available for additional processing on other connected networks. Dresser-Rand required that all of the components necessary to run a test, such as the server, database, acquisition, configuration, and viewing, be able to run on one computer or be distributed over several computers.
This system utilizes a client-server architecture to acquire signals from a variety of devices and logs the data to a central SQL Server database. The data is then processed and viewed on remote terminals. It is modularly designed to facilitate changes in acquisition hardware as well as viewing and processing software. There are three important components to this application: a SQL Server data management system, TCP/IP packet-based messages for configuration and data, and a flexible, application-independent driver model.
National Instrument’s LabVIEW was used for the bulk of this project. C, Visual Basic, and Fortran were also used to develop analysis routines and interface with various pieces of hardware.
TCP/IP packet based messages for communication of data and commands
100base-T local network with bridge to other company/worldwide networks
Remote configuration and viewing
SQL Server database
High channel count (1000+ signals)
Flexible data acquisition system
Diverse data acquisition devices: DAQ, GPIB, VXI, RS-232, PLC
Common driver model – drop in drivers, self-aware configuration
Common calculation model – drop in calculations, self-aware configuration
Flexible GUIs with drop in screens
Several software technologies used for various aspects of the project: LabVIEW, Microsoft SQL Server, Microsoft PowerStation Fortran, Microsoft Visual Basic, Microsoft C, Microsoft Access
Condition Monitoring for Electric Power Generation
Monitoring generator and turbine components of power generation equipment
The CompactRIO-based system has allowed for continuous monitoring, rather than just a periodic review of turbine and generator performance. In addition, by combining the FPGA and the RT processor in a physically small device, the solution has been able to ensure very fast data acquisition, data reduction, and sophisticated analysis.
Client: A multi-national power generation equipment manufacturer
Continuous monitoring of power generation equipment can have a great impact on maintaining a reliable flow of power to consumers as well as alerting the power generation equipment operator to potential equipment damage if timely repairs are not made.
This case study will focus on two measurement systems utilized by a multi-national power generation equipment manufacturer to monitor the generator and turbine components of their power generation equipment.
The manufacturer’s systems needed relatively high-speed waveform sampling, well-suited to the National Instruments CompactRIO platform. Viewpoint Systems provided technical assistance in the development of these systems.
The difference in the types of analyses and data rates of the measurement systems required a flexible yet capable hardware platform. Each system needed to work on a generator outputting 50 Hz AC or 60 Hz AC.
The CompactRIO platform and LabVIEW proved to be an excellent solution for the electric power generation condition monitoring system’s data acquisition and analysis needs. The small size and robustness of CompactRIO allowed the system to be placed at a preferred location. In both the flux probe and the blade tip timing, the CompactRIO FPGA could acquire and pre-process the data. The CompactRIO successfully managed – and continues to manage – all analysis, data archiving, and communication with a host PC.
In the case of the tip timing, the data rates were high enough that the detection of the tip location for each signal needed to be performed in the FPGA so that the real-time (RT) layer received a much-reduced data rate of tip locations. The RT processor was able to perform higher level analyses on these timings. Occasionally, a snapshot of a raw tip timing waveform could be passed to the RT processor for archiving and presentation to an engineer. However, due to the data bandwidth and processor loading of the CompactRIO, such snapshots must be infrequent.
For both systems, a master PC managed the operator user interface, long-term data collating, reporting, and archiving of files and statistics. Each CompactRIO connected to this master PC via a TCP/IP connection.
By deploying CompactRIO devices, the multi-national power generation equipment manufacturer achieved a cost-effective method of monitoring the power generation facility equipment, ensuring detection of operational issues quickly and easily.
Both measurement systems described required sampling rates greater than 10 kHz, restricting the use of traditional PLC-based data acquisition devices and requiring a programmable automation controller (PAC). Each system measured the performance by connecting to special sensors and associated signal conditioning, provided by our customer, such that the data acquisition equipment only needed to support ±10 V signals. Furthermore, each of these systems needed to push data to a master PC for data trending, result archiving, and operator display.
Despite the significant differences in the measurement types, Viewpoint Systems was able to utilize a common set of data acquisition, processing, and connectivity tools, based on the NI CompactRIO platform and LabVIEW, to monitor the system.
More information about each measurement system follows.
The flux probe system looks for shorts in the windings of the generator. Each time a winding passes under the flux probe, the probe output increases. When a winding is shorted, the field created by the winding is reduced and detected as a lower amplitude output by the flux probe. The position of a shorted winding inside the generator can be located by measuring a key-phasor signal that pulses once per revolution and converting the timing offset of this weakened signal into an angular position. Both flux and key-phasor signals are measured at about 50 kS/s.
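Locating a shorted winding, as described above, reduces to converting the time offset between the key-phasor pulse and the weakened flux peak into an angle. A Python sketch (assumes constant shaft speed over the revolution; the function and variable names are ours, not the product's):

```python
def event_angle_deg(t_keyphasor_s, t_event_s, rev_period_s):
    """Angular position of an event relative to the once-per-revolution key-phasor pulse."""
    return ((t_event_s - t_keyphasor_s) % rev_period_s) / rev_period_s * 360.0
```

For example, on a rotor with a 20 ms revolution period, a weakened peak arriving 5 ms after the key-phasor pulse sits at 90 degrees around the rotor.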
Figure 1 shows an example signal output by a flux probe. The local peaks are indicative of winding current. Automated analysis of the amplitudes of the flux signals can be challenging due to changing waveform shape as a function of generator load and severity of shorts.
Figure 1 – Example flux signal over a single rotation
The turbine tip timing system looks for displacement of each turbine blade tip from nominal position. At slow rotational speeds, the spacing between each tip closely follows the uniform blade spacing. At higher speeds, vibrations and resonances can make the blade tips wobble slightly, causing small deviations in the timing of the tip passing by a sensor.
A special proximity sensor detects the tip of the turbine blade, and can be based on optical, eddy-current, microwave, and other techniques. Any positional deviations of a tip from nominal give indications about the mechanical forces on the blade as well as compliance of the blade to those forces as the blade ages. Specifically, each blade has natural resonances and compliance, both of which can change if the blade cracks.
A turbine typically contains several stages and each stage contains many blades. See Figure 2 below for an example. The number of tip sensors per stage is variable; if blade twist is measured, at least two sensors are oriented perpendicular to the rotation direction. Also, the acquisition rate from each sensor is fast. For example, consider a stage with 60 blades, the width of each blade occupying about 1/10 the space between adjacent blades, and a generator running at 3600 RPM (60 Hz). The tip sensor would detect a pulse every 1/3600 s, lasting for less than about 1/36000 s, as the blades passed by. Accurate location of the pulse peak or zero-crossing then requires sample rates over 100 kS/s. Because multiple sensors are typically used, tip timing measurement systems can easily generate 10s of MBs of data per second.
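The arithmetic in the example above can be checked directly. The sketch below reproduces it in Python, using an illustrative criterion of ten samples across each pulse to locate its peak or zero-crossing:

```python
def tip_timing_rates(n_blades, rpm, blade_fill_fraction=0.1, samples_per_pulse=10):
    """Blade-pass interval, pulse duration, and the sample rate implied by wanting
    samples_per_pulse points across each pulse."""
    rev_per_s = rpm / 60.0
    pass_interval_s = 1.0 / (n_blades * rev_per_s)    # 1/3600 s for the example above
    pulse_s = pass_interval_s * blade_fill_fraction   # about 1/36000 s
    sample_rate_hz = samples_per_pulse / pulse_s
    return pass_interval_s, pulse_s, sample_rate_hz
```

For 60 blades at 3600 RPM this gives a 360 kS/s target per sensor, comfortably over the 100 kS/s floor cited above, and consistent with tip timing systems generating tens of megabytes of data per second.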
Remotely Monitoring Electrical Power Signals with a Single-Board RIO
Electronics Design for sbRIO Mezzanine Card Combines Custom Needs with Flexibility
Client: A designer and manufacturer of leading-edge electrical power monitoring equipment.
Smart Grid investment is growing. Two important premises for Smart Grid design are access to local power sources and an understanding of loads and disturbances on the grid at various locations. These local power sources are typically alternative, such as solar and wind, which have intermittent power levels. Since the levels fluctuate, an important feature of proper Smart Grid operation is handling these erratic supplies. Optimal understanding of these disturbances and load changes increasingly requires measurements on individual AC power cycles.
Local power analysis systems typically have constraints in equipment cost, size, and power usage balanced against the need for simultaneous sampling front-end circuitry and custom data processing algorithms on the back-end. Furthermore, many of these systems are presently deployed as prototypes or short-run productions, requiring a combination of off-the-shelf and custom-designed components.
A custom RIO Mezzanine card was designed and built for the National Instruments Single-Board RIO platform to provide access to simultaneously-sampled signals from the 3-phase and neutral lines of an AC power source. Timing synchronization between physically-separated installations was provided by monitoring GPS timing signals. Custom VIs were developed to retrieve the sampled data points and GPS timing for subsequent processing and analysis.
Figure 1 – Power Line Data Acquisition sbRIO RMC Module with GPS Timing
We needed 8 channels of simultaneously-sampled analog inputs (AI), each capable of sampling at least 50 kHz. These AI channels sample the voltage and current of the neutral and three phase power lines. Furthermore, to coordinate power and load fluctuations across many measurement locations, a world-wide synchronization signal is needed.
The Single-Board RIO (sbRIO) platform from National Instruments offers an excellent balance between off-the-shelf capability and custom design needs in a reasonably small package. The sbRIO provides the processor, memory, and connectivity while the RIO Mezzanine Card (RMC) provides the I/O and signal conditioning needs. See our white paper, Developing Embedded Systems: Comparing Off-the-Shelf to Custom Designs, for a discussion of the benefits of using this approach.
We designed the RMC for the simultaneously-sampled analog inputs and a GPS receiver. The RMC was mounted to an sbRIO-9606. Some design specifications were:
8 analog input channels: simultaneous sampling at 50 kHz, ±10 V range, 16-bit resolution
GPS receiver with Pulse Per Second (PPS) timing signal with 60 ns accuracy
SMA Connector for external GPS active antenna
20 position terminal block for analog inputs and shields, removable for wiring
Operates inside an enclosure with internal temperatures from -40 to 55 °C
An image of the designed RMC and the sbRIO-9606 is shown below. Since the A/Ds reside on the RMC, sbRIO FPGA code reads the sampled data over an SPI bus designed into the RMC. The internal real-time clock, disciplined by the GPS PPS signal, kept all sampled data timestamped to well under ±1 µs, both within a unit and between units, whether those units were feet or thousands of miles apart.
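One common way to achieve this kind of cross-unit alignment is to capture the local clock count at each GPS PPS edge and interpolate sample timestamps between captures, which cancels local oscillator drift second by second. The sketch below illustrates that idea in Python; it is a simplified model, not the deployed FPGA implementation, and the 40 MHz clock in the example is an assumed figure.

```python
import bisect

def discipline_timestamps(sample_ticks, pps_ticks, pps_utc):
    """Map free-running local clock ticks to UTC using GPS PPS captures.

    sample_ticks: local clock tick for each acquired sample
    pps_ticks:    local clock ticks latched at each PPS edge
    pps_utc:      whole-second UTC times of those PPS edges
    Linear interpolation between the two nearest PPS captures removes
    local oscillator drift on a second-by-second basis.
    """
    out = []
    for t in sample_ticks:
        # Find the PPS interval bracketing this sample (clamped at the ends)
        i = max(1, min(bisect.bisect_right(pps_ticks, t), len(pps_ticks) - 1))
        t0, t1 = pps_ticks[i - 1], pps_ticks[i]
        u0, u1 = pps_utc[i - 1], pps_utc[i]
        out.append(u0 + (t - t0) * (u1 - u0) / (t1 - t0))
    return out

# Example with an assumed 40 MHz local clock: a sample halfway between
# two PPS edges lands halfway between the corresponding UTC seconds.
utc = discipline_timestamps([20_000_000], [0, 40_000_000], [100.0, 101.0])
```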
The combination of the sbRIO off-the-shelf platform and the custom RIO mezzanine card (RMC) for I/O makes a powerful, cost-effective, and yet configurable solution for measurements of AC power signals. With the GPS component on the RMC, measurement units can be placed at dispersed locations while still providing adequate synchronization of acquired waveforms for localizing and understanding disturbances in power transmission and distribution, irrespective of any specific application. If you have an embedded monitoring application that you’d like help with, you can reach out to chat here. If you’d like to learn more about our circuit board design capabilities, go here.
Industrial Embedded – Equipment Control – VAR Compensator
Keeping the Electrical Grid Healthy with VAR Compensation
Modular Embedded System Shortens Development Time and Reduces Risk in Static VAR Compensation System
Client: T-Star Engineering & Technical Services: A manufacturer of electrical power delivery equipment.
The U.S. power grid is a large electrical circuit that, although it has some amount of isolation between loads, is certainly interconnected at drop points, which is what customers care about most.
Static VAR Compensators (SVCs) are generally worth considering in scenarios where large electric motors are being utilized (e.g., mills, recycling plants, mines). Problems such as voltage sag, voltage flicker, and current harmonics can reduce motor torque, make lights flicker, and damage equipment.
T-Star has significant domain expertise in stabilizing medium voltage power systems. Viewpoint has significant domain expertise in the realm of measurement and control systems. The team at T-Star needed a well-supported intelligent device for their new generation Static VAR Compensator (SVC). They wanted a highly reliable solution that minimized time-to-market and offered a highly predictable migration path for higher-volume production. They also needed multi-channel precision timing and high-speed logging in a device certified for operation in dirty industrial environments.
Viewpoint was asked to develop the controller for T-Star’s Static VAR Compensator (SVC) using a carefully constructed specification. The chosen controller platform is a National Instruments (NI) Compact RIO due to its modular feature set, networking capabilities, and associated supportability and quality that comes with an industrial-grade off-the-shelf controller. T-Star and Viewpoint have made very complementary GSD (Get Stuff Done) teammates.
As the grid gains intelligence, this class of smart/dynamic power quality system will likely become more critical.
Cabinets for an SVC located at a remote mine in British Columbia
Inside an SVC
The platform supports other future configurations that are outside the phase one scope of this project.
Time-to-market is critical for T-Star. The initial proof of concept was completed in weeks.
The Linux-based OS, well known in the embedded community, provides a rich ecosystem for enhanced usability (e.g. network stack), and real-time operation.
Secure access through VPN with built-in firewall and user account control and permissions allows for remote diagnosis, health monitoring, and gathering of online information.
An FPGA allows for deterministic timing and parallel processing.
With COTS hardware, future upgrades are simplified with code base reuse and recompiling for new hardware.
The NI platform provides a migration path to a lower-cost solution once hardware configurations are locked down and production volumes increase above a certain level.
The NI control hardware is certified (certifications in the domains of CE, FCC, UL, etc.) for marine applications and other challenging environments.
The SVC tunes a highly inductive load by dynamically injecting a variable amount of capacitance based on the measured load. Voltage and current sensors feed a series of control algorithms which determine the voltage and current imbalance in order to inject the appropriate amount of capacitance into the power system. This algorithm acts on a cycle-by-cycle basis. The figure below illustrates the system makeup.
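The core sizing arithmetic behind any capacitive compensation scheme can be sketched with textbook power-factor relations. This is an illustrative simplification, not T-Star's algorithm (which also corrects voltage and current imbalance cycle by cycle); the 500 kW / 0.75 pf example load is hypothetical.

```python
import math

def required_capacitive_kvar(p_kw, pf_measured, pf_target=0.98):
    """Capacitive kVAR needed to raise a lagging power factor to a target.

    p_kw:        measured real power, kW
    pf_measured: measured (lagging) power factor of the inductive load
    pf_target:   desired power factor after compensation (assumed 0.98)
    """
    q_load = p_kw * math.tan(math.acos(pf_measured))  # inductive kVAR drawn
    q_goal = p_kw * math.tan(math.acos(pf_target))    # kVAR allowed at target pf
    return max(0.0, q_load - q_goal)                  # capacitive kVAR to inject

# e.g. a hypothetical 500 kW mill motor at 0.75 pf needs roughly
# 339 kVAR of capacitance to reach 0.98 pf.
kvar = required_capacitive_kvar(500.0, 0.75)
```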
Embedded Control for Industrial Machine – Gear Lapping
VIEWPOINT SYSTEMS IMPROVES GEAR FINISHING USING REAL-TIME EMBEDDED CONTROL SYSTEM WITH NI RIO HARDWARE.
THE GLEASON WORKS’ BEST-IN-CLASS GEAR MANUFACTURING SYSTEMS NOW PRODUCE HIGHER QUALITY GEARS IN 30% LESS TIME
With the embedded control system that Viewpoint created using NI RIO hardware and LabVIEW FPGA, our customers can increase gear quality and save cost at the same time.
Mark Strang, Project Engineer, The Gleason Works
The Gleason Works sought to create a dynamic, torque-controlled lapping solution with responsive, realtime feedback to create better quality gears and reduce cycle time for its gear lapping machines.
Viewpoint Systems provided system integration using NI RIO technology and LabVIEW FPGA code for real-time measurement and control.
Gleason Corporation and The Gleason Works create the machines, tooling, processes, services, and technologies needed to produce the bevel and cylindrical gears found virtually everywhere – from automobiles and airplanes to trucks and tractors, and from giant wind turbines that can power a thousand homes to the lawn mowers and power tools found at these homes. Gear tooth surfaces and spacing are never perfectly machined, and consequently, noise and vibration are often present in applications where the gears are later used. Gears, after the typical heat treatment process, are commonly lapped or ground to smooth the gear teeth surfaces and improve operational characteristics. The goal of lapping is to reduce surface and tooth spacing deviations that may produce noisy gear sets.
Gleason machines lap gears in pairs, the mating gear and pinion members rotating together at a high speed with an abrasive lapping slurry applied. After machining and heat treatment, however, the spacing deviations that need to be lapped are at unknown locations on the gears and can show themselves as run-out (i.e., an off-center axis). To further complicate finding the deviations, the run-out is actually composed of multiple orders, likely making the run-out for each order different than the others.
One conventional approach to lapping employs machines with relatively high-inertia spindles to carry the gearset members. At moderate speeds, this configuration can somewhat reduce spacing errors during lapping, but is far from optimal in refining the tooth surfaces. Another approach employs at least one low-inertia spindle. This configuration can refine tooth surfaces well, but tends to increase spacing errors—especially at higher speeds. In both conventional cases, one spindle is operated in a simple constant torque command mode to control lapping force, but the critically important dynamic torque components are left to passive physics.
To get the best of both worlds, Gleason could no longer rely on passive physics, and turned to Viewpoint Systems to help develop and implement an embedded control system that could measure deviations in real-time and apply dynamic corrective torque.
With this new, patent-pending system founded on embedded control and dynamic real-time process monitoring technologies, Gleason and Viewpoint bring exciting new capabilities to a worldwide and well-established gear finishing process. The unprecedented ability to improve gearset quality during lapping, and to do so at higher speeds provides a winning market proposition—one made possible by intelligent application of today’s leading-edge technologies. With its new solutions, Gleason gear manufacturing systems now produce higher quality gears in 30 percent less time. Throughout the process, Gleason appreciated Viewpoint’s expertise and synergy achieved when working together. More than just an implementer, Viewpoint’s experts worked alongside their own to develop new techniques and solutions in an agile and collaborative environment.
Gleason engaged Viewpoint Systems to implement this real-time measurement and control system because of their expertise with the leading reconfigurable I/O (RIO) hardware from National Instruments. Viewpoint used the NI RIO technology and developed LabVIEW FPGA code to create a real-time measurement and control solution for the lapping machine. Viewpoint equipped an NI cRIO-9076 controller with an NI 9411 digital input (DI) module and an NI 9263 analog output (AO) module. The DI module monitors two digital rotational encoders, one on each spindle carrying the bevel gear set members. Innovative analysis of these angular signals can tease out subtle variations in the average rotational speed. Coupled with sophisticated order analysis, these variations are used to modify the torque applied to the gear set at the proper angular positions and with the appropriate amplitude. Thus, the high-frequency dynamic torque components experienced by the gearset during lapping are no longer dominated by passive physics, but are actively controlled to achieve desired results. Viewpoint created the system to manage all of the measurements, analyses, and torque corrections in the RIO FPGA with specific, efficient coding in LabVIEW FPGA using Viewpoint’s FPGA IP toolset. The cRIO controller provides data collection and even data archiving functions to support other advanced post-processing. The controller also provides an API to control the adaptive lapping process from a supervisory application.
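The idea of teasing rotational-speed variation out of encoder edge timestamps and isolating one order of it can be illustrated with a short sketch. This is a simplified single-order DFT over shaft angle, offered only to make the concept concrete; it is not Gleason's patent-pending algorithm, and the encoder resolution in the example is an assumption.

```python
import cmath
import math

def order_component(edge_times, pulses_per_rev, order):
    """Amplitude and phase of one rotational order of speed variation.

    edge_times:     one timestamp per encoder pulse, covering whole revolutions
    pulses_per_rev: encoder resolution (assumed value in any example)
    order:          rotational order of interest (1 = once per revolution)
    """
    # Instantaneous speed between successive edges (pulses per second)
    speeds = [1.0 / (t1 - t0) for t0, t1 in zip(edge_times, edge_times[1:])]
    mean = sum(speeds) / len(speeds)
    # Each interval i is centred at a known shaft angle of (i + 0.5) pulses,
    # so a single-frequency DFT over angle isolates the requested order.
    acc = 0j
    for i, s in enumerate(speeds):
        angle = 2 * math.pi * (i + 0.5) / pulses_per_rev
        acc += (s - mean) * cmath.exp(-1j * order * angle)
    acc *= 2.0 / len(speeds)
    return abs(acc), cmath.phase(acc)
```

A controller could then use the recovered amplitude and phase to schedule corrective torque at the matching angular positions.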
Client: A major manufacturer of aircraft landing systems
A major manufacturer of aircraft landing equipment needed to develop a means of endurance and fatigue testing new designs for aircraft steering. The actuators involved in steering the nose landing gear (NLG) required precise and reliable control through thousands of steering cycles.
Control loops needed to close in under 1 ms.
Prior systems were handled manually without real-time control and monitoring.
Our customer designed and built a test rig to provide the hydraulics and environmental conditions for the endurance testing on the NLG. Viewpoint Systems supplied the electronic data acquisition and control hardware coupled with real-time software to provide the required fast control loops. The configuration and execution of the 1000s of steering cycles were managed by the same data acquisition and control system through a set of configuration screens that allowed specification of turn rates, min/max angles, drive and resistive torque settings, and so on.
The various PID control loops were also configurable, along with the gain scheduling required under different operating conditions.
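Gain scheduling of the kind described above typically means swapping or interpolating PID gains as an operating condition changes. The sketch below shows one common pattern, linear interpolation between tabulated gain sets; it is illustrative only, and the schedule values and the choice of scheduling variable are hypothetical.

```python
def scheduled_gains(schedule, condition):
    """Pick PID gains from a gain schedule by linear interpolation.

    schedule:  list of (condition_value, (kp, ki, kd)) pairs, sorted
               ascending by condition_value (hypothetical table)
    condition: current operating condition, e.g. hydraulic temperature
    """
    if condition <= schedule[0][0]:
        return schedule[0][1]          # clamp below the table
    if condition >= schedule[-1][0]:
        return schedule[-1][1]         # clamp above the table
    for (x0, g0), (x1, g1) in zip(schedule, schedule[1:]):
        if x0 <= condition <= x1:
            f = (condition - x0) / (x1 - x0)
            return tuple(a + f * (b - a) for a, b in zip(g0, g1))

# Hypothetical schedule keyed on temperature: gains blend smoothly
# between the tabulated operating points.
gains = scheduled_gains([(0.0, (1.0, 0.0, 0.0)), (100.0, (3.0, 2.0, 0.0))], 50.0)
```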
The environmental conditions were supported by controlling a temperature chamber through ramp and soak settings occurring during the steering tests.
Measurements on the steering performance were collected from commanded setpoints, sensor readings, and controller outputs during the entire test run.
Alarm and fault conditions, such as force exceedance, were monitored continuously during operation so that the system could safely run unattended.
The entire system underwent an extremely rigorous acceptance testing procedure to verify proper and safe operation.
Arbitrary Load and Position Profiles
Flight Position Control
Load Position/Force Control
Endurance/Flight Schedule Execution
Deterministic RT for DAQ and PID Control
PXI/SCXI Hybrid RT Chassis
Discrete Pump Skid Interface
Custom Control Panel/Console
Prior to deployment of our system, setup of a test was much more manual and operators needed to be around to monitor operation.
With our new system, complete endurance testing could be specified and executed with minimal supervision. Furthermore, the tight integration of real-time control and coordinated data collection made report creation much simpler than before.
The rigorous acceptance test gave trustworthiness to the data and allowed the design engineers to validate performance more quickly than the prior semi-automatic and manual methods of operation.
Setup of tests has been improved from prior operations. The endurance testing itself operated over a huge number of cycles lasting weeks to months between scheduled lubrication and maintenance.
The deployed system measures performance during the entire testing, even between the scheduled downtime.
Shortened Product Development Cycle for Industrial Equipment – A Leak Tester
Leak testing sounds simple. It seems like all you have to do is wait for the pressure (or vacuum) to drop by a detectable amount and estimate the leak rate from the time it takes to reach that decrease. But in an assembly line it's not simple, mostly because everything must happen as quickly as possible.
Many manufactured products need to be tested for leaks to be sure they hold pressure or vacuum. Examples are fuel cells, braking systems, air bags, air conditioner components, balloon catheters, and so on. The list is almost endless.
Our client needed to develop a new leak tester with lower cost, more sensitivity, and the same small size.
The manufacturer wants the test to run as fast as possible – production volumes can’t tolerate a long wait to sense a leak. And, don’t forget, these products are not supposed to leak, so the “bad” ones leak very s-l-o-w-l-y. This is where the complexity in performing a leak test in a manufacturing environment arises.
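The basic pressure-decay arithmetic makes the sensitivity problem concrete: a slow leak produces only a tiny pressure drop within a short test window. The sketch below uses the standard ideal-gas relation; real production testers (including this one) also compensate for temperature and pressurization-settling effects, and the example part volume and drop are hypothetical.

```python
def leak_rate_sccm(volume_cc, dp_pa, dt_s, p_atm_pa=101325.0):
    """Pressure-decay leak rate estimate.

    volume_cc: internal test volume, cubic centimetres
    dp_pa:     observed pressure drop over the measurement window, Pa
    dt_s:      measurement window, seconds
    Returns leak rate in standard cubic centimetres per minute (sccm).
    """
    # Ideal-gas: gas lost, referenced to standard pressure, is V * dP / P_atm
    lost_scc = volume_cc * dp_pa / p_atm_pa
    return lost_scc / dt_s * 60.0

# e.g. a hypothetical 50 cc part losing 20 Pa in 10 s leaks about 0.06 sccm;
# resolving that quickly demands pascal-level pressure measurement.
rate = leak_rate_sccm(50.0, 20.0, 10.0)
```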
The tester must be super-sensitive – to be able to sense a leak as soon as possible balanced by cost constraints and the need for the tester to be physically small so it fits on the production line
The tester needs to provide a solid user experience – so it needs to be robust and smart so the production test operator can just use it without hassle.
Our client had been building commercial leak testers for many years. Their development tools were built on in-house hardware designs and a software library for making measurements and interacting with the operator via buttons and a display.
The typical development cycle stretched beyond a year, and they wanted to shorten that timeline by utilizing more Commercial Off The Shelf (COTS) components, especially since embedded controllers had improved dramatically over recent years and they did not sell a huge number of these specialized products per year. Put another way, they were looking for a supplier that would pay better attention to them: they had been competing for responsiveness from microcontroller vendors accustomed to customers purchasing hundreds of thousands of microcontrollers per year. They were ready for a change in design tools and subcomponent vendors.
Another issue was the need for extremely accurate pressure detection (so the leak test could be fast!). They wanted to go with COTS components, but they could not find a COTS detection system (sensor and digitizer combination) with enough accuracy and responsiveness. They needed a hybrid approach of COTS for the controller and custom for the signal conditioning and acquisition I/O.
The prototype unit was designed, built, and ready for testing of the leak testing capability in about 2 months after the requirements were completed. The collapsed single-board solution was then designed, built, and unit tested about 1 ½ months later.
Leak Tester Sub-assembly and enclosure
The custom circuitry combined with some proprietary algorithms executed on the NI SOM RIO was able to measure with about 10 times better sensitivity than the previous generations. This sensitivity translated into faster leak measurement times – a real selling point to our customer’s customers.
The customer was ready to do their validation in about 4 months after we had the green light to build. The ready-to-go VERDI prototyping system was a huge time-saver.
We worked with our client to develop a system based on the NI RIO platform from National Instruments (NI). We initially considered the NI sbRIO but chose the NI SOM RIO (sbRIO-9651) because of its size and slightly lower costs.
After an initial review of the customer’s design goals and requirements, a concerted effort was spent to morph those goals and adjust those requirements by iterative discussion between both of our engineering teams. It was truly a collaborative requirements gathering and design activity. We brought our knowledge of the NI SOM RIO I/O and LabVIEW programmability capabilities and the customer shared their understanding of leak detection and their customer needs.
VERDI – Chassis with modules
Enter VERDI – Once the requirements and initial design were complete, we designed and built a custom A/D and signal conditioning circuit board that could interface with the NI SOM RIO. The initial version of this board was designed to connect to our VERDI prototyping system so we could rapidly validate the performance of the circuitry without needing to build the complete single-board system (with SOM and other circuitry all on one board).
After some tweaking to improve this custom circuitry, we essentially copied the board layout for all the necessary I/O (SOM controller board, digital I/O, display I/O, and the custom leak-detection circuitry) onto a single-board system. This effort went quickly, since we had a large amount of ready-to-reuse hardware designs already developed for the VERDI prototyping system.
Industrial Embedded Monitoring – Remote Structural Health Monitoring
Using a cRIO to remotely assess structural health
By connecting these systems with a host PC, we can monitor continuous vibration activity and alarm conditions on a variety of structures despite inclement weather.
Continuously monitoring the structural health of the Long Island Railroad (LIRR) Viaduct despite the relative inaccessibility of the structure.
Using CompactRIO, LabVIEW FPGA, and the LabVIEW Digital Filter Design Toolkit to measure the modal analysis of vibration data generated from ambient excitation, capture this data remotely, and analyze significant events.
Engineers use structural vibrations to assess the condition of many structures and machines, including buildings, bridges, dams, towers, cranes, and mountings. Although we have had tools to monitor structural vibration for decades, these tools restrict data collection to short durations of high-fidelity waveforms or longer durations of summarized power-in-frequency-band results. Many structures vibrate in meaningful ways only in the presence of ambient forces such as wind, vehicle activity, nearby construction, or random events such as earthquakes and tornados. Therefore, data collection needs to be active during these events.
Due to recent improvements in memory storage, processor speed, and wideband wireless communications technology, we can collect high-fidelity waveforms over long periods. We can also communicate to host PCs that aggregate structural vibration data across multiple collection locations, providing permanent data collection and superior analysis and reporting capabilities.
STRAAM Corporation, a leader in structural integrity assessment, and Viewpoint Systems, a Select National Instruments Alliance Partner, collaborated to develop a system that functions outdoors and in other less-accessible sites and maintains the capabilities of the available PC-based solution. Ultimately, we produced an enhanced version of STRAAM’s SKG CMS™ system to install on a Long Island railroad bridge.
The system needed to perform the following operations:
Collect data from accelerometers and other environmental sensors
Store weeks of data locally at full acquisition rates
Analyze custom data in real time
Publish summary statistics periodically to the host
Contain flexible architecture to handle future capabilities
Ensure secure user access control
We chose a system based on the NI CompactRIO platform and dynamic signal acquisition (DSA) C Series modules. The CompactRIO and associated C Series signal conditioning modules have an operating temperature range of -40 to 70 °C, well within typical environmental extremes for most installation locations. Additionally, the CompactRIO controller has no moving parts, increasing the mean time between failure and ensuring it can withstand physical mishandling during shipment and installation. For software, we decided to use the NI LabVIEW Real-Time Module and the LabVIEW FPGA Module. We used LabVIEW FPGA for basic signal acquisition as well as some custom antialiasing filtering to allow for sampling rates below the capabilities of the DSA module.
Figure 1 – Equipment mounted to LIRR Support Beam
Data Acquisition and Filtering
The DSA module acquired acceleration signals via special sensors, supplied by STRAAM, that output information about tilt and acceleration. Because large structures resonate at low frequencies, it is important that these sensors have extremely low noise, high dynamic range, and low frequency response to gather information about structures at less than 1 Hz. The low frequency range and long-term data storage need combine to create a maximum data collection rate frequency of 200 samples per second (S/s). The NI 9239 does not sample that slowly due to its delta-sigma converter technology, so we sampled at 2,000 S/s and used lowpass digital filtering on the field-programmable gate array (FPGA) to produce an antialiased signal at 200 S/s. Simple subsampling through decimation would violate the Nyquist criterion. Using the LabVIEW Digital Filter Design Toolkit, we produced a 28-tap infinite impulse response (IIR) filter with a 3 dB roll-off at 0.8 times the sample rate with a stopband attenuation greater than 90 dB. The Digital Filter Design Toolkit includes tools to automatically generate code to deploy the filter to the FPGA. We carefully selected fixed-point arithmetic to ensure proper operation without using excessive FPGA resources. The final filter was a 24-bit fixed-point solution with a 4-bit mantissa.
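The filter-then-decimate step described above can be illustrated with a plain windowed-sinc lowpass and a 10:1 sample discard. This is a pure-Python sketch of the concept only: the deployed system used a Digital Filter Design Toolkit filter implemented in fixed point on the FPGA, and the tap count and band edge below are illustrative assumptions.

```python
import math

def decimate_10x(samples, ntaps=63, fs=2000.0, cutoff=80.0):
    """10:1 decimation with a Hamming-windowed-sinc lowpass.

    Filtering before discarding samples keeps energy above the output
    Nyquist frequency (100 Hz here) from aliasing into the 200 S/s stream,
    which is exactly what naive subsampling would fail to do.
    """
    fc = cutoff / fs                       # normalised cutoff frequency
    mid = (ntaps - 1) / 2.0
    taps = []
    for n in range(ntaps):
        x = n - mid
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (ntaps - 1))  # Hamming
        taps.append(h * w)
    total = sum(taps)
    taps = [t / total for t in taps]       # unity gain at DC
    filtered = [
        sum(taps[k] * samples[i - k] for k in range(ntaps))
        for i in range(ntaps - 1, len(samples))
    ]
    return filtered[::10]                  # keep every 10th filtered sample
```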
Figure 2: Remote Front Panel Displaying Acceleration Waveform Capture
Configuration, Signal Processing, and Alerts
STRAAM uses proprietary analysis routines, based on the structure’s resonant frequencies, to extract relevant information from the continuous stream of acceleration data. Because ambient energy excites the structures, we analyzed some initial data to locate these resonances. After this initial period, we configured the CompactRIO to perform the proprietary analyses based on the location of these resonances. We handled all activity in this initial setup remotely via wireless communications. We connect to CompactRIO over a wireless connection, then to a LabVIEW remote panel where we initially acquire and assign resonance bands.
The signal processing requires the spectral power and time-domain structure of the waveforms inside those resonant bands. The CompactRIO processor and FPGA module can calculate fast Fourier transform (FFT)-based power spectrums and perform time-domain filtering calculation so we can base calculations on the complicated algorithms provided by STRAAM. Furthermore, the large CompactRIO RAM can archive raw acceleration waveforms for later retrieval. The LabVIEW development environment greatly simplifies adjusting these calculations. We apply additional calculations to identify noteworthy events to alert the engineers when important conditions occur. These conditions may signify the presence of a meaningful ambient excitation or that considerable changes to the structure have occurred.
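The spectral-power part of such processing can be sketched as summing one-sided DFT bin powers across a resonance band. The snippet below is an illustrative pure-Python version of that generic step; STRAAM's proprietary analyses are not reproduced here, and the band edges in any example are assumed values.

```python
import cmath
import math

def band_power(samples, fs, f_lo, f_hi):
    """Power of a waveform within one frequency band, via a direct DFT.

    samples:    acceleration waveform (assumed to span whole bin periods)
    fs:         sample rate, S/s
    f_lo, f_hi: band edges, Hz
    """
    n = len(samples)
    k_lo = max(1, int(math.ceil(f_lo * n / fs)))
    k_hi = min(n // 2, int(math.floor(f_hi * n / fs)))
    power = 0.0
    for k in range(k_lo, k_hi + 1):
        # One DFT bin at frequency k * fs / n
        acc = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                  for i, x in enumerate(samples))
        power += 2 * abs(acc) ** 2 / n ** 2   # one-sided power in this bin
    return power
```

An alerting layer can then compare each band's power against thresholds set during the initial resonance-location phase.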
In order to successfully operate, this system needs to communicate effectively to the host PC. Because the system is deployed in almost-inaccessible and outdoor locations, all interactions with the system should occur remotely. Using cellular modems, the system connects via TCP/IP to upload important information, issue event alerts, and allow remote configuration. We designed the LabVIEW application to send periodic summary information via custom binary messages to the host with information about the condition of the structure and the CompactRIO system. The host then tallies this information along with all other SKG CMS™ systems deployed in the field. In addition to this summary information, the host can pull raw waveform data from the CompactRIO RAM. To avoid tampering and unauthorized access, we password protected all connections.
Figure 3 – Data File Configuration Screen
We have successfully installed several functional SKG CMS™ systems based on the CompactRIO platform. By connecting these systems with a host PC, we can monitor continuous vibration activity and alarm conditions on a variety of structures despite inclement weather. Our customers enjoy the benefits of modern Ethernet-driven, Web-based connectivity to verify the status of their structures and we enjoy the benefits of the rugged, reliable, low-cost, and reprogrammable CompactRIO system for data collection.
Play back digital test patterns for the RF receiver at real-time rates to understand bit-error rates
Understand effects of RF chain prior to digitization
Allow for platform to assist with algorithm development, debug and optimization
We utilized off-the-shelf hardware combined with custom software and had a working system after ~7 man-weeks of effort. The DRAP system records and plays back digital data only, with A/D conversion being handled by the DUT. The system was developed on the National Instruments PXI Express platform. A RAID array of disks is used to continuously record data. Data manipulation is performed on a Xilinx Kintex-7 FPGA that forms the basis of a National Instruments High Speed Serial board. The DRAP system is connected to the RF receiver using standard SFP+ connectors. A UI connects to the system locally or over Ethernet to monitor and control DRAP during record/playback. The customer can also control the system via an API so that it can be integrated into a larger test system.
Allows for repeatable data through the processing chain.
Can re-sample data, inject new headers into data packets, and re-pack new data.
Replacing Obsolete Custom Electronics with cRIOs in High-Power Capacitor Testing
Modular Embedded cRIO Systems Shorten Development and Reduce Risk in Complex PC-based Test System
Client: A major manufacturer of electrical power generation and distribution equipment.
This project involved retrofitting a test system used to verify operation of a high-power capacitor used in electrical power distribution. This system was originally built around 1990. Critical sections of the original test system relied on custom, wire-wrapped analog and digital circuitry to process, analyze, and isolate the high-voltage and high-current signals created by the capacitor. Analog filters, rectifiers, and comparators produced pass/fail status signals. A master PC, other measurement and control equipment, the analog circuits, and a six-position carousel were integrated to create the entire automated test and control system.
For each unit under test (UUT), test specifications are obtained from a Manufacturing Execution System (MES) and cached locally. The subsystems at each carousel position are designed to run independently. This parallel capability allows greater throughput and reduced test time per capacitor unit. In addition, as different capacitor models move through the carousel stations, the test parameters and conditions must match the particular model being tested.
Test results for each UUT are pushed back to the MES system for record retention and data mining. The existing MES interfaces were retained exactly for the retrofit.
All capacitors require 100% testing prior to shipment, so the test system is critical for the facility operation. Two or even three shifts are common depending on production needs, and the facility cannot afford any significant downtime. Thus, one challenge was to design and build a test system that was robust from the moment it went live.
Another huge challenge was the lack of documentation on the existing system, requiring a sizable amount of reverse engineering to understand the test system operation before development on the new system could begin.
Furthermore, one of the most important challenges surrounded replacement of substantial amounts of original test equipment before the new test equipment could be installed. Thus, we absolutely had to minimize the time and risk in this upgrade changeover.
A schematic of the overall system architecture is shown in the figure. The major components of the system are:
Master PC for supervisory control and test execution management
NI cRIOs with FPGAs and Ethernet for independent yet PC-supervised operation
Station-specific FPGA code for replacing wire-wrap circuitry functionality
Integration with existing MES, safety equipment, tooling, and measurement hardware
The architecture chosen was made very modular by the capabilities offered by the cRIO. The Master PC interfaced with station-specific measurement instrumentation as needed, such as GPIB controlled equipment, and coordinated control and outcomes from the cRIOs. This additional equipment is not shown in the figure.
The Master PC coordinated all the activities including interfacing with the existing MES database and printers at the manufacturing facility. In addition, this PC provided the operator interface and, when needed, access to engineering screen on a diagnostic laptop.
The cRIOs were essential to the success of this test system. Each cRIO functioned as the equivalent of a high-speed standalone instrument.
The cRIOs at each carousel test position had to provide the following features:
Digital I/O for machine feedback, safeties, and fault conditions
State machines to coordinate with external commands and signals
Perform numeric calculations to emulate the old analog circuitry
Control loops for currents associated with voltages needed by different capacitors
Communication support with the master PC
Computation and detection of internal fault and UUT pass/fail conditions
We were able to duplicate the behavior of the wire-wrapped circuitry by converting the schematic diagrams of these circuits into FPGA code and then tweaking that code to mimic the actual signals we measured with data acquisition equipment on the original test hardware.
The outputs of the circuitry were reconstructed on the FPGA with band-pass filtering, calibration compensation, point-to-point RMS, and phase & frequency functions. This functionality was implemented in fixed-point math and the 24-bit inputs on the A/D provided sufficient resolution and bandwidth for a faithful reproduction of the electronic circuitry. These embedded cRIOs provided a very effective solution to what otherwise might have required another set of costly and rigid custom circuits.
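The point-to-point RMS function mentioned above is, conceptually, a sliding-window RMS updated one sample at a time. The sketch below shows that pattern in floating-point Python; the FPGA version used fixed-point math, and the window length (e.g. one power-line cycle of samples) is an assumption here.

```python
import math
from collections import deque

def make_pt_by_pt_rms(window):
    """Return a streaming point-by-point RMS function over a sliding window.

    Each call with a new sample updates a running sum of squares, so the
    per-sample cost is constant: one square added, one removed.
    """
    buf = deque(maxlen=window)
    state = {"sumsq": 0.0}

    def step(x):
        if len(buf) == window:
            state["sumsq"] -= buf[0] ** 2   # oldest sample leaves the window
        buf.append(x)
        state["sumsq"] += x ** 2
        return math.sqrt(state["sumsq"] / len(buf))

    return step

# With a window of one full sine cycle, the output converges to the
# familiar amplitude / sqrt(2) RMS value.
rms = make_pt_by_pt_rms(100)
```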
Finally, for optimizing the task of replacing the old equipment, we used a set of cRIOs, not shown in Figure 1, to provide Hardware-In-the-Loop (HIL) simulation of the manufacturing and measurement equipment. These cRIOs imitated the rest of the machine by providing inputs to and reacting to outputs from the embedded cRIO controllers, thus supporting comprehensive verification of the new test system before the tear-out of the existing hardware. Furthermore, these HIL cRIOs enabled fault injection for conditions that would have been difficult and possibly dangerous to create on the actual equipment.