01. What is Battery Charge and Discharge Equipment?
Battery charge and discharge equipment refers to power electronics devices that perform charging and discharging tests on various types of batteries, including power batteries, energy storage batteries, lithium-ion batteries (such as ternary lithium and lithium iron phosphate), nickel-metal hydride batteries, flow batteries, aluminum batteries, and sodium-ion batteries.
02. What Are the Purposes of Battery Charge and Discharge Testing?
The main purposes of battery testing generally fall into two categories:
Understanding the Battery's Characteristics (from the battery's perspective): Testing reveals essential parameters of the battery such as capacity, internal resistance, voltage characteristics, rate performance, temperature characteristics, cycle life, and energy density. These parameters are needed not only to verify whether the tested battery meets its original design goals but also to optimize management and control over the battery's service life.
Evaluating the Battery’s Ability to Meet Application Requirements (from the application perspective): This type of testing can be understood as determining the characteristics a battery should meet based on specific application needs and then verifying through testing whether the battery meets the required standards. For example, automotive manufacturers may design low-temperature cold start tests based on driving conditions, energy efficiency tests based on power consumption, and power performance tests based on scenarios like hill climbing or acceleration. Safety tests like overcharge, over-discharge, short-circuit, over-temperature, and puncture tests are also part of this process.
03. Key Testing Functions of Charge and Discharge Equipment
Capacity Testing: Capacity testing uses a static capacity test (SCT) method to measure the usable capacity (including energy) of the battery under different environmental temperatures. While testing methods vary slightly across different companies and standards, the general approach remains the same. For example, in a standard environment (25°C), the battery is fully charged according to the manufacturer's specifications. After sufficient rest in the test environment, the battery is discharged at a 1C rate until the cutoff voltage (2.5V) is reached, and the released capacity (energy) is recorded. Multiple tests are typically conducted to improve accuracy.
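To make the bookkeeping concrete, here is a minimal sketch of how logged test data turns into a capacity (and energy) figure by integrating current and power over the discharge; the (time, current, voltage) log format and the synthetic data are assumptions for illustration.

```python
# Minimal sketch of coulomb/energy counting for a static capacity test (SCT).
# Assumes a log of (time_s, current_a, voltage_v) samples taken during a
# constant-current discharge; the log below is synthetic, not measured data.

def discharged_capacity_and_energy(samples):
    """Integrate current and power over time (trapezoidal rule).

    samples: list of (time_s, current_a, voltage_v), discharge current positive.
    Returns (capacity_ah, energy_wh).
    """
    capacity_as = 0.0   # ampere-seconds
    energy_ws = 0.0     # watt-seconds (joules)
    for (t0, i0, v0), (t1, i1, v1) in zip(samples, samples[1:]):
        dt = t1 - t0
        capacity_as += 0.5 * (i0 + i1) * dt
        energy_ws += 0.5 * (i0 * v0 + i1 * v1) * dt
    return capacity_as / 3600.0, energy_ws / 3600.0

# Illustrative: a 280 Ah cell discharged at 1C (280 A) for one hour,
# with the voltage sagging linearly toward the 2.5 V cutoff.
log = [(t, 280.0, 3.3 - 0.8 * t / 3600.0) for t in range(0, 3601, 10)]
cap_ah, energy_wh = discharged_capacity_and_energy(log)
print(f"capacity = {cap_ah:.1f} Ah, energy = {energy_wh:.1f} Wh")
```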
The output of capacity testing is a family of discharge curves, one per test temperature, plotted with capacity on the X-axis and voltage (V) on the Y-axis. The X-axis endpoint of each curve, where the 1C discharge reaches the cutoff voltage, gives the usable capacity at that temperature, so together these endpoints describe the capacity-temperature relationship. The area under each discharge curve is the discharged energy, so the ratio of these areas indicates the usable energy ratio at different temperatures.

Rate Performance Testing: Rate performance testing requires setting different charge and discharge rates based on the battery's power characteristics (energy-type vs. power-type). For example, the charge current is set to a 0.5C constant current charge, while discharge rates of 0.2C, 0.5C, 1C, and 2C are applied to obtain discharge curves at different rates, along with constant current charge curves.
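Whether the curves come from different temperatures or different discharge rates, the usable energy is the area under each voltage-vs-capacity curve. A minimal sketch of that comparison, with made-up curve points:

```python
# Sketch: comparing usable energy across test conditions from discharge curves.
# Each curve is a list of (capacity_ah, voltage_v) points as plotted in the
# capacity-voltage chart described above; the curves here are invented.

def curve_energy_wh(curve):
    """Area under a voltage-vs-capacity curve = discharged energy (Wh)."""
    return sum(0.5 * (v0 + v1) * (q1 - q0)
               for (q0, v0), (q1, v1) in zip(curve, curve[1:]))

curve_25c = [(0, 3.30), (140, 3.20), (280, 2.50)]       # 1C discharge at 25 °C
curve_minus10c = [(0, 3.15), (100, 3.00), (210, 2.50)]  # 1C discharge at -10 °C

e25 = curve_energy_wh(curve_25c)
e_m10 = curve_energy_wh(curve_minus10c)
print(f"usable energy at -10 °C is {e_m10 / e25:.0%} of the 25 °C value")
```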
Self-Discharge Testing: Self-discharge testing determines the capacity loss of a battery after a set period of storage. There are various testing methods, such as the capacity recovery method outlined in GB/T 31486. This test involves storing a fully charged battery at room temperature for 28 days, then discharging it and recording the retained capacity (Cr). The battery is then retested using SCT to determine its current capacity (Ct), and the self-discharge rate is calculated by comparing Cr with Ct. To shorten the testing cycle, accelerated high-temperature storage and Open Circuit Voltage (OCV) methods are sometimes used, such as storing the battery at 55°C for 7 days and estimating the self-discharge rate from the change in OCV.
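As a sketch of the arithmetic, assuming Cr and Ct from the 28-day test above; the normalization to a monthly rate is our own convention for illustration, not part of the standard:

```python
# Sketch of the capacity-retention calculation from a GB/T 31486-style test.
# cr_ah is the capacity retained after 28 days of storage, ct_ah the capacity
# measured afterwards by a fresh SCT; the numbers below are invented.

def self_discharge(cr_ah, ct_ah, days=28):
    retention = cr_ah / ct_ah                     # charge retention ratio
    monthly_loss = (1.0 - retention) * 30 / days  # normalized to a 30-day month
    return retention, monthly_loss

retention, monthly_loss = self_discharge(cr_ah=269.0, ct_ah=278.0)
print(f"retention = {retention:.1%}, ~{monthly_loss:.1%} self-discharge per month")
```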
DC Internal Resistance (DCIR) Testing: In this test, a current step change is applied across the battery terminals, resulting in a change in voltage. The DC internal resistance is calculated from the current and voltage differences (ΔI and ΔU) between two load points: R_DC = ΔU / ΔI = (U2 − U1) / (I2 − I1).
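A minimal sketch of that calculation, with invented measurement values; the fixed sampling delay after the step (e.g., 10 s or 30 s) depends on the standard being followed:

```python
# Sketch of a DCIR calculation from a current-step measurement.
# (i1, u1) is the steady point before the step, (i2, u2) the point a fixed
# time after the step; all numbers here are illustrative.

def dc_internal_resistance(i1_a, u1_v, i2_a, u2_v):
    return (u2_v - u1_v) / (i2_a - i1_a)  # ohms; sign follows the test setup

# Stepping the discharge current from 28 A to 280 A drops the terminal
# voltage from 3.28 V to 3.03 V in this made-up example.
r = dc_internal_resistance(28.0, 3.28, 280.0, 3.03)
print(f"DCIR = {abs(r) * 1000:.2f} mOhm")
```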
Working Condition Simulation, Pulse, and Ramp Testing: These tests replay current or power profiles derived from actual usage scenarios, such as drive cycles, pulse sequences, and ramp loads, to evaluate the battery's performance under realistic conditions.
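One common way to hand such a profile to a tester is as a step sequence; the sketch below uses an invented (duration, current) format and made-up pulse values, not any particular vendor's schedule format:

```python
# Sketch of a pulse/working-condition profile as a sequence of
# (duration_s, current_a) steps, discharge current positive.

pulse_profile = [
    (10, 280.0),   # 10 s discharge pulse at 1C
    (40, 0.0),     # 40 s rest
    (10, -196.0),  # 10 s regen/charge pulse at 0.7C
    (40, 0.0),     # rest
]

def profile_charge_throughput_ah(profile):
    """Total absolute charge moved per cycle, a rough measure of cycle stress."""
    return sum(abs(i) * t for t, i in profile) / 3600.0

print(f"throughput per cycle = {profile_charge_throughput_ah(pulse_profile):.2f} Ah")
```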
04. Selection of Charge and Discharge Equipment
The selection of charge and discharge equipment primarily depends on four factors: voltage, current, channel count, and power.
Voltage: Determined by the battery chemistry (such as ternary or lithium iron phosphate) and the number of cells in series, since the maximum cell voltage times the series count sets the highest voltage the equipment must reach.
Current: Based on the battery cell's capacity and how many parallel cells are used, as well as the desired charge and discharge rates.
Channel Count: Based on daily production capacity and how many sets of batteries need to be tested per day. A typical unit has 2 or 4 channels.
Power: Based on the voltage and current to be tested; some equipment is rated for constant power, while others are rated at full voltage times full current.
For example, in a containerized energy storage system, if each single pack is a 52-series, 1-parallel (52S1P) 280 Ah lithium iron phosphate battery, assuming a 1C charge/discharge rate, the equipment selection for a single pack would be:
Voltage: 52*3.65 = 189.8V, so select equipment rated 200V or above.
Current: 280*1 = 280A, so the current per channel should be 300A or higher.
Power: 189.8*280 ≈ 53.1kW per channel, so a 60kW channel rating is appropriate. For an entire system of 8 packs in series (tested at 0.5C):
Voltage: 52*3.65*8 = 1518.4V, so choose 1600V or 1650V equipment.
Current: 280*0.5 = 140A; select 150A or 200A per channel, or even 300A or 400A if 1C charging is required.
Power: 1518.4*140 ≈ 212.6kW, with a constant-power selection of 176kW for charge/discharge.
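The sizing arithmetic above generalizes directly; a small sketch, with helper and parameter names of our own:

```python
# Sketch of the equipment-sizing arithmetic from the example above.
# Assumes an LFP cell with a 3.65 V maximum charge voltage; the function
# name and signature are illustrative, not from any vendor tool.

def size_channel(series, parallel, cell_ah, c_rate, vmax_per_cell=3.65):
    v_max = series * vmax_per_cell       # highest voltage the channel must reach
    i_max = parallel * cell_ah * c_rate  # required current per channel
    p_max_kw = v_max * i_max / 1000.0    # worst-case power per channel
    return v_max, i_max, p_max_kw

# Single pack (52S1P, 280 Ah) at 1C:
print(size_channel(52, 1, 280, 1.0))      # -> (189.8, 280.0, ~53.1 kW)

# Full string of 8 packs in series (416S) at 0.5C:
print(size_channel(52 * 8, 1, 280, 0.5))  # -> (1518.4, 140.0, ~212.6 kW)
```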