Drawing on the existing content of the security knowledge network, we have compiled common video surveillance terms used abroad to help industry professionals quickly understand the surveillance industry. The following is the second part:
MAC address: The Media Access Control Address is also known as the LAN Address, Ethernet Address, or Physical Address. It identifies the location of a device on a network. In the OSI model, the network layer (layer 3) handles IP addresses, while the data link layer (layer 2) handles MAC addresses. A MAC address uniquely identifies a network card on the network: if a device has one or more network cards, each card has its own unique MAC address.
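MAC addresses appear in several notations (colon-, hyphen-, or dot-separated). A small helper, sketched here in Python with illustrative function names, normalizes them and extracts the vendor's OUI (the first three octets):

```python
import re

def normalize_mac(mac):
    """Normalize a MAC address to colon-separated lowercase form.

    Accepts common notations such as 00-1A-2B-3C-4D-5E,
    001A.2B3C.4D5E, or 001a2b3c4d5e.
    """
    digits = re.sub(r"[^0-9a-fA-F]", "", mac)
    if len(digits) != 12:
        raise ValueError(f"not a valid MAC address: {mac!r}")
    digits = digits.lower()
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def oui(mac):
    """Return the first three octets, the vendor's OUI."""
    return normalize_mac(mac)[:8]
```

For instance, `normalize_mac("00-1A-2B-3C-4D-5E")` yields `"00:1a:2b:3c:4d:5e"`, and the OUI portion can be looked up in the IEEE vendor registry.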
Machine learning: Machine learning is a multi-disciplinary field drawing on probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how computers can simulate or implement human learning behaviors in order to acquire new knowledge or skills and to reorganize existing knowledge structures, continuously improving their own performance. It is the core of artificial intelligence and the fundamental way to make computers intelligent.
Video bit rate: Video bit rate refers to the number of bits transmitted per second, measured in bps (bits per second). The higher the bit rate, the more data is transmitted per second and, in general, the clearer the picture. In video, the bit rate is the amount of binary data produced per unit of time after the optical signal is converted into a digital image signal, and it serves as an indirect indicator of video quality. The principle is the same as in communications, where the bit rate (code rate) is the amount of binary data per unit of time after an analog signal is converted into a digital signal.
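Because bit rate is simply bits per second, recording-storage planning is a straightforward multiplication. A minimal sketch (the function name is illustrative):

```python
def storage_gb(bitrate_kbps, hours, cameras=1):
    """Estimated recording footprint in decimal gigabytes (GB):
    bits = rate (bit/s) x seconds x cameras; divide by 8 for bytes,
    then by 1e9 for GB."""
    bits = bitrate_kbps * 1000 * hours * 3600 * cameras
    return bits / 8 / 1e9

# One 4 Mbps camera recording for 24 hours needs about 43.2 GB.
```

Scaling this by camera count and retention days gives a rough capacity estimate for an NVR or NAS.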
MP: Abbreviation for megapixel.
MSRP: Manufacturer's suggested retail price.
NAS: Network Attached Storage is a device connected to the network with a data storage function, also called "network storage". It is a dedicated data storage server. Being data-centric, it completely separates storage devices from servers and manages data centrally, which frees up bandwidth, improves performance, reduces total cost of ownership, and protects investment. In surveillance applications, NAS refers to small devices used to store video on the network.
NIC: Network Interface Controller (NIC) is also known as Network Adapter, Network Interface Card, and LAN Adapter. It is a piece of computer hardware designed to allow computers to communicate over computer networks.
NTP: Network Time Protocol (NTP) is a protocol for synchronizing computer clocks. It enables a computer to synchronize with a time server or reference clock source (such as a quartz clock or GPS). It provides high-precision time correction (within 1 millisecond on a LAN and within tens of milliseconds on a WAN) and can guard against malicious protocol attacks through cryptographic authentication. NTP is designed to provide accurate and robust time service over the disordered Internet environment. Accurate time is essential if video is to be used as evidence.
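NTP timestamps count seconds from 1 January 1900, so converting to Unix time is a fixed offset plus a binary fraction. The sketch below shows that conversion and builds a minimal SNTP client packet (field layout as I recall it from the SNTP specification; the helper names are illustrative):

```python
import struct

NTP_UNIX_OFFSET = 2_208_988_800  # seconds from 1900-01-01 to 1970-01-01

def ntp_to_unix(seconds, fraction):
    """Convert a 64-bit NTP timestamp (32-bit seconds since 1900 plus
    a 32-bit binary fraction of a second) to Unix epoch time."""
    return seconds - NTP_UNIX_OFFSET + fraction / 2**32

def sntp_client_packet():
    """Build a minimal 48-byte SNTP request: the first byte packs
    LI=0, version=4, mode=3 (client); the remaining bytes are zero."""
    return struct.pack("!B47x", (0 << 6) | (4 << 3) | 3)
```

Sending that packet to UDP port 123 of a time server and parsing the transmit timestamp out of the reply is all a basic SNTP client does.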
NVR: Network Video Recorder is the storage and forwarding part of the network video surveillance system. NVR works with video encoders or network cameras to complete video recording, storage and forwarding functions.
ONVIF: Open Network Video Interface Forum is a global open industry forum. The goal is to promote a global open standard for the development and use of security product interfaces. ONVIF has created a standard for how IP products in video surveillance and other physical security domains communicate with each other. ONVIF was founded in 2008 by Axis Communications, Bosch Security Systems and Sony.
Profile A: It covers common routine access control functions. It is suitable for security personnel, receptionists, and human resources specialists who are responsible for granting and revoking employee credentials, creating and updating schedules, and changing access control permissions within the system. It strengthens client-side functionality and system management to promote interoperability in the access control market.
Profile C: It enables system integrators, service providers, and consultants to achieve interoperability between the client and the Physical Access Control System (PACS) device as well as the network video system. The standard improves compatibility between access control front-end devices and terminals and simplifies installation procedures. Required training time can also be significantly reduced, since multiple specialized monitoring devices for handling different PACS devices are no longer needed. As part of a network access control system, Profile C compatible devices provide information about access points and entry points in the system. Profile C compatible software clients can monitor and raise alarms on access point and entry point conditions (such as unlocking and entry), and they can also provide basic access control functions such as granting access and locking/unlocking doors.
Profile D: It is suitable for input interfaces of peripheral devices such as token readers (used for reading cards, keys, mobile phones or barcodes), biometric readers (used for fingerprint recognition), cameras (used for iris, facial or license plate recognition), buttons, sensors (used for identifying lock status, door status, temperature or action) and part of output devices (such as locks, displays and LEDs).
Profile G: It includes technical specifications for on-board video storage, search, retrieval, and media playback functions. Profile G further improves the interoperability of on-site recording and video storage for a variety of surveillance equipment and systems such as video cameras, encoders, network video recorders (NVR), video management systems, building management systems, and physical security information management (PSIM) systems.
Profile M: It addresses metadata and events for analytics applications, and it supports analytics configuration and information queries for metadata, as well as filtering and streaming of metadata. It has interfaces for generic object classification and for specifying metadata such as geographic locations, vehicles, license plates, faces, and human bodies. If compliant products support functions such as media profile management, video streaming, adding images to metadata streams, event handling, or rule configuration, they can also support the Profile M interfaces for these functions. If a compliant product supports object counting (such as people or vehicles), license plate recognition or facial recognition analytics, or the MQTT (Message Queuing Telemetry Transport) protocol used by Internet of Things systems, the Profile M event handling interface can also be used for these functions.
Profile Q: It provides innovative functions for system integrators and end users to simplify the installation and connection of systems and devices through an easy installation mechanism and basic device configuration. Profile Q also supports Transport Layer Security (TLS); this secure communication protocol enables ONVIF compliant devices to communicate with clients over the network without the threat of tampering and eavesdropping.
Profile S: It describes the common functions shared by ONVIF compliant video management systems and devices, including IP cameras and encoders that send, configure, request, or control media streams over IP networks. The profile covers specific functions such as pan/tilt/zoom control, audio streaming, and relay outputs.
Profile T: It is designed for IP-based video systems. Profile T supports video streaming capabilities such as the H.264 and H.265 encoding formats, imaging settings, and alarm events such as motion and tamper detection. Mandatory features on devices also include on-screen display and metadata streaming, and mandatory features on clients also include PTZ control. Profile T also covers the ONVIF specifications for HTTPS streaming, PTZ configuration, motion zone configuration, digital inputs and relay outputs, as well as two-way audio for compliant devices and clients that support such features.
PoE: Power over Ethernet refers to technology that transmits data signals to IP-based terminals (such as IP phones, wireless LAN access points, and network cameras) while supplying DC power to those devices, without any changes to the existing Ethernet Cat.5 cabling infrastructure. PoE is also known as Power over LAN (PoL) or Active Ethernet. It is a standardized specification for simultaneously transmitting data and electrical power over standard Ethernet cabling while maintaining compatibility with existing Ethernet systems and users.
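Sizing a PoE switch for cameras comes down to per-port power versus the switch's total budget. A rough sketch; the per-port wattages below are the nominal PSE output values for each IEEE PoE standard as I recall them, so treat them as assumptions and verify against your equipment's datasheet:

```python
# Nominal PSE output per port in watts (quoted from memory).
PSE_WATTS = {
    "802.3af": 15.4,    # Type 1, "PoE"
    "802.3at": 30.0,    # Type 2, "PoE+"
    "802.3bt-3": 60.0,  # Type 3
    "802.3bt-4": 90.0,  # Type 4
}

def ports_supported(budget_watts, standard):
    """How many ports at full draw a switch's PoE budget can serve."""
    return int(budget_watts // PSE_WATTS[standard])

# A switch with a 370 W PoE budget can run 12 PoE+ cameras at full draw.
```

In practice cameras rarely draw the class maximum, so measured draw often allows more ports, but budgeting at the class value is the safe default.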
PPM: Pixels Per Meter is a measure of image quality, calculated as image width (in pixels) divided by the width of the field of view (in meters).
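The formula above applies directly. The sketch below also compares the result against the DORI pixel-density guidelines (Detect/Observe/Recognize/Identify) associated with IEC 62676-4; the exact threshold values are quoted from memory and should be treated as assumptions:

```python
def ppm(image_width_px, fov_width_m):
    """Pixels per meter across the camera's field of view."""
    return image_width_px / fov_width_m

# Indicative DORI thresholds in px/m, ordered least to most demanding
# (quoted from memory; treat the exact numbers as assumptions).
DORI = {"detect": 25, "observe": 63, "recognize": 125, "identify": 250}

def best_task(image_width_px, fov_width_m):
    """Most demanding DORI task the pixel density supports
    (relies on DORI being ordered least to most demanding)."""
    density = ppm(image_width_px, fov_width_m)
    supported = [task for task, need in DORI.items() if density >= need]
    return supported[-1] if supported else "none"

# A 1920-pixel-wide image over a 10 m scene gives 192 px/m:
# enough to recognize a known person, but not to identify a stranger.
```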
PSIM: A Physical Security Information Management system is a software solution that is not tied to any single vendor or device management system. PSIM software enables different systems to work together and takes input from different sensors, making cooperation on a single platform possible for multiple parties, such as first responders, schools, government agencies, corporate branches, and each decentralized site.
PTZ: Abbreviation for Pan/Tilt/Zoom, representing omnidirectional (left/right and up/down) camera movement plus lens zoom control.
RAID: Redundant Arrays of Independent Disks (RAID) means "an array of independent disks with redundant capability".
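Usable capacity for the common RAID levels follows directly from how each level lays out redundancy. A simplified sketch (it ignores formatting overhead and hot spares, and models RAID 1 as a single mirrored volume):

```python
def usable_tb(level, disks, disk_tb):
    """Usable capacity in TB for common RAID levels (simplified)."""
    if level == 0:
        return disks * disk_tb        # striping, no redundancy
    if level == 1:
        return disk_tb                # mirror: one disk's worth usable
    if level == 5:
        return (disks - 1) * disk_tb  # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_tb  # two disks' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# Eight 4 TB disks: RAID 5 yields 28 TB usable, RAID 6 yields 24 TB.
```

RAID 6 is common in surveillance storage because the second parity disk tolerates a drive failure during the long rebuilds large video arrays require.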
RTSP: The Real Time Streaming Protocol (RFC 2326) is an application-layer protocol in the TCP/IP protocol suite, an IETF RFC standard developed by Columbia University, Netscape, and RealNetworks. The protocol defines how one-to-many applications can efficiently deliver multimedia data over IP networks. Architecturally, RTSP sits above RTP and RTCP, and it uses TCP or UDP for data transmission.
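RTSP requests are plain text with HTTP-like framing: a request line, headers, and a blank line, all CRLF-terminated. A minimal serializer, sketched with an illustrative function name and a documentation-only placeholder address:

```python
def rtsp_request(method, url, cseq, extra_headers=None):
    """Serialize a minimal RTSP/1.0 request: request line, headers,
    then a blank line, with CRLF line endings (RFC 2326 framing)."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (extra_headers or {}).items():
        lines.append(f"{name}: {value}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# A session typically opens with OPTIONS and DESCRIBE; 192.0.2.1 is
# a documentation placeholder, not a real camera.
req = rtsp_request("DESCRIBE", "rtsp://192.0.2.1/stream1", 2,
                   {"Accept": "application/sdp"})
```

The DESCRIBE response carries an SDP document describing the media streams, after which SETUP and PLAY negotiate and start the RTP transport.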
Thermal imaging: A non-contact detection technology that senses infrared energy (heat) and converts it into an electrical signal, producing a thermal image on a display and calculating temperature values.
Software definition: Using software to define the functions of a system and to empower the hardware, maximizing the system's operating efficiency and energy efficiency; software-defined cameras are one example.
SNR: Signal-to-Noise Ratio. It directly reflects a camera image's immunity to noise, that is, whether the picture is clean and free of bright noise speckles. It is defined as the ratio of the video amplifier's output signal power to the noise power output at the same time, and it is usually expressed in decibels (dB).
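The decibel conversion works on either a power ratio or an amplitude ratio. A minimal sketch:

```python
import math

def snr_db(signal_power, noise_power):
    """SNR in decibels from a power ratio: 10 * log10(S/N)."""
    return 10 * math.log10(signal_power / noise_power)

def snr_db_from_amplitude(signal_rms, noise_rms):
    """SNR in decibels from an amplitude (RMS) ratio: 20 * log10(S/N)."""
    return 20 * math.log10(signal_rms / noise_rms)

# A camera spec of 50 dB means the signal power is 100,000 times
# the noise power: snr_db(100_000, 1) -> 50.0
```

As a rule of thumb, every additional 10 dB means a tenfold improvement in the signal-to-noise power ratio.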
SAN: A Storage Area Network (SAN) uses Fibre Channel (FC) technology, connecting storage arrays to server hosts through FC switches to build a network dedicated to data storage. After more than ten years of development, SAN is quite mature and has become a de facto industry standard (although vendors' Fibre Channel switching technologies are not fully identical, and each has compatibility requirements for its servers and SAN storage).
SNMP: Simple Network Management Protocol (SNMP) is a standard protocol designed to manage network nodes (servers, workstations, routers, switches, and hubs) in IP networks. It is an application-layer protocol.
SoC: Generally called a system-on-chip, it is both a product, an application-specific integrated circuit that contains a complete system with all of its software embedded, and a technology covering the whole process from defining the system's functions, through hardware/software partitioning, to completing the design.
Static IP address assignment: Assigning a fixed IP address to each computer. The advantage is ease of management, especially on LANs where network access is controlled by IP address: traffic can be accounted for per fixed IP address or address group, which avoids the complexity of authenticating users on every Internet access under per-user billing and spares users from frequently forgotten passwords.
UPS: Uninterruptible Power Supply contains energy storage devices. It is mainly used to provide an uninterrupted power supply to some equipment that requires high power stability.
VCA: Video Content Analysis.
VBR: Variable Bit Rate has no fixed bit rate; the encoder decides the bit rate on the fly, prioritizing quality while keeping the file size in check.
VMD: Video Motion Detection, often used for unattended surveillance and automatic alarms. Frames captured by the camera are computed and compared by the CPU according to a defined algorithm. When the picture changes, for example when someone walks past or the camera is moved, the computed difference exceeds a threshold and signals the system to respond automatically.
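The threshold-and-compare logic described above can be sketched as simple frame differencing (pure Python, with illustrative threshold values; real implementations add noise filtering and region masks):

```python
def motion_detected(prev, curr, pixel_threshold=25, area_ratio=0.01):
    """Flag motion between two grayscale frames given as equal-length
    flat lists of 0-255 intensities: count pixels whose absolute
    difference exceeds pixel_threshold, and report motion when the
    changed fraction of the frame exceeds area_ratio."""
    changed = sum(1 for a, b in zip(prev, curr)
                  if abs(a - b) > pixel_threshold)
    return changed / len(curr) > area_ratio

# A static scene, then a bright blob covering 2% of the frame appears.
frame_a = [10] * 10_000
frame_b = frame_a[:]
frame_b[:200] = [200] * 200
```

The two tunable thresholds mirror real VMD settings: per-pixel sensitivity and the minimum changed area needed to raise an alarm.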
VMS: Video Management System.
VOIP: Voice over Internet Protocol (VoIP) is a voice call technology that uses Internet Protocol (IP) to achieve voice calls and multimedia meetings, which is to communicate via the Internet. It has other informal names such as IP telephony, Internet telephony, broadband telephony, and broadband phone service.
VSaaS: Video Surveillance as a Service, analogous to Software as a Service (SaaS) in the software industry. Just as desktop software has moved to online services in the Internet era, professional video surveillance is gradually moving to online services as well.
Deep learning: Deep learning (DL) is a newer research direction within machine learning (ML), introduced to bring machine learning closer to its original goal: artificial intelligence (AI). Deep learning learns the inherent patterns and representation hierarchies of sample data, and the information obtained during learning is of great help in interpreting data such as text, images, and sound. Its ultimate goal is to give machines a human-like ability to analyze and learn, so that they can recognize data such as text, images, and sound.