FAQ


What do you mean by DVR resolution?

CIF (Common Intermediate Format) standardizes the horizontal and vertical pixel resolutions of YUV sequences in video signals. QCIF means “Quarter CIF” and SQCIF means “Sub-quarter CIF”; both formats are common in video encoding. As “quarter” implies, the height and width of the frame are halved, giving one fourth of the area. CIF is commonly defined as one quarter of the “full” resolution of the video system it is intended for (listed below as 4CIF). Note that this full resolution does not match what is currently referred to as D1 video (based on Sony’s D1 format).


Format   Resolution (PAL-based)
SQCIF    128 × 96
QCIF     176 × 144
QCIF+    176 × 220
CIF      352 × 288
4CIF     704 × 576
9CIF     1056 × 864
16CIF    1408 × 1152
D1       720 × 576
SIF*     352 × 288
SIF      360 × 288

* = Rounded to the nearest multiple of 16.
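The quarter/sub-quarter relationships can be verified directly from the table: halving both dimensions divides the pixel area by four. A quick check in Python, using the PAL values above:

```python
# CIF-family resolutions (PAL-based), width x height in pixels,
# taken from the table above
formats = {
    "SQCIF": (128, 96),
    "QCIF": (176, 144),
    "CIF": (352, 288),
    "4CIF": (704, 576),
    "16CIF": (1408, 1152),
    "D1": (720, 576),
}

def area(name):
    w, h = formats[name]
    return w * h

# QCIF halves both CIF dimensions, so its area is one quarter of CIF's
assert area("CIF") == 4 * area("QCIF")
# 4CIF doubles both CIF dimensions, quadrupling the area
assert area("4CIF") == 4 * area("CIF")
```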

What is H.264?

H.264, MPEG-4 Part 10, or AVC, for Advanced Video Coding, is a digital video codec standard which is noted for achieving very high data compression. It was written by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a collective partnership effort known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 Part 10 standard (formally, ISO/IEC 14496-10) are technically identical. The final drafting work on the first version of the standard was completed in May of 2003.

H.264 is a name related to the ITU-T line of H.26x video standards, while AVC relates to the ISO/IEC MPEG side of the partnership project that completed the work on the standard, after earlier development done in the ITU-T as a project called H.26L. It is usual to refer to the standard as H.264/AVC (or AVC/H.264 or H.264/MPEG-4 AVC or MPEG-4/H.264 AVC) to emphasize the common heritage. The name H.26L, harkening back to its ITU-T history, is far less common, but still used. Occasionally it has also been referred to as “the JVT codec”, in reference to the JVT organization that developed it. (Such partnership and multiple naming is not unprecedented: the video codec standard known as MPEG-2 also arose from a partnership between MPEG and the ITU-T, and MPEG-2 video is also known in the ITU-T community as H.262.)

The intent of the H.264/AVC project has been to create a standard capable of providing good video quality at bit rates substantially lower (e.g., half or less) than what previous standards would need (e.g., relative to MPEG-2, H.263, or MPEG-4 Part 2), and to do so without so much of an increase in complexity as to make the design impractical (excessively expensive) to implement. An additional goal was to do this in a flexible way that would allow the standard to be applied to a very wide variety of applications (e.g., for both low and high bit rates, and low and high resolution video) and to work well on a very wide variety of networks and systems (e.g., for broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems).

The JVT recently completed the development of some extensions to the original standard that are known as the Fidelity Range Extensions (FRExt). These extensions support higher-fidelity video coding by supporting increased sample accuracy (including 10-bit and 12-bit coding) and higher-resolution color information (including sampling structures known as YUV 4:2:2 and YUV 4:4:4). Several other features are also included in the Fidelity Range Extensions project (such as adaptive switching between 4×4 and 8×8 integer transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, support of additional color spaces, and a residual color transform). The design work on the Fidelity Range Extensions was completed in July of 2004, and the drafting was finished in September of 2004.

Since the completion of the original version of the standard in May of 2003, the JVT has also completed two generations of “corrigendum” errata corrections to the text of the standard.

How can you explain IP / TCP / DHCP / UDP?

IP:
(pronounced as separate letters) Short for Internet Protocol. IP specifies the format of packets, also called datagrams, and the addressing scheme. Most networks combine IP with a higher-level protocol called Transmission Control Protocol (TCP), which establishes a virtual connection between a destination and a source.

IP by itself is something like the postal system. It allows you to address a package and drop it in the system, but there’s no direct link between you and the recipient. TCP/IP, on the other hand, establishes a connection between two hosts so that they can send messages back and forth for a period of time.

The current version of IP is IPv4. A new version, called IPv6 or IPng, is under development.

Transmission Control Protocol (TCP):

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite, often simply referred to as TCP/IP. Using TCP, applications on networked hosts can create connections to one another, over which they can exchange streams of data using Stream Sockets. The protocol guarantees reliable and in-order delivery of data from sender to receiver. TCP also distinguishes data for multiple connections by concurrent applications (e.g., Web server and e-mail server) running on the same host.
TCP supports many of the Internet’s most popular application protocols and resulting applications, including the World Wide Web, e-mail, File Transfer Protocol and Secure Shell.

In the Internet protocol suite, TCP is the intermediate layer between the Internet Protocol (IP) below it, and an application above it. Applications often need reliable pipe-like connections to each other, whereas the Internet Protocol does not provide such streams, but rather only best effort delivery (i.e., unreliable packets). TCP does the task of the transport layer in the simplified OSI model of computer networks. The other main transport-level Internet protocol is UDP.

Applications send streams of octets (8-bit bytes) to TCP for delivery through the network, and TCP divides the byte stream into appropriately sized segments (usually delineated by the maximum transmission unit (MTU) size of the data link layer of the network to which the computer is attached). TCP then passes the resulting packets to the Internet Protocol, for delivery through a network to the TCP module of the entity at the other end. TCP checks to make sure that no packets are lost by giving each packet a sequence number, which is also used to make sure that the data are delivered to the entity at the other end in the correct order. The TCP module at the far end sends back an acknowledgement for packets which have been successfully received; a timer at the sending TCP will cause a timeout if an acknowledgement is not received within a reasonable round-trip time (or RTT), and the (presumably lost) data will then be re-transmitted. The TCP checks that no bytes are damaged by using a checksum; one is computed at the sender for each block of data before it is sent, and checked at the receiver.
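The stream-socket model described above can be sketched in a few lines of Python. This is a minimal loopback echo, not a production server: the OS picks an ephemeral port, a throwaway thread plays the far-end TCP module, and the client sends a byte stream that TCP delivers intact and in order:

```python
import socket
import threading

# Server side: bind to an OS-chosen port on loopback and listen
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    # Accept one connection and echo the received bytes back
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(data)  # TCP guarantees in-order, checksummed delivery
    conn.close()
    srv.close()

t = threading.Thread(target=serve)
t.start()

# Client side: establish the connection, send a stream of octets, read the echo
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello tcp")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)  # b'hello tcp'
```

The segmentation, sequence numbering, acknowledgements and retransmission described above all happen inside the kernel's TCP implementation; the application only ever sees the byte stream.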

Dynamic Host Configuration Protocol (DHCP):
DHCP is a set of rules used by communications devices such as a computer, router or network adapter to allow the device to request and obtain an IP address from a server which has a list of addresses available for assignment.

DHCP is a protocol used by networked computers (clients) to obtain IP addresses and other parameters such as the default gateway, subnet mask, and IP addresses of DNS servers from a DHCP server. It facilitates access to a network because these settings would otherwise have to be made manually for the client to participate in the network.

The DHCP server ensures that all IP addresses are unique, e.g., no IP address is assigned to a second client while the first client’s assignment is valid (its lease has not expired). Thus IP address pool management is done by the server and not by a human network administrator.

DHCP emerged as a standard protocol in October 1993. As of 2006, RFC 2131 (dated March 1997) provides the latest DHCP definition. DHCP functionally became a successor to the older BOOTP protocol, whose leases were given for infinite time and which did not support options. Due to the backward compatibility of DHCP, very few networks continue to use pure BOOTP.
The latest standard describing DHCPv6 (DHCP in an IPv6 environment) appeared in July 2003 as RFC 3315.

User Datagram Protocol (UDP):

The User Datagram Protocol (UDP) is one of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages sometimes known as datagrams (using Datagram Sockets) to one another. UDP is sometimes called the Universal Datagram Protocol or Unreliable Datagram Protocol.

UDP does not provide the reliability and ordering that TCP does. Datagrams may arrive out of order, appear duplicated, or go missing without notice. Without the overhead of checking whether every packet actually arrived, UDP is faster and more efficient for many lightweight or time-sensitive purposes. Also, its stateless nature is useful for servers that answer small queries from huge numbers of clients. Compared to TCP, UDP is required for broadcast (send to all on local network) and multicast (send to all subscribers).

Common network applications that use UDP include the Domain Name System (DNS), streaming media applications such as IPTV, Voice over IP (VoIP), Trivial File Transfer Protocol (TFTP) and online games.
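The connectionless datagram model contrasts neatly with the TCP example: there is no handshake, and each send is a single self-contained message. A minimal loopback sketch in Python (names and messages are illustrative; on a real network, either datagram could silently go missing):

```python
import socket

# A UDP "server" is just a socket bound to a port; no connection setup
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

# The client fires a single datagram at the server's address
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"ping?", ("127.0.0.1", port))

# Each recvfrom returns one whole datagram plus the sender's address
data, addr = srv.recvfrom(1024)
srv.sendto(b"pong!", addr)

reply, _ = cli.recvfrom(1024)
print(reply)  # b'pong!'
srv.close()
cli.close()
```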

Please explain the networking basics.

Here are some of the fundamental parts of a network:

Network – A network is a group of computers connected together in a way that allows information to be exchanged between the computers.
Node – A node is anything that is connected to the network. While a node is typically a computer, it can also be something like a printer or CD-ROM tower.
Segment – A segment is any portion of a network that is separated, by a switch, bridge or router, from other parts of the network.
Backbone – The backbone is the main cabling of a network that all of the segments connect to. Typically, the backbone is capable of carrying more information than the individual segments. For example, each segment may have a transfer rate of 10 Mbps (megabits per second), while the backbone may operate at 100 Mbps.
Topology – Topology is the way that each node is physically connected to the network (more on this in the next section).
Local Area Network (LAN) – A LAN is a network of computers that are in the same general physical location, usually within a building or a campus. If the computers are far apart (such as across town or in different cities), then a Wide Area Network (WAN) is typically used.
Network Interface Card (NIC) – Every computer (and most other devices) is connected to a network through an NIC. In most desktop computers, this is an Ethernet card (normally 10 or 100 Mbps) that is plugged into a slot on the computer’s motherboard.
Media Access Control (MAC) address – This is the physical address of any device — such as the NIC in a computer — on the network. The MAC address, which is made up of two equal parts, is 6 bytes long. The first 3 bytes identify the company that made the NIC. The second 3 bytes are the serial number of the NIC itself.
Unicast – A unicast is a transmission from one node addressed specifically to another node.
Multicast – In a multicast, a node sends a packet addressed to a special group address. Devices that are interested in this group register to receive packets addressed to the group. An example might be a Cisco router sending out an update to all of the other Cisco routers.
Broadcast – In a broadcast, a node sends out a packet that is intended for transmission to all other nodes on the network.
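The MAC address split described above (3 vendor bytes plus 3 device-specific bytes) can be illustrated with a small Python helper. The sample address is made up:

```python
def split_mac(mac):
    """Split a MAC address into its vendor (OUI) and device-specific halves."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "a MAC address is 6 bytes long"
    oui = ":".join(octets[:3])     # first 3 bytes: identify the manufacturer
    serial = ":".join(octets[3:])  # last 3 bytes: serial number of the NIC
    return oui, serial

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui)     # 00:1a:2b
print(serial)  # 3c:4d:5e
```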

What is the use of Ping?

Ping is a computer network tool used to test whether a particular host is reachable across an IP network. Ping works by sending ICMP “echo request” packets to the target host and listening for ICMP “echo response” replies. Using interval timing and response rate, ping estimates the round-trip time and packet loss rate between hosts.
History
Mike Muuss wrote the program in December 1983 as a tool to troubleshoot odd behavior on an IP network. He named it after the pulses of sound made by sonar, since its operation is analogous to active sonar in submarines: an operator issues a pulse of energy (a network packet) at the target, which bounces off the target and is received by the operator. Later, David L. Mills provided a backronym, “Packet Internet Grouper (Groper)”; other people have used “Packed Internet Gopher”, after the small rodents.
The usefulness of ping in assisting the “diagnosis” of Internet connectivity issues was impaired from late 2003, when a number of Internet Service Providers began filtering out ICMP Type 8 (echo request) messages at their network boundaries. This was partly due to the increasing use of ping for target reconnaissance, for example by Internet worms such as Welchia that flood the Internet with ping requests in order to locate new hosts to infect. Not only did the availability of ping responses leak information to an attacker, it added to the overall load on networks, causing problems for routers across the Internet.
There are two schools of thought concerning ICMP on the public Internet: those who say it should be largely disabled to enable network ‘stealth’, and those who say it should be enabled to allow proper Internet diagnostics.
Sample pinging
The following is a sample output of pinging www.google.com under Linux with the iputils version of ping:
$ ping www.google.com
PING www.l.google.com (64.233.183.103) 56(84) bytes of data.
64 bytes from 64.233.183.103: icmp_seq=1 ttl=246 time=22.2 ms
64 bytes from 64.233.183.103: icmp_seq=2 ttl=245 time=25.3 ms
64 bytes from 64.233.183.103: icmp_seq=3 ttl=245 time=22.7 ms
64 bytes from 64.233.183.103: icmp_seq=4 ttl=246 time=25.6 ms
64 bytes from 64.233.183.103: icmp_seq=5 ttl=246 time=25.3 ms
64 bytes from 64.233.183.103: icmp_seq=6 ttl=245 time=25.4 ms
64 bytes from 64.233.183.103: icmp_seq=7 ttl=245 time=25.4 ms
64 bytes from 64.233.183.103: icmp_seq=8 ttl=245 time=21.8 ms
64 bytes from 64.233.183.103: icmp_seq=9 ttl=245 time=25.7 ms
64 bytes from 64.233.183.103: icmp_seq=10 ttl=246 time=21.9 ms
10 packets transmitted, 10 received, 0% packet loss, time 9008ms
rtt min/avg/max/mdev = 21.896/24.187/25.718/1.619 ms
This output shows that www.google.com is a DNS CNAME record for www.l.google.com which then resolves to 64.233.183.103. The output then shows the results of making 10 pings to 64.233.183.103 with the results summarized at the end.
• smallest ping time was 21.896 milliseconds
• average ping time was 24.187 milliseconds
• maximum ping time was 25.718 milliseconds
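The summary line can be reproduced from the individual samples. A sketch in Python, using the ten RTT values printed above (the printed samples are rounded to one decimal place, so the recomputed figures differ slightly from ping's own summary, which works from unrounded timings):

```python
import statistics

# RTT samples (ms) taken from the Linux ping output above
rtts = [22.2, 25.3, 22.7, 25.6, 25.3, 25.4, 25.4, 21.8, 25.7, 21.9]

rtt_min = min(rtts)
rtt_max = max(rtts)
rtt_avg = sum(rtts) / len(rtts)
# iputils' "mdev" is essentially the population standard deviation of the samples
rtt_mdev = statistics.pstdev(rtts)

print(f"rtt min/avg/max/mdev = {rtt_min}/{rtt_avg:.3f}/{rtt_max}/{rtt_mdev:.3f} ms")
```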
The following is a sample output of pinging www.google.com under Microsoft Windows XP with its built-in version of ping:
C:\>ping www.google.com
Pinging www.l.google.com [64.233.183.103] with 32 bytes of data:
Reply from 64.233.183.103: bytes=32 time=25ms TTL=245
Reply from 64.233.183.103: bytes=32 time=22ms TTL=245
Reply from 64.233.183.103: bytes=32 time=25ms TTL=246
Reply from 64.233.183.103: bytes=32 time=22ms TTL=246
Ping statistics for 64.233.183.103:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 22ms, Maximum = 25ms, Average = 23ms
This output shows that www.google.com is a DNS CNAME record for www.l.google.com which then resolves to 64.233.183.103. The output then shows the results of making 4 pings to 64.233.183.103 with the results summarized automatically at the end.
• smallest ping time was 22 milliseconds
• average ping time was 23 milliseconds
• maximum ping time was 25 milliseconds
Ping of death
A ping of death (abbreviated “POD”) is a type of attack on a computer that involves sending a malformed or otherwise malicious ping to a computer. A ping is normally 64 bytes in size; many computer systems cannot handle a ping larger than the maximum IP packet size which is 65,535 bytes. Sending a ping of this size often crashes the target computer.
This bug has traditionally been relatively easy to exploit. Generally, sending a ping packet of a size such as 65,536 bytes is illegal according to networking protocol, but a packet of such a size can be sent if it is fragmented; when the target computer reassembles the packet, a buffer overflow can occur, which often causes a system crash.
This exploit has affected a wide variety of systems, including Unix, Linux, Mac, Windows, printers, and routers. However, most systems have been fixed since around 1997-1998, so this bug is mostly historical.
In recent years a different kind of ping attack has become widespread: ping flooding. The idea there is simply to flood the victim with so much ping traffic that normal traffic fails to reach the system.

What is the purpose of Bridge?

A bridge is a device that connects two local-area networks (LANs), or two segments of the same LAN, that use the same protocol, such as Ethernet or Token Ring.

What is the applicability of Router?

A router is a device that forwards data packets along networks. A router is connected to at least two networks, commonly two LANs or WANs or a LAN and its ISP’s network. Routers are located at gateways, the places where two or more networks connect.
Routers use headers and forwarding tables to determine the best path for forwarding the packets, and they use protocols such as ICMP to communicate with each other and configure the best route between any two hosts.
Very little filtering of data is done through routers.

What is a Network Video Recorder (NVR)?

A Network Video Recorder receives video from IP cameras over the network and stores it on hard-disk media. Display, playback and other features let you take full advantage of IP megapixel technology, with every frame in detail.

NVRs typically combine state-of-the-art system architecture in both hardware and software, with software designed for scalability. They offer high-quality integration and high-performance computing platforms for mission-critical surveillance solutions, especially full-HD and megapixel video surveillance with open-platform camera integration, support for ONVIF 2.0 and above, and standard streaming protocols.

Explain the importance of Gateway?

a. A node on a network that serves as an entrance to another network. In enterprises, the gateway is the computer that routes the traffic from a workstation to the outside network that is serving the Web pages. In homes, the gateway is the ISP that connects the user to the internet.

In enterprises, the gateway node often acts as a proxy server and a firewall. The gateway is also associated with both a router, which uses headers and forwarding tables to determine where packets are sent, and a switch, which provides the actual path for the packet in and out of the gateway.

b. A computer system located on earth that switches data signals and voice signals between satellites and terrestrial networks.

c. An earlier term for router; this sense is now obsolete, as router has become the common term.

What is the purpose of a Domain Name System (DNS)?

A network lookup service, the Domain Name System (DNS) provides the ability to map domain names to specific IP addresses. The purpose of domain names is that humans remember names better than arbitrary strings of numbers. DNS also allows an entity to change or reassign its IP addresses without having to notify anyone except its DNS server.

Another reason for DNS is to allow, for example, a web site to be hosted on multiple servers (each with its own IP address) which allows for rudimentary load balancing.
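The name-to-address mapping is a one-call operation from an application's point of view. A minimal Python illustration; "localhost" is used here so the lookup works without network access, but a public name such as example.com resolves through DNS in exactly the same way:

```python
import socket

# Resolve a hostname to an IPv4 address, the same way a browser would.
# gethostbyname returns a single dotted-quad string.
addr = socket.gethostbyname("localhost")
print(addr)  # 127.0.0.1
```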

What is the difference between a Dynamic and Static IP Addresses?

IP addresses may either be assigned permanently (for example, to a server which is always found at the same address) or temporarily, from a pool of available addresses.

Dynamic

Dynamic IP addresses are issued to identify non-permanent devices such as personal computers or clients. Internet Service Providers (ISPs) use dynamic allocation to assign addresses from a small pool to a larger number of customers. This is used for dial-up access, WiFi and other temporary connections, allowing a portable computer user to automatically connect to a variety of services without needing to know the addressing details of each network.

Users with a dynamic IP may have trouble running their own email server, as in recent years services such as mail-abuse.org have collected lists of dynamic IP ranges and blocked them.

Dynamic IP address allocation requires a server to listen for requests and then assign an address. Addresses can be assigned at random or based on a predetermined policy. The most common protocol used to dynamically assign addresses is the Dynamic Host Configuration Protocol (DHCP). DHCP includes a lease time which determines how long the requester can use an address before requesting its renewal, allowing addresses to be reclaimed if the requester goes offline.

It is common to use dynamic allocation for private networks. Since private networks rarely have an address shortage, it is possible to assign the same address to the same computer on each request or to define an extended lease time. These two methods simulate static IP address assignment.

Static

Static IP addresses are used to identify semi-permanent devices with constant IP addresses. Servers typically use static IP addresses. The static address can be configured directly on the device or as part of a central DHCP configuration which associates the device’s MAC address with a static address.

Allocation

IP addresses are allocated by a Regional Internet Registry (RIR). There are five RIRs, which are delegated by the Internet Assigned Numbers Authority. IP addresses are allocated in blocks to an organization or business, which then suballocates them to other organizations.

For example, en.wikipedia.org resolves to 207.142.131.245, which belongs to the 207.142.0.0 – 207.142.255.255 block operated by AGIS. This block belongs to the 207.0.0.0 – 207.255.255.255 block operated by American Registry for Internet Numbers (the RIR for North America).

Since IP address blocks are assigned to RIRs, which are geographically distributed, it is somewhat possible to geolocate an IP address based on its registration.

Subnetworks are used to further divide networks.

IP versions

The Internet Protocol has two primary versions in use. Each version has its own definition of an IP address. Because of its prevalence, “IP address” typically refers to those defined by IPv4.

IP version 4

IPv4 uses 32-bit (4 byte) addresses, which limits the address space to 4,294,967,296 possible unique addresses. However, many are reserved for special purposes, such as private networks (~18 million addresses) or multicast addresses (~1 million addresses). This reduces the number of addresses that can be allocated as public Internet addresses, and as the number of addresses available is consumed, an IPv4 address shortage appears to be inevitable in the long run. This limitation has helped stimulate the push towards IPv6, which is currently in the early stages of deployment and is currently the only contender to replace IPv4.
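The address-space arithmetic above is easy to check with Python's standard ipaddress module; an IPv4 address is just a 32-bit integer written in dotted-quad form:

```python
import ipaddress

# 32-bit addresses give 2**32 possible values
total = 2 ** 32
print(total)  # 4294967296

# An IPv4 address round-trips between dotted-quad and integer form
addr = ipaddress.IPv4Address("207.142.131.245")
as_int = int(addr)
assert ipaddress.IPv4Address(as_int) == addr

# Reserved ranges reduce the usable public space, e.g. the private blocks
print(ipaddress.ip_address("192.168.1.1").is_private)  # True
```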

IP version 5

What would be considered IPv5 existed only as an experimental non-IP real-time streaming protocol called ST2, described in RFC 1819. In keeping with standard UNIX release conventions, in which odd-numbered versions are considered experimental, this version was never intended for general use. RSVP has replaced it to some degree.

IP version 6

In IPv6, the new (but not yet widely deployed) standard protocol for the Internet, addresses are 128 bits wide, which, even with generous assignment of netblocks, should suffice for the foreseeable future. In theory, there would be exactly 2^128, or about 3.403 × 10^38, unique host interface addresses. That translates into 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. This large address space will be sparsely populated, which makes it possible to again encode more routing information into the addresses themselves.

Addressing

A version 6 address is written as eight groups of four hexadecimal digits separated by colons. For readability, addresses may be shortened in two ways. First, within each colon-delimited group, leading zeroes may be dropped. Secondly, one run of consecutive all-zero groups (and only one) may be replaced with a double colon (::). For example, all of the following addresses are equivalent:

• 1080:0000:0000:0000:0000:0034:0000:417A

• 1080:0:0:0:0:34:0:417A

• 1080::34:0:417A

Global unicast IPv6 addresses are constructed as two parts: a 64-bit routing part followed by a 64-bit host identifier.

Netblocks are specified as in the modern alternative for IPv4: network number, followed by a slash, and the number of relevant bits of the network number (in decimal). Example: 12AB::CD30:0:0:0:0/60 includes all addresses starting with 12AB00000000CD3.
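Python's standard ipaddress module can confirm that the three spellings listed above name the same 128-bit address, and illustrates the netblock notation using the /60 example:

```python
import ipaddress

# All three spellings from the list above are the same address
forms = [
    "1080:0000:0000:0000:0000:0034:0000:417A",
    "1080:0:0:0:0:34:0:417A",
    "1080::34:0:417A",
]
addrs = {ipaddress.IPv6Address(f) for f in forms}
print(len(addrs))  # 1: they all compare equal

# The library emits the canonical compressed, lowercase form
print(ipaddress.IPv6Address(forms[0]))  # 1080::34:0:417a

# A /60 netblock: network number, slash, number of relevant bits.
# 128 - 60 = 68 host bits, so the block holds 2**68 addresses.
net = ipaddress.IPv6Network("12AB::CD30:0:0:0:0/60")
print(net.num_addresses == 2 ** 68)  # True
```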

IPv6 has many improvements over IPv4 other than just bigger address space, including autorenumbering and mandatory support for IPsec.

How do you explain the Hypertext Transfer Protocol (HTTP)?

Hypertext Transfer Protocol (HTTP) is a method used to transfer or convey information on the World Wide Web. Its original purpose was to provide a way to publish and retrieve HTML pages.

Development of HTTP was coordinated by the World Wide Web Consortium and the Internet Engineering Task Force, culminating in the publication of a series of RFCs, most notably RFC 2616 (1999), which defines HTTP/1.1, the version of HTTP in common use today.

HTTP is a request/response protocol between clients and servers. The originating client, such as a web browser, spider, or other end-user tool, is referred to as the user agent. The destination server, which stores or creates resources such as HTML files and images, is called the origin server. In between the user agent and origin server may be several intermediaries, such as proxies, gateways, and tunnels.

An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a remote host (port 80 by default; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for the client to send a request message.

Upon receiving the request, the server sends back a status line, such as “HTTP/1.1 200 OK”, and a message of its own, the body of which is perhaps the requested file, an error message, or some other information.
Resources to be accessed by HTTP are identified using Uniform Resource Identifiers (URIs) (or, more specifically, URLs) using the http: or https: URI schemes.
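The request/response exchange described above can be sketched end to end with plain TCP sockets. This is a minimal illustration, not a real web server: a throwaway thread plays the origin server on a loopback port, and the client plays the user agent sending a request message and reading back the status line, headers, and body:

```python
import socket
import threading

# The "origin server": accept one TCP connection, read the request
# message, and answer with a status line, headers, and a body.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    request = conn.recv(4096).decode()
    assert request.startswith("GET / HTTP/1.1")
    body = "hello"
    conn.sendall(
        ("HTTP/1.1 200 OK\r\n"
         f"Content-Length: {len(body)}\r\n"
         "Connection: close\r\n"
         "\r\n" + body).encode()
    )
    conn.close()
    srv.close()

threading.Thread(target=serve).start()

# The "user agent": open a TCP connection and send a request message
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET / HTTP/1.1\r\nHost: example\r\nConnection: close\r\n\r\n")
response = b""
while chunk := cli.recv(4096):
    response += chunk
cli.close()

print(response.decode().splitlines()[0])  # HTTP/1.1 200 OK
```

A real user agent would of course use port 80 on a remote host and a library such as urllib, but the wire format is the same.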

What is a File Transfer Protocol?

FTP or File Transfer Protocol is used to transfer data from one computer to another over the Internet, or through a network.
Specifically, FTP is a commonly used protocol for exchanging files over any network that supports the TCP/IP protocol (such as the Internet or an intranet). Two computers are involved in an FTP transfer: a server and a client. The FTP server, running FTP server software, listens on the network for connection requests from other computers. The client computer, running FTP client software, initiates a connection to the server. Once connected, the client can perform a number of file manipulation operations, such as uploading files to the server, downloading files from the server, and renaming or deleting files on the server.

Any software company or individual programmer can create FTP server or client software because the protocol is an open standard. Virtually every computer platform supports FTP. This allows any computer connected to a TCP/IP-based network to manipulate files on another computer on that network, regardless of which operating systems are involved (provided the computers permit FTP access). There are many existing FTP client and server programs.

What is a Charge-coupled device?

A charge-coupled device (CCD) is an image sensor, consisting of an integrated circuit containing an array of linked, or coupled, light-sensitive capacitors. This device is also known as a Color-Capture Device.

The capacitor perspective is reflective of the history of the development of the CCD and also is indicative of its general mode of operation, with respect to readout, but attempts aimed at optimization of present CCD designs and structures tend towards consideration of the photodiode as the fundamental collecting unit of the CCD. Under the control of an external circuit, each capacitor can transfer its electric charge to one or other of its neighbours. CCDs are used in digital photography and astronomy (particularly in photometry, sensors, medical fluoroscopy, optical and UV spectroscopy and high speed techniques such as lucky imaging).