
Batch Normalization: A different perspective from Quantized Inference Model

Abstract 

The benefits of Batch Normalization in training are well known: it reduces internal covariate shift and thus helps training converge faster. This article brings in a different perspective, where the quantization loss is recovered with the help of the Batch Normalization layer, thus retaining the accuracy of the model. The article also gives a simplified implementation of Batch Normalization to reduce the load on edge devices, which generally have constraints on the computation of neural network models.

Batch Normalization Theory

During the training of a neural network, we want the network to learn as fast as possible. One way to speed it up is to normalize the inputs to the network, along with normalizing the intermediate layers of the network. This intermediate-layer normalization is what is called Batch Normalization. Batch Norm also has the advantage of minimizing internal covariate shift, as described in the original Batch Normalization paper.

Frameworks like TensorFlow, Keras and Caffe use the same representation with different symbols attached to it. In general, Batch Normalization can be described by the following math:

Batch Normalization equation
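The equation image is not reproduced here; reconstructed from the surrounding description, the standard forms referred to as (1.1) and (1.2) are:

y = \gamma \cdot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta \qquad (1.1)

\hat{x} = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}}, \qquad y = \gamma\,\hat{x} + \beta \qquad (1.2)

where \mu and \sigma^2 are the per-channel (moving) mean and variance, \gamma and \beta are the learned scale and shift, and \epsilon is a small constant for numerical stability.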

Here equation (1.1) is the representation used by Keras/TensorFlow, whereas equation (1.2) is the representation used by the Caffe framework. In this article, the equation (1.1) style is adopted for the rest of the discussion.

Now let’s modify the equation (1.1) as below:
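The modified form is also shown only as an image in the original; following the text that comes next, expanding (1.1) and folding the constants gives:

y = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}} \cdot x + \left(\beta - \frac{\gamma\,\mu}{\sqrt{\sigma^2 + \epsilon}}\right) \qquad (1.3)

y = \gamma_s \cdot x + bias_{comb}, \quad \text{where } \gamma_s = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}}, \; bias_{comb} = \beta - \gamma_s\,\mu \qquad (1.4)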

Observing equation (1.4), there is an opportunity to reduce the number of multiplications and additions. The bias_comb (read it as combined bias) factor can be calculated offline for each channel. Likewise, the ratio "gamma/sqrt(variance)" can be calculated offline and reused while implementing the Batch Norm equation. This form can be used in the quantized inference model to reduce complexity.
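As a minimal sketch of this offline pre-computation (assuming a Keras BatchNormalization layer whose get_weights() returns [gamma, beta, moving_mean, moving_variance]; the helper name fold_bn_params is ours for illustration):

import numpy as np

def fold_bn_params(gamma, beta, moving_mean, moving_variance, eps=1e-3):
    # Pre-compute the per-channel gamma_s and bias_comb of equation (1.4) offline.
    # eps is the layer's epsilon (Keras default is 1e-3).
    gamma_s = gamma / np.sqrt(moving_variance + eps)   # gamma / sqrt(variance)
    bias_comb = beta - gamma_s * moving_mean           # combined bias
    return gamma_s, bias_comb

# Usage with a Keras model (assuming layer index k is a BatchNormalization layer):
# gamma, beta, mean, var = model.layers[k].get_weights()
# gamma_s, bias_comb = fold_bn_params(gamma, beta, mean, var)
# At inference each channel then needs only one multiply and one add: y = gamma_s * x + bias_comb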

Quantized Inference Model

The inference model to be deployed on edge devices generally runs on integer-arithmetic-friendly hardware, such as ARM Cortex-M/A series processors or FPGA devices. To make the inference model friendly to the architecture of the edge device, we will create a simulation in Python and convert the inference model's chain of inputs, weights, and outputs into fixed-point format. For the fixed-point format, an 8-bit Q format is chosen, representing numbers as integer.fractional. This simulation model helps to develop the inference model faster on the device and also to evaluate the accuracy of the model.

e.g.: Q2.6 represents 2 integer bits and 6 fractional bits.
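A small illustrative sketch of this Q format (the helper to_q is hypothetical; the scale factor is 2 to the power of the fractional bits, and the integer code is clipped to the signed 8-bit range):

import numpy as np

def to_q(value, frac_bits, total_bits=8):
    # Quantize a float to a fixed-point code with the given fractional bits,
    # clipped to the signed range of total_bits.
    scale = 1 << frac_bits
    q_min, q_max = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    code = int(np.clip(np.round(value * scale), q_min, q_max))
    return code, code / scale   # integer code and the float value it represents

# Q2.6 example: 6 fractional bits, scale = 64
code, approx = to_q(1.30, 6)
print(code, approx)   # 83 1.296875 -> quantization error of about 0.003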

Now the way to represent the Q format for each layer is as follows:

  1. Take the maximum and minimum of the inputs, outputs, and each layer's weights.
  2. Get the fractional bits required to represent this dynamic range (using the maximum/minimum), with the Python function below:

import numpy as np

def get_fract_bits(tensor_float):
    # Assumption is that out of 8 bits, one bit is used as sign
    fract_dout = 7 - np.ceil(np.log2(abs(tensor_float).max()))
    fract_dout = fract_dout.astype('int8')
    return fract_dout

  3. Now the integer bits are 7 - fractional_bits, as one bit is reserved for sign representation.
  4. To start with, perform this on the input, then on Layer 1, Layer 2, and so on.
  5. Do the quantization step for the weights and then for the output, assuming one example input. The assumption is that the input is normalized so that we can generalize the Q format; otherwise, this may lead to some loss of data when a different, non-normalized input gets fed.
  6. This sets the Q format for the input, weights, and outputs.

Example:
Let's consider ResNet-50 as the model to be quantized, using the Keras built-in ResNet-50 trained on ImageNet.

# Creating the model
import tensorflow as tf

def model_create():
    model = tf.compat.v1.keras.applications.resnet50.ResNet50(
        include_top=True,
        weights='imagenet',
        input_tensor=None,
        input_shape=None,
        pooling=None,
        classes=1000)
    return model

Let's prepare the input for ResNet-50. The image below is taken from the ImageNet dataset.

Elephant image from ImageNet data

from tensorflow.compat.v1.keras.preprocessing import image   # assumed import for image.load_img
from tensorflow.compat.v1.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

def prepare_input():
    img = image.load_img(
        "D:\\Elephant_water.jpg",
        target_size=(224, 224)
    )
    x_test = image.img_to_array(img)
    x_test = np.expand_dims(x_test, axis=0)
    x = preprocess_input(x_test)
    return x

Now let's call the above two functions and find out the Q format for the input.

model = model_create()
x     = prepare_input()

If you observe the input 'x', its dynamic range is -123.68 to 131.32. This makes it hard to fit in 8 bits, as we only have 7 bits to represent these numbers, considering one sign bit. Hence the Q format for this input becomes Q8.0, with 7 bits for the number and 1 sign bit. This clips the data to the range -128 to +127 (-2⁷ to 2⁷ - 1), so we lose some data in this input quantization (most obviously, 131.32 is clipped to 127). The loss can be measured by the Signal to Quantization Noise Ratio, which is described below.

If you follow the same method for the weights and the outputs of each layer, we arrive at a Q format for each of them, which we can then fix to simulate the quantization.
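The snippet below calls a Quantize() helper that is not listed in the article; a plausible minimal sketch (our assumption of its behaviour: it derives the fractional bits from the tensor's dynamic range using get_fract_bits above and returns the int8 tensor along with those fractional bits) could be:

import numpy as np

def Quantize(tensor_float, total_bits=8):
    # Assumed behaviour: pick fractional bits from the dynamic range, scale,
    # round and saturate to the signed 8-bit range.
    fract_bits = int(get_fract_bits(tensor_float))      # 7 - ceil(log2(max|x|))
    scale = 2.0 ** fract_bits                           # also works for negative fract_bits
    q = np.round(tensor_float * scale)
    q = np.clip(q, -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1)
    return q.astype(np.int8), fract_bits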

# Let's get the first layer properties
(padding, _) = model.layers[1].padding

# Let's get the second layer properties
wts = model.layers[2].get_weights()
strides = model.layers[2].strides
W = wts[0]
b = wts[1]

hparameters = dict(
    pad=padding[0],
    stride=strides[0]
)

# Let's quantize the weights.
quant_bits = 8  # This will be our data path.
wts_qn, wts_bits_fract = Quantize(W, quant_bits)  # Both weights and biases will be quantized with wts_bits_fract.

# Let's quantize the bias also at wts_bits_fract
b_qn = (np.round(b * (1 << wts_bits_fract))).astype('int8')  # scale by 2**wts_bits_fract

names_model, names_pair = getnames_layers(model)
layers_op = get_each_layers(model, x, names_model)

quant_bits = 8
print("Running conv2D")

# Let's extract the first layer output from the convolution block.
Z_fl = layers_op[2]  # This index corresponds to the first convolution.

# Find out the maximum bits required for the final convolved value.
fract_dout = get_fract_bits(Z_fl)
fractional_bits = [0, wts_bits_fract, fract_dout]

# Quantized convolution here.
Z, cache_conv = conv_forward(
    x.astype('int8'),
    wts_qn,
    b_qn[np.newaxis, np.newaxis, np.newaxis, ...],
    hparameters,
    fractional_bits)

Now if you observe the above snippet of code, the convolution operation takes the input, weights, and output with their fractional bits defined,
i.e. fractional_bits = [0, 7, -3], where
the 1st element represents 0 fractional bits for the input (Q8.0),
the 2nd element represents 7 fractional bits for the weights (Q1.7), and
the 3rd element represents -3 fractional bits for the output (nominally Q8.0, but 3 additional integer bits are needed because the output range goes beyond what 8 bits can represent).

This has to be repeated for each layer to get its Q format, for example as sketched below.
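A hedged sketch of how that repetition could be automated, reusing layers_op and get_fract_bits from above (purely illustrative; the Q label follows the article's 8-bit convention):

# Derive the Q format for every captured layer output.
for idx, layer_out in enumerate(layers_op):
    fract = int(get_fract_bits(layer_out))
    print("layer %d -> Q%d.%d" % (idx, 8 - fract, fract))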

Now that familiarity with quantization is established, we can move on to the impact of this quantization on SQNR and hence on accuracy.

Signal to Quantization Noise Ratio

As we have reduced the dynamic range from floating-point representation to fixed-point representation by using the Q format, we have discretized the values to the nearest possible integer representation. This introduces quantization noise, which can be quantified mathematically by the Signal to Quantization Noise Ratio (refer: https://en.wikipedia.org/wiki/Signal-to-quantization-noise_ratio).
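The equation image is not reproduced here; in its standard form (see the Wikipedia reference above):

SQNR = \frac{\text{signal power}}{\text{quantization noise power}} = \frac{E[x^2]}{E[(x - \hat{x})^2]}, \qquad SQNR_{dB} = 10 \log_{10}(SQNR)

where x is the floating-point signal and \hat{x} is its quantized representation.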

As shown in the above equation, we measure the ratio of signal power to noise power. On a log scale this converts to dB (10·log10(SQNR)). Here the signal is the floating-point input which we are quantizing to the nearest integer, and the noise is the quantization noise.
Example: the elephant input has a maximum value of 131.32, but we represent it with the nearest possible integer, which is 127. Hence the quantization noise = 131.32 - 127 = 4.32.
So SQNR = 131.32² / 4.32² = 924.04, which is 29.66 dB, indicating that we have attained only close to 30 dB compared with the 48 dB (6 × number of bits) that 8 bits could offer.

How SQNR translates into accuracy has to be established for each individual network, depending on its structure. But broadly, the better the SQNR, the higher the accuracy.
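A minimal sketch of how such SQNR numbers can be measured in the simulation (sqnr_db is a hypothetical helper; it compares a float tensor with the de-quantized value of its fixed-point representation):

import numpy as np

def sqnr_db(x_float, x_quant, fract_bits):
    # Signal-to-quantization-noise ratio in dB between a float tensor and
    # its fixed-point representation with the given fractional bits.
    x_hat = x_quant.astype(np.float64) / (2.0 ** fract_bits)   # de-quantize
    noise = x_float - x_hat
    return 10.0 * np.log10(np.sum(x_float ** 2) / np.sum(noise ** 2))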

Convolution in Quantized environments:

The convolution operation in CNNs is well known: we multiply the kernel with the input and accumulate to get the result. In this process, remember that we are operating with 8-bit inputs, so the result of a multiplication needs at least 16 bits, and it is then accumulated in a 32-bit accumulator, which helps to maintain the precision of the result. The result is then rounded or truncated back to 8 bits to keep the 8-bit data path.

def conv_single_step_quantized(a_slice_prev, W, b, ip_fract, wt_fract, fract_dout):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output
    activation of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """
    # Element-wise product between a_slice and W. Do not add the bias yet.
    s = np.multiply(a_slice_prev.astype('int16'), W)  # Let the result be held in 16 bits

    # Sum over all entries of the volume s.
    Z = np.sum(s.astype('int32'))  # Final result is accumulated in int32.

    # Add bias b to Z. Bring the bias up to 32 bits and align it to the product's
    # fractional bits (shift by ip_fract, since b is already at wt_fract).
    Z = Z + (b.astype('int32') << ip_fract)

    # The 32-bit result has to be truncated to 8 bits to restore the data path.
    # Let's find out how many integer bits are taken during addition.
    # You can do this by counting leading bits in C/Assembly/FPGA programming.
    # Here let's simulate it with a shift back to the output's fractional bits.
    Z = Z >> (ip_fract + wt_fract - fract_dout)

    # Saturate to the int8 range.
    if Z > 127:
        Z = 127
    elif Z < -128:
        Z = -128
    else:
        Z = Z.astype('int8')

    return Z

The above code is inspired by Andrew Ng's deep learning specialization course, where convolution from scratch is taught; it has been modified here to fit quantization.
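The conv_forward() wrapper called earlier is also not listed in the article; following the same course-style structure, a hedged sketch of how it might slide conv_single_step_quantized over the padded input (shapes and hparameters keys are assumed from the call site) is:

import numpy as np

def conv_forward(A_prev, W, b, hparameters, fractional_bits):
    # Assumed shapes: A_prev (m, n_H_prev, n_W_prev, n_C_prev), W (f, f, n_C_prev, n_C),
    # b (1, 1, 1, n_C); fractional_bits = [ip_fract, wt_fract, fract_dout].
    (ip_fract, wt_fract, fract_dout) = fractional_bits
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    (f, f, n_C_prev, n_C) = W.shape
    stride, pad = hparameters['stride'], hparameters['pad']

    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
    Z = np.zeros((m, n_H, n_W, n_C), dtype=np.int8)

    # Zero-pad the spatial dimensions only.
    A_prev_pad = np.pad(A_prev, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                        mode='constant', constant_values=0)
    for i in range(m):
        for h in range(n_H):
            for w in range(n_W):
                for c in range(n_C):
                    vert, horiz = h * stride, w * stride
                    a_slice = A_prev_pad[i, vert:vert + f, horiz:horiz + f, :]
                    Z[i, h, w, c] = np.squeeze(conv_single_step_quantized(
                        a_slice, W[..., c], b[..., c],
                        ip_fract, wt_fract, fract_dout))
    cache = (A_prev, W, b, hparameters)
    return Z, cache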

Batch Norm in Quantized environment

As shown in equation (1.4), we have modified the representation to reduce complexity while performing Batch Normalization. The code below shows this implementation.

def calculate_bn(x, bn_param, Bn_fract_dout):
    x_ip = x[0]
    x_fract_bits = x[1]

    bn_param_gamma_s = bn_param[0][0]
    bn_param_fract_bits = bn_param[0][1]

    op = x_ip * bn_param_gamma_s.astype(np.int16)  # x * gamma_s
    # This output will have x_fract_bits + bn_param_fract_bits
    fract_bits = x_fract_bits + bn_param_fract_bits

    bn_param_bias = bn_param[1][0]
    bn_param_fract_bits = bn_param[1][1]
    bias = bn_param_bias.astype(np.int16)
    # Let's adjust the bias to fract_bits
    bias = bias << (fract_bits - bn_param_fract_bits)
    op = op + bias  # + bias

    # Convert this op back to 8 bits, with Bn_fract_dout as fractional bits
    op = op >> (fract_bits - Bn_fract_dout)
    BN_op = op.astype(np.int8)

    return BN_op

Now, with these pieces of the quantized inference model in place, we can see the impact of Batch Norm on quantization.
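For completeness, a hedged sketch of how the bn_param argument above could be prepared offline, folding and quantizing gamma_s and bias_comb from equation (1.4) (prepare_bn_param is our name; Quantize is the helper sketched earlier; layer index 3 is assumed to be the first BatchNormalization layer of the Keras ResNet-50 used above):

def prepare_bn_param(gamma, beta, moving_mean, moving_variance, eps=1e-3, bits=8):
    # Fold the BN parameters per equation (1.4), then quantize them in the
    # [[gamma_s, fract_bits], [bias_comb, fract_bits]] layout that calculate_bn expects.
    gamma_s = gamma / np.sqrt(moving_variance + eps)
    bias_comb = beta - gamma_s * moving_mean
    gamma_s_q, gamma_s_fract = Quantize(gamma_s, bits)
    bias_q, bias_fract = Quantize(bias_comb, bits)
    return [[gamma_s_q, gamma_s_fract], [bias_q, bias_fract]]

# bn_param = prepare_bn_param(*model.layers[3].get_weights())
# BN_op = calculate_bn((Z, fract_dout), bn_param, Bn_fract_dout)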

Results

The ResNet-50 trained on ImageNet is used in the Python simulation to quantize the inference model. Binding together the pieces from the sections above, we analyze only the first convolution followed by its Batch Norm layer.

The convolution operation is the heaviest part of the network, both in complexity and in maintaining the accuracy of the model. So let's look at the convolution data after quantizing it to 8 bits. The left-hand side of the figure below represents the convolution output over 64 channels (or filters applied), whose mean value is taken for comparison. Blue is the float reference and green is the quantized implementation. The difference plot gives an indication of how much variation exists between the float and quantized versions. The line drawn in that difference plot is the mean, whose value is around 4, which means that on average the difference between the float and quantized values is close to 4.

        Convolution and Batch Norm Outputs

Now let's look at the right-hand side of the figure, which is the Batch Normalization section. As you can see, the green and blue curves are very close and the range of their differences shrinks to less than 0.5. The mean line is around 0.135, which used to be around 4 in the case of convolution. This indicates that Batch Normalization reduces the difference between the float and quantized implementations from a mean of 4 to 0.135 (almost zero).

Now let’s look at the SQNR plot to appreciate the Batch Norm impact.

                    Signal to Quantization Noise Ratio for sequence of layers

In case the values are not visible in the plot, the SQNR numbers are as follows:
Input SQNR: 25.58 dB (the input going into the model)
Convolution SQNR: -4.4 dB (the output of the 1st convolution)
Batch Norm SQNR: 20.98 dB (the Batch Normalization output)

As you can see, the input SQNR is about 25.58 dB, which drops to -4.4 dB, indicating a huge loss here because of the limitation of representing values beyond 8 bits. But hope is not lost, as Batch Normalization helps to recover the SQNR back to 20.98 dB, bringing it close to the input SQNR.

Conclusion

  1. Batch Normalization helps to correct the mean, thus regularizing the quantization variation across the channels.
  2. Batch Normalization recovers the SQNR. As seen from the above demonstration, we see a recovery of SQNR compared to the convolution layer.
  3. If a quantized inference model on the edge is desirable, consider including Batch Normalization, as it acts as a recovery of quantization loss and helps in maintaining accuracy, along with the training benefit of faster convergence.
  4. Batch Normalization complexity can be reduced by using equation (1.4), so that many parameters can be computed offline to reduce the load on the edge device.

