In a typical Wireless Sensor Network (WSN) application, sensor nodes gather data from the environment and convey the collected data towards the base station. Certain signal processing operations can be performed on the raw data at each sensor node before transmission so that the number of transmitted data bits is reduced. The amount of transmitted data usually depends on how much processing is performed at each node: less processing results in more data to transmit, and vice versa. However, more complex computation operations dissipate more energy. Hence, the utilization of signal processing operations should be evaluated carefully by considering both their computation costs and the amount of data reduction they achieve. It is also possible to employ different signal processing techniques at different nodes; hence, the optimal assignment of signal processing algorithms can be assessed at the network level (i.e., all nodes adopt a single signal processing technique during the entire lifetime) or at the node level (i.e., different nodes may implement different solutions during the lifetime). In this study, we develop a novel Mixed Integer Programming (MIP) framework to quantitatively investigate the effects of utilizing traditional transform coding (TC) based and compressive sensing (CS) based signal processing techniques (at the network and node levels) on WSN lifetime. We explore the parameter space consisting of network size, node density, and signal sparsity level through numerical evaluations of the proposed MIP model.
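As a minimal sketch of the computation-versus-communication trade-off described above, the snippet below illustrates how a CS-based node could reduce the number of values it transmits: a length-n sample vector is projected onto m < n random measurements before transmission. All dimensions (n, k, m) and the Gaussian measurement matrix are illustrative assumptions, not parameters from the paper.

```python
# Hypothetical illustration (assumed parameters, not from the paper):
# a sensor node compresses n raw samples into m < n CS measurements
# before transmission, trading extra computation for fewer sent values.
import numpy as np

rng = np.random.default_rng(0)

n = 256  # raw samples per sensing round (assumed)
k = 8    # sparsity level: number of nonzero coefficients (assumed)
m = 64   # CS measurements actually transmitted (assumed)

# A k-sparse signal (sparse in the canonical basis for simplicity).
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix: y = Phi @ x maps n values to m.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

reduction = 1 - m / n
print(f"transmitted {m} measurements instead of {n} samples "
      f"({reduction:.0%} fewer values)")
```

The base station would then recover x from y via a sparse-reconstruction solver (e.g., l1 minimization); the node itself only pays the cost of the matrix-vector product, which is the kind of per-node computation cost the proposed MIP model weighs against the transmission savings.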