WNCG Alumni and Faculty Win First-of-its-Kind IEEE CommSoc Award

WNCG alumni, Prof. Harpreet S. Dhillon and Dr. Radha Krishna Ganti, along with WNCG Profs. Jeffrey Andrews and François Baccelli, recently received the 2015 IEEE Communications Society Young Author Best Paper Award. They received the award for their paper entitled “Modeling and Analysis of K-Tier Downlink Heterogeneous Cellular Networks.”

The first of its kind, the Young Author Best Paper Award covers all publications of the IEEE Communications Society, which includes eight monthly or bi-monthly magazines and 23 multi-annual journal publications. 

UAV Expert Todd Humphreys Testifies Before Congress

Last month, Todd Humphreys, an assistant professor in the Department of Aerospace Engineering and Engineering Mechanics and WNCG, testified before the U.S. House Committee on Homeland Security's Subcommittee on Oversight and Management Efficiency. Humphreys was asked to speak at the hearing, "Unmanned Aerial System Threats: Exploring Security Implications and Mitigation Technologies," because of his expertise in unmanned aerial vehicles (UAVs).

MAC Channels

In joint work with Venkat Anantharam of UC Berkeley, WNCG Prof. François Baccelli derived the capacity region, in the Poltyrev sense, of the dimension-matched MAC channel. Using Palm theory, they gave a representation of the error probabilities for each subset of transmitters, along with random coding exponents for each type of error event in the case without power constraints, for independent and identically distributed Gaussian noise with an arbitrary positive definite covariance matrix at each time.

Motion Silencing of Flicker Distortions on Naturalistic Videos

Prof. Alan Bovik and his student Lark Kwon Choi in the WNCG Laboratory for Image and Video Engineering (LIVE) and Prof. Lawrence Cormack in the Center for Perceptual Systems (CPS) in the Department of Psychology study the influence of motion on the visibility of flicker distortions in naturalistic videos.

Quality-Energy Aware Synthesis of Approximate Hardware

Approximate computing is an aggressive design technique aimed at achieving significant energy savings by trading off computational precision and accuracy in inherently error-tolerant applications. This introduces a new notion of quality as a fundamental design parameter. While ad hoc solutions have been explored at various levels, systematic design approaches are lacking.
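The precision-for-energy tradeoff can be illustrated with operand truncation, one common approximation in arithmetic hardware. The sketch below is a hypothetical software model, not the synthesis approach of the project itself: zeroing low-order bits shrinks the partial-product array of a multiplier, which is where the hardware energy savings come from, at the cost of a bounded relative error.

```python
def truncated_multiply(a: int, b: int, drop_bits: int) -> int:
    """Approximate multiply: zero the low-order `drop_bits` bits of each
    operand before multiplying. Fewer active bits means a smaller
    partial-product array in hardware, hence lower energy per operation."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

exact = 12345 * 6789
approx = truncated_multiply(12345, 6789, 4)
rel_error = abs(exact - approx) / exact  # small, quality-dependent error
```

Sweeping `drop_bits` traces out exactly the quality-energy curve that a systematic synthesis flow would need to reason about as a first-class design parameter.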

The Burden of Risk Aversion in Selfish Routing

Traffic congestion aggravates the daily lives of millions of people around the globe, and congestion games from game theory provide a suitable tool to understand its effects and offer insights on how to alleviate it. Classic congestion games assume deterministic edge delays, while in reality delays are uncertain, and risk-averse drivers might prefer longer but safer routes, further exacerbating the problem of increased travel times and emissions.
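How risk aversion shifts an equilibrium can be seen in a two-link toy network in the spirit of Pigou's example. This is a hypothetical illustration under a mean-plus-risk-premium cost model, not the cost model of the work described above:

```python
def pigou_equilibrium(gamma: float, sigma: float) -> float:
    """Two parallel links: link A has uncertain delay with mean x (the
    fraction of traffic on A) and standard deviation sigma; link B has a
    fixed delay of 1. A risk-averse driver perceives A's cost as
    x + gamma * sigma (mean plus a risk premium with aversion gamma).
    At equilibrium either all traffic uses A (its perceived cost never
    reaches 1) or flows split so perceived costs equalize:
    x + gamma * sigma = 1."""
    x = 1.0 - gamma * sigma
    return min(max(x, 0.0), 1.0)
```

With `gamma = 0` (risk-neutral drivers) everyone piles onto the uncertain link, reproducing the classic outcome; as `gamma` grows, traffic migrates to the longer but safe link, which is precisely the burden-of-risk-aversion effect the blurb describes.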

Modeling and Algorithms for Aggregated Data

Databases in domains such as healthcare are routinely released to the public in aggregated form to preserve privacy. However, naive application of existing modeling techniques to aggregated data suffers from the ecological fallacy, which can drastically reduce the accuracy of results and often leads to misleading inferences at the individual level.
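The ecological fallacy is easy to demonstrate: a relationship that holds within every group can reverse when only group aggregates are released. The data below are fabricated purely for illustration:

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Two groups; within each group, y falls as x rises (slope -1).
group_a = ([1, 2, 3], [5, 4, 3])
group_b = ([4, 5, 6], [8, 7, 6])

within_slope = ols_slope(*group_a)  # -1.0 at the individual level

# Aggregated release: only the per-group means survive.
agg_x = [sum(g[0]) / 3 for g in (group_a, group_b)]  # [2.0, 5.0]
agg_y = [sum(g[1]) / 3 for g in (group_a, group_b)]  # [4.0, 7.0]
agg_slope = ols_slope(agg_x, agg_y)  # +1.0: the sign flips
```

A model fit naively to the aggregates would conclude y increases with x, the opposite of what is true for every individual, which is exactly the failure mode aggregation-aware modeling must avoid.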

Detecting Sponsored Recommendations

With a vast number of items, web pages, and news stories to choose from, online services and their customers both benefit tremendously from personalized recommender systems. Such systems, however, provide great opportunities for targeted advertisements, by displaying ads alongside genuine recommendations. We consider a biased recommendation system where such ads are displayed without any tags (disguised as genuine recommendations), rendering them indistinguishable to a single user. We ask whether it is possible for a small subset of collaborating users to detect such a bias.

Scheduling for Stream Computing in the Cloud

Motivated by emerging big streaming data processing paradigms (e.g., Twitter Storm, Streaming MapReduce), we investigate the problem of scheduling graphs over a large cluster of servers. Each graph is a job, where nodes represent compute tasks and edges indicate data-flows between these compute tasks. Jobs (graphs) arrive randomly over time, and upon completion, leave the system. When a job arrives, the scheduler needs to partition the graph and distribute it over the servers to satisfy load balancing and cost considerations.
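The placement step can be sketched with a simple greedy heuristic. This is a toy illustration of the load-balancing versus data-flow-locality tradeoff, not the scheduling algorithm of the work itself:

```python
from collections import defaultdict

def place_job(edges, num_servers):
    """Greedily place one job's task graph onto servers. Each task, in
    order, goes to the server minimizing load + cut cost, where the cut
    cost counts already-placed neighbors assigned to other servers
    (each cut edge implies cross-server data flow)."""
    assignment = {}
    load = [0] * num_servers
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    for task in sorted(neighbors):
        def cost(s):
            cut = sum(1 for n in neighbors[task]
                      if n in assignment and assignment[n] != s)
            return load[s] + cut
        best = min(range(num_servers), key=cost)
        assignment[task] = best
        load[best] += 1
    return assignment

# A 4-task pipeline 0 -> 1 -> 2 -> 3 on 2 servers splits into two
# balanced halves with a single cut edge.
placement = place_job([(0, 1), (1, 2), (2, 3)], 2)
```

Real streaming schedulers must also cope with jobs arriving and departing over time, so any static partition like this one would need to be revisited as the cluster load evolves.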
