
Google Glass Glassware Integration Pattern – Key to Performance

It is well understood that Google Glass can be integrated with Glassware over Google Cloud by making use of the Google Mirror API. Let's try to understand the integration pattern (as of now) that is used for this integration.

Glassware’s Performance: A Key Concern

Before we go into the discussion, it is a given that performance is one of the most important concerns Google Glass developers have to deal with. This is not like pages loading on one's desktop/laptop or iPad, where users can afford to wait. Because cards on the Google Glass timeline appear directly near one's eye, the expectation is that the operation is performed as quickly as possible (virtually in no time). Thus, Glassware has to respond as quickly as possible.

However, one would agree that not all Glassware can respond that quickly at all times, for various reasons: some due to the design of the Glassware, some due to the infrastructure on which the Glassware is deployed, and so on.

Integration Pattern: Request-Response

At this point, Google Glass interacts with Glassware via a request-response integration pattern. This means that for each request sent to the Glassware in the form of a notification, a response is expected within a pre-defined time-out of 10 seconds. If the Glassware does not respond within those 10 seconds, the connection times out. Check the Mirror API notifications documentation to read for yourself.
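To make this concrete, below is a minimal sketch of a notification callback as it is commonly implemented with a plain Java servlet; the class name and the handleNotification() helper are illustrative, not part of the Mirror API. Whatever work happens inside doPost() before the response is written counts against the 10-second window.

import java.io.BufferedReader;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative callback endpoint registered as the Glassware's notification URL.
public class NotificationCallbackServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Read the JSON notification payload posted by the Mirror API.
        StringBuilder body = new StringBuilder();
        BufferedReader reader = req.getReader();
        String line;
        while ((line = reader.readLine()) != null) {
            body.append(line);
        }

        // Everything done here, before the response goes back,
        // counts against the ~10 second time-out on the Mirror API side.
        handleNotification(body.toString());

        // Acknowledge the notification with HTTP 200.
        resp.setStatus(HttpServletResponse.SC_OK);
    }

    // Placeholder: parse the payload and act on the timeline item / user action.
    private void handleNotification(String json) {
    }
}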

So, what are the best practices for designing the integration with the Google Mirror API?

While designing the Glassware, if the processing is going to take more than 10 seconds, the best practice is to send the response right away and then call the Mirror API later to push the appropriate message (for example, as a timeline card), as sketched below.
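Here is a rough sketch of that "respond first, process later" approach. It assumes the Mirror API Java client library (com.google.api.services.mirror); the executor, the doExpensiveProcessing() helper, and buildMirrorClient() (which would wrap the OAuth2/credential setup) are illustrative placeholders, not prescribed by the API.

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.TimelineItem;

// Illustrative servlet that acknowledges the notification immediately and
// pushes the real result to the timeline once the slow work is done.
public class AsyncNotificationServlet extends HttpServlet {

    // Background worker so the HTTP response is never held up by slow processing.
    private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final String payload = readBody(req);

        // 1. Respond right away so the Mirror API connection does not time out.
        resp.setStatus(HttpServletResponse.SC_OK);

        // 2. Do the expensive work later and insert a timeline card with the result.
        WORKER.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    String result = doExpensiveProcessing(payload); // may take well over 10s
                    Mirror mirror = buildMirrorClient();            // authorized Mirror client
                    TimelineItem card = new TimelineItem().setText(result);
                    mirror.timeline().insert(card).execute();
                } catch (IOException e) {
                    // Log and optionally retry the timeline insert.
                }
            }
        });
    }

    private String readBody(HttpServletRequest req) throws IOException {
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = req.getReader().readLine()) != null) {
            body.append(line);
        }
        return body.toString();
    }

    // Placeholder for the long-running business logic.
    private String doExpensiveProcessing(String notificationJson) {
        return "Processing complete";
    }

    // Placeholder: build a Mirror client from stored OAuth2 credentials
    // (credential handling is omitted in this sketch).
    private Mirror buildMirrorClient() throws IOException {
        throw new UnsupportedOperationException("credential setup omitted");
    }
}

Depending on the interaction, the later call can also update or patch the existing timeline item instead of inserting a new card; the key point is that the slow work never blocks the notification response.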

