Your Role and Responsibilities
As a DTT Engineer/Architect, you will guide the technical evaluation, design, and development phases in a hands-on environment across Data Platform, Internet of Things (IoT) and Automation, Analytics (including AI and Machine Learning), and Blockchain. You will act as a technical advisor to the internal sales and delivery teams, and work with the product (analytics or data) team as an advocate for your customers in the field. You’ll grow as a leader in your field while solving our customers’ biggest challenges in big data, IoT, automation, data engineering, data science, and analytics.
As a Data Engineer or Solution Architect, you will deliver analytics and data-related solutions to clients and lead complex projects/programs in cloud and non-cloud environments, including complex application and/or system integration projects. You will help our customers achieve tangible data-driven outcomes through Data Engineering frameworks, Data Platforms, Automation, and Blockchain, helping data and analytics teams complete projects and integrate our platform into their enterprise ecosystem.
You will be responsible for stitching together the architectural landscape, from data acquisition, ingestion, and transformation through loading into the target data warehouses as data marts, per requirements. You will also define how the curated data is consumed by downstream applications to meet business needs, whether as Management Information Systems or analytics solutions. As a solution architect, you will build architectures and coordinate with other architects to deliver end-to-end prescriptive guidance across network, storage, operating systems, virtualization, RDBMS and NoSQL databases, and mid-tier technologies, including application integration, in-memory caches, and security.
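The acquisition-to-datamart flow described above can be sketched in miniature. The following is a minimal, illustrative Python example, not a production pipeline: the source records, table name, and columns are all hypothetical, and an in-memory SQLite database stands in for a real data warehouse.

```python
import sqlite3

# --- Extract: raw records as they might arrive from an ingestion layer ---
# (hypothetical sample data standing in for a real data source)
raw_orders = [
    {"order_id": 1, "region": "EMEA", "amount": "120.50"},
    {"order_id": 2, "region": "APAC", "amount": "80.00"},
    {"order_id": 3, "region": "EMEA", "amount": "40.25"},
]

# --- Transform: cast string amounts to numbers and aggregate per region ---
revenue_by_region = {}
for row in raw_orders:
    revenue_by_region[row["region"]] = (
        revenue_by_region.get(row["region"], 0.0) + float(row["amount"])
    )

# --- Load: write the curated result into a datamart table ---
conn = sqlite3.connect(":memory:")  # stand-in for a real warehouse connection
conn.execute("CREATE TABLE mart_revenue (region TEXT PRIMARY KEY, revenue REAL)")
conn.executemany(
    "INSERT INTO mart_revenue VALUES (?, ?)", revenue_by_region.items()
)

# Downstream consumers (MIS reports, dashboards) query the datamart:
for region, revenue in conn.execute(
    "SELECT region, revenue FROM mart_revenue ORDER BY region"
):
    print(region, revenue)
```

In practice each stage would be a distinct, scalable component (e.g. Kafka for ingestion, Spark for transformation, a warehouse for the datamart), but the extract/transform/load boundaries remain the same.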
Required Professional and Technical Expertise:
- At least 12 years of (consulting) experience focused on Data and Analytics.
- A good understanding of data warehousing, ETL, complex event processing, data engineering, Big Data principles, data visualization, data science, Business Intelligence, analytics products, etc.
- Experience working in hybrid cloud environments and exposure to Big Data frameworks is a must.
- Extensive development expertise in Spark and other Big Data processing frameworks (Hadoop, Storm, Kafka, etc.)
Preferred Professional and Technical Expertise:
- Working knowledge of other BI / Analytics / Big Data tools (IBM Cognos, QlikView, Hortonworks, Cloudera, Azure Data Factory, Automation Anywhere, Blue Prism) is a plus.
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Knowledge of various ETL techniques and frameworks, such as Flume, and stream processing systems like Storm or Spark Streaming.