QB CSE3348 Generative AI-1
Question Bank
S.No Module # Question Bloom's Level Indicative Marks
1 1 Identify key milestones in the development of generative models. L1 2
2 1 State the evolution and applications of generative models. L1 2
3 1 Identify various fields where generative models are applied. L1 2
4 1 Apply real-world case studies to identify challenges and solutions in applying generative models. L3 10
5 1 Generalize the basic concepts of LLMs. L1 2
6 1 Implement small-scale versions of generative models using Python and TensorFlow. L3 10
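An indicative answer sketch for Q6. To stay self-contained it uses pure Python (no TensorFlow) and the simplest possible generative model, a character-level Markov chain; the corpus and function names are illustrative assumptions, not part of the syllabus.

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Count which characters follow each context of `order` characters."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=30, rng=None):
    """Sample one character at a time, conditioning on the last `order` chars."""
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    order, out = len(seed), seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:             # unseen context: stop generating
            break
        out += rng.choice(choices)
    return out

# Toy corpus; a real exercise would train on a larger text.
corpus = "the cat sat on the mat and the cat ran to the hat "
model = train_markov(corpus, order=2)
print(generate(model, "th", length=30))
```

The same "learn a distribution, then sample from it" structure underlies the neural generative models covered later; TensorFlow replaces the frequency table with learned parameters.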
7 1 Identify the strengths and weaknesses of different generative models. L1 2
8 1 Identify the strengths and weaknesses of generative models to determine suitable applications. L1 2
9 1 List common attributes and differences between various generative models. L1 2
10 1 Illustrate the performance and applicability of different generative models in specific scenarios. L3 10
11 1 Define the concept of prompt engineering in generative AI. L1 2
12 1 Express the significance of prompt engineering in optimizing generative AI models. L2 5
13 2 Define the architecture and working principles of RNNs and LSTMs. L1 2
14 2 Relate the limitations of RNNs and LSTMs in sequence generation tasks and propose alternatives. L3 10
15 2 Define the architecture and key components of transformer models. L1 2
16 2 Evaluate the performance of transformer models over RNNs and other sequence generation models. L3 10
17 2 Define the architecture and key components of transformer models: BERT, GPT. L1 2
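An indicative sketch for Q17: the key component shared by BERT and GPT is scaled dot-product attention. This minimal NumPy version is an assumption-laden illustration (single head, no masking, no learned projections), not a full transformer layer.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights         # weighted mix of values, plus the weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # one value vector per key
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # each query position gets a d_k-dim output
```

GPT applies this with a causal mask (each position attends only to earlier ones), while BERT attends bidirectionally; both stack many such layers with learned Q/K/V projections.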
18 2 Evaluate the performance of the transformer models BERT and GPT over other sequence generation models. L3 10
19 2 Describe the process of fine-tuning pre-trained LLMs for generative tasks. L1 2
20 2 Estimate the effectiveness of fine-tuning in improving performance on specific generative tasks. L2 5
21 2 Discuss the role of OpenAI’s pre-trained transformers, such as ChatGPT, GPT-3.5, and GPT-4, in text generation tasks. L2 5
22 2 Evaluate the performance and capabilities of OpenAI’s pre-trained transformers in text generation. L3 10
23 2 Discuss the concept of LLMs with its limitations. L2 5
24 2 Explain the performance and limitations of LLMs with regard to hallucination risks. L2 5
25 2 Classify the techniques of chain and retrieval augmentation in LLMs. L2 5
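An indicative sketch for Q25 of how retrieval augmentation works: retrieve relevant documents, then chain the retrieved context into the prompt before calling the LLM. The word-overlap retriever and prompt template below are simplifying assumptions standing in for a real vector store and chain framework.

```python
def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (a stand-in for
    embedding-based similarity search in a vector store)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Chain step: splice the retrieved context into the prompt that
    would be sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "LangChain composes LLM calls into chains.",
    "GANs pit a generator against a discriminator.",
]
prompt = build_prompt("What does LangChain compose?", docs)
print(prompt)
```

Grounding the prompt in retrieved text is also the standard mitigation for the hallucination risks raised in Q24.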
26 2 Identify strategies to mitigate common challenges encountered in LLM applications. L2 5
27 2 Review the impact of workflow on advancing the state of the art in LLM applications. L2 5
28 3 Identify the variants of LangChain in generative models. L1 2
29 3 Classify the LangChain types with strengths and weaknesses to determine appropriate generative models. L2 5
30 3 Explain the key components of LangChain. L1 2
31 3 Classify the variant components of LangChain. L2 5
32 3 Generalize the concept of information retrieval methodologies. L1 2
33 3 Estimate the impact of agents on information retrieval and the tools that demonstrate this impact. L2 5
34 3 Discuss the process of RALM in various artistic creative applications. L2 5
35 3 Analyse strategies to mitigate common misunderstandings of RALM and vectors. L2 5
36 3 Discuss the concepts of embeddings in RALM. L2 5
37 3 Explain the role of embeddings in language models. L2 5
38 3 Explain the concept of RALM for information retrieval. L2 5
39 3 Evaluate the strategies for representing storage and indexing, and the impact of vector libraries in language models. L3 10
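An indicative sketch for Q39: what a vector library stores and accelerates. The brute-force cosine search below is the baseline that libraries such as FAISS speed up with approximate indexes; the embedding dimensions and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Storage: 100 stored embeddings of dimension 8, L2-normalised so that
# a dot product equals cosine similarity.
index = rng.normal(size=(100, 8))
index /= np.linalg.norm(index, axis=1, keepdims=True)

def nearest(query, index, k=3):
    """Brute-force cosine search; vector libraries replace this linear
    scan with approximate nearest-neighbour indexing."""
    q = query / np.linalg.norm(query)
    sims = index @ q                 # cosine similarity to every stored vector
    top = np.argsort(-sims)[:k]      # indices of the k most similar vectors
    return top, sims[top]

# A slightly perturbed copy of stored vector 42 should retrieve vector 42.
ids, sims = nearest(index[42] + 0.01 * rng.normal(size=8), index)
print(ids, sims)
```

In a RALM pipeline these stored vectors are document embeddings, and the retrieved ids map back to the text chunks spliced into the prompt.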
40 3 Describe standard practices and guidelines for using chatbots. L2 5
41 3 Evaluate the ethical considerations of using memory and conversational buffers in chatbots. L3 10
42 4 Define the components of GAN architecture, including the Generator and Discriminator. L1 2
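An indicative sketch for Q42 of the two GAN components and their adversarial objective. The single linear layer and logistic discriminator are deliberate simplifications (real GANs use deep networks and alternate gradient updates); all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: maps 2-d noise z to a 3-d "fake" sample via a linear layer.
W_g = rng.normal(size=(2, 3))
def generator(z):
    return z @ W_g

# Discriminator: logistic classifier scoring how "real" a sample looks.
w_d = rng.normal(size=3)
def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))

z = rng.normal(size=(4, 2))     # a batch of noise vectors
fake = generator(z)             # (4, 3) generated samples
scores = discriminator(fake)    # probabilities in (0, 1)

# Adversarial interplay: D is trained to push D(fake) toward 0 and
# D(real) toward 1, while G is trained to push D(fake) toward 1.
g_loss = -np.log(scores + 1e-9).mean()
print(fake.shape, scores.shape, float(g_loss))
```

Training alternates between the two losses until the Generator's samples become hard for the Discriminator to distinguish from real data, which is the interplay asked about in Q43.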
43 4 Analyse the interplay between the Generator and Discriminator in GAN architecture. L2 5
44 4 Discuss the concept of style transfer using GANs for image manipulation. L2 5
45 4 Express the effectiveness of GANs in achieving style transfer across different domains. L2 5
46 4 Identify the various applications of GANs in image and text generation. L1 2
47 4 Predict the impact of GANs on advancing the state of the art in image and text generation. L3 10
48 4 Discuss the concept of VAEs and their variants. L2 5
49 4 Explain the trade-offs between VAEs and other generative models in terms of latent space representation and model complexity. L2 5
50 4 Discuss the underlying architecture, its components, and their role in generative modelling. L2 5
51 4 Analyse the role of various tools in learning latent text-to-image representations and generating outputs using Stable Diffusion. L2 5
52 4 Outline the training process and optimization techniques for text-to-image generation. L2 5
53 4 Practice parameter tuning and training for text-to-image generation using DALL·E and Midjourney. L3 10
54 4 State the significance of image-to-image generation in training generative models. L2 5
55 4 Manipulate custom models to train for image-to-image generation tasks. L3 10
56 4 Define multi-modal generative models using Whisper. L1 2
57 4 Implement speech-to-text generation models using Whisper. L3 10